Epidemic of Mycoplasma pneumoniae infection in Denmark, 2010 and 2011

S A Uldum (su@ssi.dk)1, J M Bangsborg2, B Gahrn-Hansen3, R Ljung4, M Mølvadgaard5, R Føns Petersen1, C Wiid Svarrer1

1. Statens Serum Institut, Copenhagen, Denmark
2. Department of Clinical Microbiology, Herlev University Hospital, Herlev, Denmark
3. Department of Clinical Microbiology, Odense University Hospital, Odense, Denmark
4. Department of Clinical Microbiology, Nordsjællands Hospital, Hillerød, Denmark
5. Department of Clinical Microbiology, Aalborg Sygehus Syd, Aalborg, Denmark

Epidemics of Mycoplasma pneumoniae infection are normally seen at intervals of four to seven years [1,2]. In some cases, simultaneous epidemics are seen in more than one country. In 2010, Denmark [1], England and Wales [2], Sweden [3] and Finland [4] reported more cases of M. pneumoniae infection than normal. In autumn 2011, reports from Norway [5], Sweden [3], the Netherlands [6] and Finland [4] indicated an epidemic of M. pneumoniae infection in the northern part of Europe. In Denmark, we have also seen a rise in the number of M. pneumoniae cases during autumn 2011.

The surveillance of M. pneumoniae in Denmark has been described previously [1]. The system is based on laboratory data from Statens Serum Institut (SSI). SSI receives samples (almost an equal number of blood/serum samples for serology and respiratory samples for PCR) from hospitals and general practitioners for routine diagnosis. The diagnosis and surveillance of M. pneumoniae infection were previously based on serology, but since the beginning of the 1990s PCR has been used as a routine test at SSI for rapid and early diagnosis of M. pneumoniae. A rise in the rate of PCR positive samples at SSI from <5% to 15% or more is considered indicative of an epidemic [1]. During the last decade, the diagnosis of M. pneumoniae has moved from SSI to local hospital laboratories, which have also progressively introduced PCR as a routine diagnostic test for M. pneumoniae. In the beginning of October 2010, SSI saw an increase in the proportion of positive samples above the threshold (>15%) [7] (Figure 1). This tendency was confirmed by data from hospital laboratories in Denmark, and in November 2010 Denmark reported a nation-wide increase in the number and proportion of M. pneumoniae PCR positive samples [1]. According to SSI laboratory data, the epidemic peaked in mid-December 2010, and the number of cases decreased rapidly during the rest of December and in January 2011. The number of cases seemed to return to a normal level during spring and early summer 2011 (Figure 1). An increase was observed again in late summer and early autumn 2011 [8]. This prompted SSI to contact a selection of local laboratories all over the country, with a request to submit laboratory data for M. pneumoniae PCR on a weekly basis for 2011, to monitor whether the rise could be confirmed and whether it was nation-wide. The laboratories were selected to cover and represent most of the country: the eastern part (the Capital Region and Zealand), the mid-south (Funen) and the north-western part (Northern Jutland).

Macrolide resistance in M. pneumoniae is a growing problem, especially in East Asia, but it is also seen in the United States and Europe [9]. During an epidemic of M. pneumoniae, macrolide consumption is known to increase considerably [10,11].
In December 2010, Denmark saw the highest consumption in a single month (3.9 defined daily doses (DDD)/1,000 population) compared with the consumption in December during the previous nine years (2.5 DDD/1,000 population on average). According to provisional data, the consumption in November 2011 was the highest for the month of November (3.6 DDD/1,000 population) compared with the last 10 years (2.4 DDD/1,000 population on average for November between 2001 and 2010) (personal communication, Maja Laursen, the Danish Medicines Agency, January 2012).

Laboratory investigation

SSI is situated in the Capital Region of Denmark and receives samples predominantly from the Capital Region and Region Zealand. To further investigate whether the rise in the absolute number and in the proportion of positive tests was seen all over the country, the institute received and analysed weekly data from four hospital laboratories (North Denmark Region, Region of Southern Denmark and two laboratories from the Capital Region).

To compare the year 2009 (no epidemic) with the two epidemic years (2010 and 2011), SSI requested in January 2012 results for the period from 2009 to 2011. Data for the whole period were provided by two hospital laboratories (North and Capital 1) and by SSI. The South Denmark Region laboratory provided data for 20 September 2010 (week 38) to 31 December 2011 (week 52), and the Capital 2 laboratory provided data for 29 August 2011 (week 35) to 31 December 2011 (week 52). Capital 2 also provided data for the epidemic period in 2010, but only for eight weeks (25 October to 19 December 2010) and not on a weekly basis but in an aggregated form (Table). The number of positive samples per week from each laboratory is presented in Figure 2. Both waves of the M. pneumoniae epidemic were seen in the whole country almost simultaneously (Figure 2).

To compare the two epidemic periods, data for the same period (week 43 to week 50) in the two years from the five laboratories are presented in the Table. The peak periods of both epidemic waves were within the selected eight weeks. Almost twice the number of positive samples (1.9 times) were detected in 2011 compared with 2010, but the number of samples investigated was also almost twice as high (1.8 times) in 2011 as in 2010. The proportion of positive samples was in general equal during both waves (on average 15%-16.3%), but for the North Denmark Region the rate was higher in 2011 (17.3%) than in 2010 (14.5%), despite the fact that more than double the number of samples (2.6 times) were tested (Table).

In 2010, the five laboratories diagnosed approximately 70% of all cases in Denmark; assuming that this also applies for 2011, it can be estimated that more than 4,600 cases were diagnosed in Denmark (the country's population counts 5.5 million inhabitants) during the eight-week period from 24 October to 18 December 2011. This corresponds to an incidence of approximately 10 new PCR-diagnosed cases per 100,000 population per week in Denmark. In the North Denmark Region, one laboratory received all samples from the region for M. pneumoniae PCR.
The population size of the region is 580,000, and 125 samples on average were positive per week (Table), giving an estimated incidence of more than 20 new cases per 100,000 population per week. In 2010, the estimated incidence for this region was only eight per 100,000 population per week. The diagnostic activity for this region was almost 1 per 100 population during the eight-week period. The diagnostic activity for the whole country can be estimated from the figures in the Table. If we consider the five laboratories as representing 70% of the diagnostic activity, approximately five persons per 1,000 population were investigated during the eight weeks.

At SSI, we also investigated the prevalence of macrolide resistance for both 2010 and 2011. Macrolide resistance-associated mutations in the 23S rRNA gene were identified with a sequencing technique developed at SSI. The technique can be performed directly on DNA purified from PCR positive samples [12]. We did a survey on 140 PCR positive samples consecutively received at SSI during late September and early October 2010 (the beginning of the first wave) and on 108 PCR positive samples consecutively received in January 2011 (the end of the first wave). During the second wave in 2011, we investigated 117 PCR positive samples received in late October and in the beginning of November, representing the beginning of the 2011 wave. In the first wave we found two (1.4%) and three (2.9%) mutations, respectively, and in the second wave we found only one sample with a mutation (0.9%). Data for PCR positive samples from January 2012 (the end of the second wave) are currently unavailable.

Discussion and conclusions

In two successive years, Denmark experienced a high number of M. pneumoniae infections during autumn and early winter. The situation can be characterised as one epidemic consisting of two waves. Epidemics spanning two autumn/winter seasons were also seen in Denmark in 1962 to 1964, in 1971 to 1973 and to some degree also in 2004 to 2006 [1]. The total number of PCR positive samples in 2011 was twice the number in 2010, but the number of investigated samples was also twice as high in 2011 compared with 2010 (Table). We are unable to determine whether this reflects a true increase in the number of cases from the 2010 wave to the 2011 wave or an increase in awareness among the public and among physicians. However, as the proportion of positive samples was almost equal during the two periods, it is reasonable to assume that the two waves were of almost equal size, although the duration of the 2011/12 wave seems to be longer, with a more gradual decline than the 2010 wave (Figure 1). It seems obvious, however, that the 2011 wave was more extensive than the 2010 wave in the North Denmark Region, and it also seems likely that this region was more affected by the second wave than the rest of the country. Although there are differences between the regions, both waves hit the whole country almost simultaneously (Table and Figure 2). The incidence and diagnostic activity for the other regions cannot be estimated, as we do not know the population base for the other laboratories. The diagnostic activity for the whole country (5 per 1,000 population) can only be estimated under the assumption that the five laboratories represent 70% of the diagnostic activity during the epidemic. However, a diagnostic activity of approximately 1 per 100 population in the North Denmark Region during the eight-week period in 2011 can be considered high.
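The incidence and diagnostic-activity estimates above follow from simple proportions; a minimal sketch of that arithmetic, using the figures quoted in the text (more than 4,600 cases over eight weeks, 5.5 million inhabitants, and 580,000 inhabitants with ~125 positives per week in the North Denmark Region), is shown below. The 70% laboratory coverage is the authors' stated assumption, and the sample count used for the diagnostic-activity line is a hypothetical value consistent with the ~1 per 100 figure quoted, not a number from the Table.

```python
# Back-of-the-envelope reproduction of the incidence estimates in the text.

def weekly_incidence_per_100k(cases, population, weeks):
    """Average number of new cases per 100,000 population per week."""
    return cases / weeks / population * 100_000

# National estimate: >4,600 PCR-diagnosed cases in 8 weeks, 5.5 million inhabitants.
national = weekly_incidence_per_100k(cases=4_600, population=5_500_000, weeks=8)

# North Denmark Region: ~125 positive samples per week, 580,000 inhabitants.
north = weekly_incidence_per_100k(cases=125 * 8, population=580_000, weeks=8)

# Diagnostic activity: persons tested per population over the period.
def diagnostic_activity(samples_tested, population):
    return samples_tested / population

print(f"national incidence        ~{national:.1f} per 100,000 per week")   # ~10
print(f"North Denmark Region      ~{north:.1f} per 100,000 per week")      # ~22
# Hypothetical sample count consistent with the ~1 per 100 quoted for the region:
print(f"diagnostic activity       ~{diagnostic_activity(5_800, 580_000):.3f} per person")
```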
The estimated average incidence of PCR-diagnosed cases during the epidemic in 2011 was approximately 10 new cases per 100,000 population per week; this is probably a vast underestimation of the real number of cases of M. pneumoniae infection during this period, as many patients with mild symptoms will not consult their general practitioner, and only a fraction of patients who visit a practitioner will have samples collected for M. pneumoniae PCR.

Although the consumption of macrolides is high during an epidemic of M. pneumoniae, it does not seem to influence the prevalence of macrolide resistance in M. pneumoniae. This is in contrast to other respiratory pathogens, such as Streptococcus pneumoniae, where resistance is closely linked to increased macrolide use [13]. This link was also observed following a previous Danish M. pneumoniae epidemic in 1998/99 [11]. However, we still need to investigate samples collected in January 2012 before any categorical statement on M. pneumoniae susceptibility to macrolides can be made. Macrolide resistance in M. pneumoniae may be characterised as low in Denmark, as there is still no general problem, but in specific cases, macrolide resistance can lead to relapse and prolonged disease [12].

We believe that it is important to have a national surveillance system for monitoring both the prevalence of the disease and macrolide resistance in M. pneumoniae.

Figure 2. Number of PCR positive samples from five selected laboratories in Denmark, 2009 to 2011.

Table. Number and proportion of Mycoplasma pneumoniae samples tested by PCR at five laboratories, Denmark, 25 October (week 43) to 19 December (week 50) 2010 and 24 October (week 43) to 18 December (week 50) 2011. a: SSI: Statens Serum Institut.
Multipolar magnetism in d-orbital systems: Crystal field levels, octupolar order, and orbital loop currents Sreekar Voleti, D. D. Maharaj, B. D. Gaulin, 3, 4 Graeme Luke, 3, 5 and A. Paramekanti ∗ Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 1A7 Canada Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 Canada Brockhouse Institute for Materials Research, McMaster University, Hamilton, ON L8S 4M1 Canada Canadian Institute for Advanced Research, 661 University Ave., Toronto, ON M5G 1M1 Canada TRIUMF, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3, Canada (Dated: February 17, 2020) Quantum magnets with spin J = 2, which arise in spin-orbit coupled Mott insulators, can potentially display multipolar orders. Motivated by gaining a better microscopic understanding of the local physics of such d-orbital quantum magnets, we carry out an exact diagonalization study of a simple octahedral crystal field Hamiltonian for two electrons, incorporating spin-orbit coupling (SOC) and interactions. While the rotationally invariant Kanamori interaction in the t2g sector leads to a five-fold degenerate J = 2 manifold, we find that either explicitly including the eg orbitals, or going beyond the rotationally invariant Coulomb interaction within the t2g sector, causes a degeneracy breaking of the J = 2 levels. This can lead to a low-lying non-Kramers doublet carrying quadrupolar and octupolar moments and an excited triplet which supports magnetic dipole moments, bolstering our previous phenomenological proposal for the stabilization of ferro-octupolar order in heavy transition metal oxides. We show that the spontaneous time-reversal symmetry breaking due to ferro-octupolar ordering within the non-Kramers doublet leads to electronic orbital loop currents. The resulting internal magnetic fields can potentially explain the small fields inferred from muon-spin relaxation (µSR) experiments on cubic 5d 2 osmate double perovskites Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6, which were previously attributed to weak dipolar magnetism. We make further predictions for oxygen NMR experiments on these materials. We also study the reversed level scheme, where the J = 2 multiplet splits into a low-lying magnetic triplet and excited non-Kramers doublet, presenting single-ion results for the magnetic susceptibility in this case, and pointing out its possible relevance for the rhenate Ba2YReO6. Our work highlights the intimate connection between the physics of heavy transition metal oxides and that of f -electron based heavy fermion compounds. Multipolar orders have been proposed and discussed extensively in f -orbital based heavy fermion compounds 1-14 . Such multipolar orders have also been proposed to occur in d-orbital metals with large spinorbit coupling (SOC), such as LiOsO 3 and Cd 2 Re 2 O 7 , via Pomeranchuk instabilities of the Fermi liquid 15 . Optical second-harmonic generation experiments on Cd 2 Re 2 O 7 have found evidence for such an inversion broken quadrupolar ordered state below T c ∼ 200 K 16 . Other candidates for multipolar orders include proposed quadrupolar order in A 2 OsO 4 (with A = K,Rb,Cs) 17 . In recent work, we have studied d-orbital Mott insulators with large SOC and a d 2 configuration in a local octahedral environment, and proposed these systems as candidates for realizing ferro-octupolar order 18,19 . 
Previous studies of such d2 quantum magnets [20-22] have argued that the combination of crystal field and interaction effects leads to the stabilization of a state with total L = 1 and S = 1, which are locked by SOC into a J = 2 spin. Motivated by experiments [18,23-26] on certain cubic double perovskite (DP) Mott insulators, Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6, which host a 5d2 configuration on Os, we have instead proposed [19] that their observed nontrivial phenomenology, such as entropy and a spin gap, could be captured by assuming that the five-fold J = 2 multiplet is weakly split, resulting in a ground state non-Kramers doublet carrying quadrupolar and octupolar moments. The lack of any observed crystal distortions in X-ray and neutron diffraction experiments appears to rule out quadrupolar order [18]. Uniform ferro-octupolar ordering in the low-lying doublet manifold then provides the most viable route to reconciling the cubic symmetry, the observation of time-reversal symmetry breaking seen via µSR oscillations [23], and the apparent lack of any magnetic Bragg peaks in elastic neutron diffraction experiments [18].

In this paper, we provide further theoretical calculations in favor of the above scenario. We first present exact diagonalization results on a simple local crystal field Hamiltonian keeping the t2g and eg levels in an octahedral environment, showing that the combination of SOC and interactions does favor a non-Kramers ground state doublet. We show how the splitting between this doublet and the excited magnetic triplet depends on SOC and the Hund's coupling, and that it results from perturbative t2g-eg mixing. Such t2g-eg mixing was discussed previously, but its importance for the low energy physics appears not to have been properly recognized [21,27]. We also examine a model of just t2g electronic states, and show that deviations of the Coulomb interaction from spherical symmetry, perhaps engendered by hybridization with oxygen orbitals [28], can lead to a similar non-Kramers doublet state. This doublet-triplet splitting may be too small to be resolved using resonant inelastic X-ray scattering experiments [29,30], but it is crucial for the low energy symmetry-breaking orders. We study the impact of ferro-octupolar order within this low energy non-Kramers doublet, and show that this leads to orbital electronic currents, generating internal magnetic fields that semi-quantitatively explain the µSR oscillations seen in Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6. The non-spherical Coulomb interaction mechanism for splitting the J = 2 multiplet discussed above also allows for the possibility that the level ordering is reversed, with a magnetic triplet ground state and an excited non-Kramers doublet. We present single-ion results for the magnetic susceptibility and entropy in this case, arguing that this reversed level scheme is likely to be relevant to the 5d2 rhenate Ba2YReO6 [31]. Our theory strengthens the case for multipolar orders in a class of d-orbital Mott insulators, pointing to a smooth conceptual link between the physics of heavy d-orbital oxides and f-electron based heavy fermion materials. Such octupolar order with a high transition temperature may provide a new template to store information.
I. LOCAL MODEL

We use the following Hamiltonian for two electrons in d-orbitals placed in an octahedral environment, H = H_CEF + H_SOC + H_int, where we include the octahedral crystal field splitting, SOC, and Kanamori interactions, written in the orbital basis ({yz, xz, xy}, {x²−y², 3z²−r²}) ↔ ({1, 2, 3}, {4, 5}), where α ∈ {1, 2, 3} label t2g orbitals and α ∈ {4, 5} label eg orbitals. The CEF term assigns an energy Δ to each electron of spin s occupying an eg orbital, with the t2g levels taken as the zero of energy. The SOC term couples the orbital angular momentum L to the electron spin with strength λ, where σ refers to the vector of Pauli matrices; the components of L in this orbital basis are given in Appendix A. We assume a Kanamori interaction for all five d-orbitals, parametrized by the intra-orbital repulsion U, the inter-orbital repulsion U', and the Hund's coupling JH, where S_α = (1/2) Σ_{ss'} c†_{αs} σ_{ss'} c_{αs'} is the spin operator in orbital α. This simple form, in which we use the same interaction parameters for all t2g and eg orbitals, is a simplifying assumption.

For electronic configurations with partially filled t2g orbitals, the most commonly used approach is to simply ignore the eg orbitals and restrict attention to the low-energy t2g states. We find that the ground state manifold in this approximation consists of a five-fold degenerate J = 2 state. However, we show below that this degeneracy is further split due to two possible microscopic mechanisms: t2g-eg mixing and deviations of the Coulomb interaction from spherical symmetry.

A. t2g-eg mixing: Exact results, perturbation theory

We consider two electrons in the full d-orbital manifold including t2g and eg states, and study this using numerical exact diagonalization in the 45 basis states. For coupling constants, we use values typical for 5d transition metal oxides: Δ = 3 eV, U = 2.5 eV, λ = 0.4 eV, and JH = 0.25 eV. Fig. 1 plots the evolution with JH of the lowest 15 energy levels, which correspond to eigenstates where the two electrons are predominantly both in the t2g sector. The indicated numbers mark the degeneracies of these multiplets. For JH = 0, there are just three energy levels, which, in increasing order of energy, correspond to having (i) both electrons in j = 1/2, (ii) one electron in j = 1/2 and one electron in j = 3/2 (energy cost 3λ/2), and (iii) both electrons in j = 3/2 (energy cost 3λ). We see that the lowest energy set of 5 states evolves adiabatically out of the first sector as we increase JH; this set of five states corresponds to the J = 2 moment. However, a zoom-in on this multiplet, as well as on one of the higher energy multiplets, shows that the apparent five-fold degeneracy of these states is actually weakly broken as 2⊕3 due to weak t2g-eg mixing. In particular, the naively expected five-fold degenerate J = 2 ground state is split into a non-Kramers doublet ground state and an excited magnetic triplet; for the typical values listed above, this splitting is ∼8 meV. Fig. 2 shows the dependence of this lowest-energy doublet-triplet splitting (blue solid line) on Δ. We find that this splitting can be semi-quantitatively captured within third-order perturbation theory, as discussed in Appendix B, where we first eliminate the eg states to obtain an effective t2g model and then diagonalize this reduced Hamiltonian. The relevant terms arise at O(λ²JH/Δ²), from the following sequence: (i) SOC λ promoting one electron from the t2g manifold into the eg sector, (ii) intermediate-state t2g-eg interactions driven by the Hund's coupling JH, and finally (iii) de-exciting back via SOC λ to end up with both electrons in the t2g manifold.
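For orientation, the standard forms that these three terms conventionally take, written in the notation defined above, are sketched below. The coefficient conventions, in particular the U, U', JH structure of the Kanamori term with U' = U − 2JH in the spherically symmetric limit, are the usual textbook ones and should be read as an assumption here rather than as a quotation of the paper's display equations.

```latex
% Standard forms of the local Hamiltonian pieces (conventions assumed):
% \alpha,\beta label the five d orbitals, s,s' the spin projections, and
% U' = U - 2J_H in the rotationally invariant (spherical) limit.
\begin{align}
  H_{\rm CEF} &= \Delta \sum_{\alpha \in \{4,5\}} \sum_{s}
      c^{\dagger}_{\alpha s} c^{\phantom\dagger}_{\alpha s} , \\
  H_{\rm SOC} &= \frac{\lambda}{2} \sum_{\alpha\beta} \sum_{s s'}
      c^{\dagger}_{\alpha s}\,
      \big( \mathbf{L}_{\alpha\beta} \cdot \boldsymbol{\sigma}_{s s'} \big)\,
      c^{\phantom\dagger}_{\beta s'} , \\
  H_{\rm int} &= U \sum_{\alpha} n_{\alpha\uparrow} n_{\alpha\downarrow}
      + U' \sum_{\alpha \neq \beta} n_{\alpha\uparrow} n_{\beta\downarrow}
      + (U' - J_H) \sum_{\alpha < \beta,\, s} n_{\alpha s} n_{\beta s} \nonumber \\
      &\quad
      - J_H \sum_{\alpha \neq \beta}
        c^{\dagger}_{\alpha\uparrow} c^{\phantom\dagger}_{\alpha\downarrow}
        c^{\dagger}_{\beta\downarrow} c^{\phantom\dagger}_{\beta\uparrow}
      + J_H \sum_{\alpha \neq \beta}
        c^{\dagger}_{\alpha\uparrow} c^{\dagger}_{\alpha\downarrow}
        c^{\phantom\dagger}_{\beta\downarrow} c^{\phantom\dagger}_{\beta\uparrow} .
\end{align}
```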
Diagonalizing this third-order perturbative Hamiltonian, in conjunction with the bare t2g Hund's coupling, leads to the non-negligible splitting shown (red dashed line) in Fig. 2, which agrees well with the full numerical calculation in the regime of large Δ. Our result is in contrast with a previous conjecture that the splitting would appear at fourth order in perturbation theory [21], which would have indeed rendered this effect negligible. This highlights a non-trivial effect of t2g-eg mixing, showing that it can be important for nucleating multipolar order in 5d Mott insulators. However, this effect by itself may be too small to account for the spin gap observed in neutron scattering experiments [18,19] on Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6. We next turn to an additional mechanism, which can cooperate to enhance this splitting, or perhaps even reverse the level ordering, which we argue is important in certain other materials.

B. Non-spherical Coulomb interactions in the t2g model

The second important physical effect we consider is that projecting the Coulomb interaction onto the t2g Wannier orbitals can lead to deviations from the spherical symmetry assumption, so that U' ≠ U − 2JH. This is expected to be more important for 5d orbitals, which have more significant overlap with the oxygen cage, as has been previously noted in an ab initio study [28]. We therefore numerically diagonalize the above model Hamiltonian, restricting ourselves to the Hilbert space where both electrons occupy the t2g orbitals, and varying δU' = U' − (U − 2JH) to simulate the deviation from spherical symmetry. Fig. 3 shows how the low energy degeneracy gets split as we move away from δU' = 0. We see from here that even a small deviation δU'/U ∼ 0.1 leads to a substantial splitting ∼20 meV. For δU' > 0, we find that the non-Kramers doublet is lower in energy than the magnetic triplet, which we argue is relevant to osmates such as Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6. The case where δU' < 0, so that the magnetic triplet lies lower in energy than the doublet, may be important for understanding aspects of the unusual magnetism of the rhenate Ba2YReO6 [31]; this will be discussed in Section III.

II. MAGNETIC FIELDS FROM OCTUPOLAR ORDER

On phenomenological grounds, and from the above microscopic calculations, 5d2 oxides are candidates for a low-lying non-Kramers doublet. As shown previously [19], this doublet may be described using wavefunctions of the J = 2 manifold, written in terms of |Jz⟩ eigenstates as pseudospin-1/2 states |ψg,↑⟩ and |ψg,↓⟩. Each of these two states is individually time-reversal invariant. The angular momentum operators (Jx² − Jy²) and (3Jz² − J²), restricted to this basis, act as pseudospin-1/2 operators (τx, τz), forming the two components of an XY-like quadrupolar order parameter, while the product JxJyJz (with an overline denoting symmetrization) behaves as τy and serves as the Ising-like octupolar order parameter. The mean-field ferro-octupolar ordered ground state is described by each site being in the superposition state |ψoct,±⟩ = |ψg,↑⟩ ± i|ψg,↓⟩. Here, the signs reflect the Z2 nature of the Ising order, and the 'i' reflects the breaking of time-reversal symmetry. The broken time-reversal symmetry of the octupolar ground state would lead to internal magnetic fields in the crystal.
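As a concrete illustration of how these operators act within a non-Kramers doublet, the following minimal numpy sketch builds the J = 2 matrices, projects the two quadrupoles and the symmetrized octupole onto a doublet, and prints the resulting 2×2 matrices. The explicit doublet wavefunctions are not reproduced in the text above, so the basis used here, the standard eg-type doublet of a J = 2 multiplet in a cubic environment, {(|2⟩ + |−2⟩)/√2, |0⟩}, is an illustrative assumption rather than a quotation of the paper.

```python
import numpy as np
from itertools import permutations

# J = 2 angular momentum matrices in the basis |Jz> = |2>, |1>, |0>, |-1>, |-2>.
J = 2
m = np.arange(J, -J - 1, -1)                     # [2, 1, 0, -1, -2]
Jz = np.diag(m).astype(complex)
Jp = np.zeros((5, 5), dtype=complex)             # raising operator J+
for i in range(1, 5):
    Jp[i - 1, i] = np.sqrt(J * (J + 1) - m[i] * (m[i] + 1))   # <m+1|J+|m>
Jm = Jp.conj().T
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / 2j

# Assumed non-Kramers doublet: psi_up = (|2> + |-2>)/sqrt(2), psi_dn = |0>.
psi_up = np.zeros(5, complex); psi_up[[0, 4]] = 1 / np.sqrt(2)
psi_dn = np.zeros(5, complex); psi_dn[2] = 1.0
P = np.column_stack([psi_up, psi_dn])            # 5x2 projector onto the doublet

O22 = Jx @ Jx - Jy @ Jy                          # quadrupole, acts as tau_x
O20 = 3 * Jz @ Jz - J * (J + 1) * np.eye(5)      # quadrupole, acts as tau_z
Txyz = sum(a @ b @ c                             # symmetrized JxJyJz, acts as tau_y
           for a, b, c in permutations([Jx, Jy, Jz])) / 6

for name, op in [("O22", O22), ("O20", O20), ("Txyz", Txyz)]:
    print(name, "projected onto the doublet:\n", np.round(P.conj().T @ op @ P, 3))
# O22 and O20 project to real Pauli-like matrices (tau_x, tau_z), while Txyz
# projects to a purely imaginary off-diagonal matrix, i.e. tau_y: the
# Ising-like octupolar order parameter described in the text.
```

In the ordered superposition |ψg,↑⟩ ± i|ψg,↓⟩ built from this assumed basis, the dipole operators Jx, Jy, Jz all have vanishing expectation values while the octupole does not, which is the sense in which the ordered state breaks time reversal without carrying a dipole moment.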
Using exact diagonalization, we obtain |ψoct,±⟩ as the two-electron wavefunction obtained by superposing the two degenerate time-reversal invariant ground eigenstates as above, and compute the electronic currents in these states which generate the internal magnetic fields. In the single-site picture, the orbital currents responsible for the internal fields live on the d2 ion. We thus define the orbital current density operator J(r) as the one-body charge current operator, summed over the physical electron spin s, and expand the field operator in the orbital basis as Ψ_s(r) = Σ_α ψ_{nℓα}(r) c_{αs}, where r ≡ (r, θ, φ), ψ_{nℓα} refers to the real hydrogen-like wavefunction, with n = 5 and ℓ = 2 for the 5d wavefunctions, and α denotes the orbital. We thus arrive at the spatially varying expectation value of the current density operator ⟨J(r)⟩, where the two Ising states have ⟨J(r)⟩− = −⟨J(r)⟩+. Here, Y_α(θ, φ) are real tesseral harmonics, and R_{nℓ}(r) is the radial wavefunction. To compute the current density, we use a variational ansatz for the radial wavefunction, which takes on a hydrogenic form, but with an effective nuclear charge which decreases with r, from a bare nuclear charge Z0 for r → 0 to the screened effective charge Z∞ for r → ∞, over a length scale r0. For the Os6+ ion relevant to Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6, we use Z0 = 76 and Z∞ = 7, and consider different values of r0; details are given in Appendix B. Using this expectation value for the current density, we compute the magnetic field via the Biot-Savart law, B(r) = (μ0/4π) ∫ d³r' ⟨J(r')⟩ × (r − r')/|r − r'|³, where the integral is carried out over the primed variables. The two Ising time-reversed partner states have opposite magnetic fields, B−(r) = −B+(r). The orbital current pattern which creates this field is shown schematically in Fig. 4 (left panel), highlighting that it is analogous to loop current orders proposed in certain cuprate and heavy fermion materials [33,34]. In a more realistic calculation, which retains hybridization with oxygen, the octupolar order we have uncovered may in fact be identical to plaquette loop current order in the OsO6 cage. We find that the magnetic field has a pattern which, appropriately, might be expected from a set of eight alternating "magnetic monopoles" arranged on a cube, as shown in Fig. 4 (right panel), to form an octupole centered on the Os6+ ion. Fig. 5 shows the magnetic field expected from these orbital currents as a function of distance from the Os6+ ion along the (111) direction, where the field strength is largest, for two different choices of r0 as indicated. Fig. 6 shows the same calculation, but with the field normalized by that generated by a 1μB dipole located at the Os6+ site.

FIG. 5. Magnetic field generated within the crystal in the presence of ferro-octupolar order, plotted as a function of distance from the 5d2 Os ion along the (111) direction. The two curves correspond to different choices of the screening parameter r0, which impacts the field only at short distances. The wiggles reflect the structure of the radial wavefunction.

FIG. 6. Magnetic field in the presence of ferro-octupolar order, plotted as a function of distance from the 5d2 Os ion along the (111) direction. The data are the same as in Fig. 5, but normalized by Bdip, which denotes the magnetic field at the same location generated by a 1μB dipole moment located at the origin and pointing along the (111) direction.
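Numerically, a field calculation of this type reduces to a discretized Biot-Savart sum over the current distribution; a minimal sketch is shown below. The function current_density here is a placeholder for whatever ⟨J(r)⟩ one has tabulated (the actual orbital-current expectation value of the paper is not reproduced in this text), so its form, the grid size, and the evaluation point are illustrative assumptions only.

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # SI: mu_0 / (4 pi) in T*m/A

def current_density(r_vec):
    """Placeholder for a tabulated <J(r)> in A/m^2. This toy, divergence-free
    circulating pattern is NOT the orbital current of the paper."""
    x, y, z = r_vec
    return np.array([-y, x, 0.0]) * np.exp(-np.dot(r_vec, r_vec) / 1e-20)

def biot_savart(r_obs, half_box=2e-10, n=40):
    """B(r_obs) = mu0/4pi * sum_i J(r_i) x (r_obs - r_i) / |r_obs - r_i|^3 dV."""
    xs = np.linspace(-half_box, half_box, n)
    dV = (xs[1] - xs[0]) ** 3
    B = np.zeros(3)
    for x in xs:
        for y in xs:
            for z in xs:
                rp = np.array([x, y, z])
                d = r_obs - rp
                dist = np.linalg.norm(d)
                if dist < 1e-12:          # skip a coincident (singular) cell
                    continue
                B += np.cross(current_density(rp), d) / dist**3 * dV
    return MU0_OVER_4PI * B

# Evaluate the field a couple of angstroms away along the (111) direction.
r = 2e-10 * np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
print(biot_savart(r), "T")
```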
In order to estimate the typical field produced by the octupolar order at a possible muon stopping site, we consider the field distribution over a sphere of radius 1 Å centered on the oxygen site, which is where the muon is expected to be bound [35,36]. Fig. 7 shows a schematic plot of this distribution, where we find the maximum field at points on this sphere located near the Os6+ ion. The computed maximum field is found to be ∼30 Gauss, which is within a factor of two of the ∼60 Gauss magnetic field inferred from µSR experiments on Ba2ZnOsO6, Ba2CaOsO6, and Ba2MgOsO6 below a transition temperature T*. This magnetic field was previously attributed to possible weak magnetic dipolar order, with a tiny ordered moment of 0.02µB. Such a tiny ordered moment is difficult to explain given the typical ∼1µB local moments expected in such Mott insulators, unless one is fine-tuned to be near a quantum critical point. Our work instead naturally explains this weak field as arising from loop currents in a phase which supports octupolar order.

FIG. 7. Schematic color plot of the ferro-octupolar magnetic field distribution over a sphere of radius 1 Å around the oxygen site, located half-way between Os and the B-site ion (Mg, Zn, Ca), where the muon is expected to be bound. The largest field strength (in red) appears near the Os6+ ion. Our calculation shows that this maximal value is ∼30 Gauss, comparable to the measured value of ∼60 Gauss from µSR experiments.

III. REVERSED LEVEL SCHEME: MAGNETIC TRIPLET GROUND STATE

In previous work and in the above sections, we have extensively explored the case where the J = 2 multiplet is split into a low-energy non-Kramers doublet and a spin-gapped magnetic triplet. In this section, we explore the single-ion physics of the reversed level scheme, which has also not been studied in the oxides literature. As an illustrative example of a model which leads to this level ordering, we explore the Hamiltonian in Eq. 1, but with δU' = U' − (U − 2JH) < 0, and projecting onto just the t2g orbitals. We note that this deviation is not necessarily the only way in which the Coulomb interaction can deviate from spherical symmetry; indeed, imposing only the octahedral point group symmetry will allow for a broader set of interactions. Fig. 8 shows the inverse magnetic susceptibility χ^{-1}(T) in this single-ion case, normalized by its value at T = 300 K, for a choice of parameters Δ = 3 eV, U = 2.5 eV, λ = 0.4 eV, and JH = 0.25 eV (as used in the previous sections), but with δU' = −0.5 eV. (This choice of an admittedly large δU' is only used in the simplest model to illustrate the impact of splitting the lowest-energy J = 2 multiplet; it is not meant to capture the full spectrum of higher-energy excitations.) This leads to a triplet ground state with an excited non-Kramers doublet at an energy ∼37 meV. Interestingly, we find that χ^{-1}(T) ∝ (T + Ts) in this case, exhibiting an apparent "Curie-Weiss"-like form with Ts ≈ 275 K over a wide range of temperatures down to ∼150 K. Based on this, one might misleadingly infer a Curie-Weiss temperature ∼ −275 K. Only upon going to lower temperatures do we observe a change of slope and the correct χ^{-1}(T) ∝ T Curie law associated with the single-ion low-energy magnetic triplet (a toy numerical illustration of this single-ion behaviour is sketched below).
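The sketch below illustrates the single-ion mechanism just described with a toy calculation: a J = 2 multiplet split by hand into a magnetic triplet ground state and a non-Kramers doublet 37 meV above it, with χ(T) obtained from the Boltzmann-weighted response to a small Zeeman field. The splitting operator, the g-factor, and the probe field are illustrative assumptions; this is not the paper's full microscopic model.

```python
import numpy as np

K_B = 8.617e-5          # eV / K
MU_B = 5.788e-5         # eV / T
G_J = 0.5               # illustrative g-factor for the J = 2 moment (assumption)
SPLIT = 0.037           # eV: doublet placed 37 meV above the magnetic triplet

# J = 2 operators in the |Jz> basis (|2>, ..., |-2>), as in the earlier sketch.
m = np.arange(2, -3, -1)
Jz = np.diag(m).astype(float)

# Toy level scheme: push the eg-type doublet {(|2>+|-2>)/sqrt(2), |0>} up by
# SPLIT, leaving the triplet at zero energy (an assumed splitting operator).
up = np.zeros(5); up[[0, 4]] = 1 / np.sqrt(2)
dn = np.zeros(5); dn[2] = 1.0
H0 = SPLIT * (np.outer(up, up) + np.outer(dn, dn))

def chi(T, h=1e-3):
    """Linear-response chi_zz(T) estimated as M(h)/h at a small field h (Tesla)."""
    H = H0 - G_J * MU_B * h * Jz
    E, V = np.linalg.eigh(H)
    w = np.exp(-(E - E.min()) / (K_B * T)); w /= w.sum()
    M = G_J * MU_B * np.sum(w * np.diag(V.T @ Jz @ V))
    return M / h

for T in (50, 100, 200, 300):
    print(f"T = {T:3d} K   1/chi (normalized to 300 K) = {chi(300) / chi(T):.2f}")
# At high T, 1/chi grows roughly linearly with an apparent offset (a spurious
# "Curie-Weiss" temperature); at low T it crosses over to the pure Curie law
# of the isolated magnetic triplet, as described in the text.
```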
We find a very similar result in an even simpler model where we split the J = 2 multiplet using symmetry-allowed Stevens operators, suggesting that it is a robust consequence of triplet-doublet splitting, with Ts reflecting single-ion physics. Remarkably, precisely such a behavior, with a Curie-Weiss-like form for χ^{-1}(T) and a break in slope on going below 150 K, has been observed in Ba2YReO6 [31], leading us to suspect that the experimentally reported large "Curie-Weiss" temperature ∼ −600 K may in fact be misleading, and could partly reflect this modified single-ion physics. The true Curie-Weiss temperature in this material may thus be much smaller, and likely closer to that seen in the d2 osmates discussed above. Our exploration thus serves to partly rationalize the widely diverging "Curie-Weiss" temperatures reported in this class of materials as arising from differences in the single-ion physics of different 5d ions. The nature and strength of exchange interactions between such magnetic ions will be discussed elsewhere, in the context of ongoing experiments on Ba2YReO6.

IV. DISCUSSION

We have shown that the physics of spin-orbit coupled J = 2 magnets can exhibit unconventional multipolar orders which emerge from a low energy non-Kramers doublet. This doublet arises from crystal field splitting of the J = 2 multiplet due to multiple physical effects: weak t2g-eg mixing as well as deviation of the Coulomb interaction from spherical symmetry. Ferro-octupolar ordering within this doublet, which can result from the interplay of magnetic exchange and orbital repulsion [19], provides the most viable explanation for the large body of experimental data, including the µSR oscillations, which we have shown result from orbital electronic currents. As a further test of our theory, we propose that nuclear magnetic resonance (NMR) studies on the oxygen site should show no sign of any internal fields below T* due to the octupolar structure, which is evident from the schematic plot in Fig. 4; specifically, the octupolar configuration in a cubic system is invariant under C4 rotations about the Os-O axis followed by time-reversal. This vanishing of the field in oxygen NMR would serve to further distinguish octupolar order from possible dipolar order, for which we do expect to see the internal field in the NMR spectrum. Applying uniaxial pressure along the (111) direction would break this C4 symmetry, leading to a nonzero field at the oxygen site which may be detectable by NMR. Our work makes a compelling case for octupolar order in a d-orbital Mott insulator. Future experimental studies using pressure or doping, to suppress the octupolar transition temperature and induce metallicity, may allow one to study possible non-Fermi liquid states associated with fluctuating multipolar orders [37]. Our work emphasizes the need for additional ab initio studies of 5d oxides at various filling factors to construct the appropriate Wannier functions in order to extract the local interaction Hamiltonian. In light of our work, it is also imperative to revisit the entire body of experiments on other 5d2 materials, such as Ba2YReO6, as well as 5d oxides at other filling factors.

The top left blocks in the above matrices show the t2g subspace, and it is clear that the angular momentum is completely quenched in the eg subspace (Appendix A).

V. ACKNOWLEDGMENTS

Appendix C: We take hydrogen-like radial wavefunctions of the form R_nl(r) = N_nl [ρ(r)]^l e^{−ρ(r)/2} L^{2l+1}_{n−l−1}(ρ(r)), where n is the principal quantum number, l is the angular momentum quantum number, and ρ(r) = 2r/(n a(r)).
L^{2l+1}_{n−l−1} is the generalized Laguerre polynomial, and N_nl is a normalization constant. a(r) is a function which captures the screening by the inner electrons, which we call the "effective" Bohr radius. The function must be chosen such that lim_{r→0} a(r) = a0/Z0 and lim_{r→∞} a(r) = a0/Z∞ (C2), where a0 is the (hydrogen) Bohr radius, Z0 is the bare charge of the nucleus, and Z∞ is the effective charge that an electron sees at large distances. We propose a simple interpolating form with r0 as a tuning parameter which determines how the effective charge falls off with distance. For instance, for an Os6+ ion, Z0 = 76 and Z∞ = 7 (since all electrons except the one 5d electron we focus on will contribute to screening at large distances). A reasonable value for r0 is one smaller than the ionic radius of ∼70 pm; we thus consider r0 = 10-20 pm. If we are interested in 5d electrons, the radial wavefunction is of the form R_52(r) = N_52 [2r/(5a(r))]² e^{−r/(5a(r))} L^5_2(2r/(5a(r))), where the normalization constant N_52 depends on r0. Hence the full wavefunction is given by ψ_{nlα}(r) = R_nl(r) Y_{lα}(θ, φ).
The Spiritual Supporter Scale as a New Tool for Assessing Spiritual Care Competencies in Professionals: Design, Validation, and Psychometric Evaluation This study aimed to design, validate and standardize the Spiritual Supporter (SpSup) Scale, a tool designed to assess competency to provide spiritual care including knowledge, sensitivity to spiritual needs and spiritual support skills. This instrument can be used by all those engaged in or training for caregiving roles. The study was conducted in Poland in the Polish language. The SpSup Scale demonstrates high overall reliability (Cronbach’s α = 0.88), a satisfactory diagnostic accuracy (0.79), and a satisfactory discriminatory power of the items. Given the psychometric properties of SpSup Scale demonstrated here, the scale is recommended for the assessment of the competency to provide spiritual care in both clinical and research settings in Poland. Introduction Spirituality has long been widely discussed in the caregiving professions as it relates to the provision of comprehensive care, and support for people in difficult situations (Bożek et al., 2020;Chow et al., 2021;Wells-Di et al., 2021;Younkin et al., 2021). While contemporary medical literature increasingly emphasises the need for a holistic approach to support patients and to recognise their somatic suffering in social, mental, and emotional terms (Muszala, 2017;Puchalski et al., 2013;Saunders, 1964), it is also important to consider the individual's spirituality (Sulmasy, 2002) by using the biopsychosocial-spiritual model of the human being (Balboni et al., 2014;. Patients report that they appreciate skill in spiritual care in, and the satisfaction of spiritual needs by, the professionals caring for them (Büssing et al., 2015;O'Callaghan et al., 2019). Spiritual care skills in healthcare professionals contribute to patients' satisfaction with treatment and care, well-being, and quality of life (Siddall et al., 2015) while reducing anxiety (Hughes et al., 2004) and depression (Bekelman et al., 2007). Patients are able to better cope with disease and have a more positive attitude despite deteriorating health (Brady et al., 1999;Whitford et al., 2008). The relationships between quality of life, coping with disease and receiving spiritual support confirm that spirituality is an essential dimension of patient care (Vandenhoeck, 2013). The importance of spiritual care has been illustrated in diverse groups including: the elderly (Oz et al., 2021), disabled people (Kaye & Raghavan, 2002); as well as oncology (Ben-Arye et al., 2006), psychiatry (Galanter et al., 2011), cardiology (Ozdemir et al., 2021), thoracic (Chen et al., 2021) and HIV-positive patients (Chang et al., 2018;Dalmida et al., 2015). Furthermore, interest in spiritual competencies has also been expressed in the fields of teaching (Epstein, 2018;Harbinson & Bell, 2015), psychotherapy (Mutter et al., 2010;Ren, 2012) and in training for other healthcare professions such as nursing and midwifery (Deluga et al., 2021;McSherry et al., 2021). In summary, gaining competence in and providing spiritual care is important for all professionals who are dealing with people who suffer. Need for a New Tool to Assess Competency in Spiritual Care If spirituality is implicated within the diagnosis and treatment of those experiencing suffering, it is important to ensure that staff are appropriately educated (Lucchetti et al., 2012;. 
In order to ensure relevant competencies, a validated tool that allows us to assess and confirm the skill level is needed. A review of the literature reveals many scales for the assessment of spiritual needs (Anandarajah & Hight, 2001; Best et al., 2020; Büssing et al., 2010, 2015; Groves & Klauser, 2009; Maugans, 1996; Neely, 2009; Puchalski, 2002; Ross & McSherry, 2018) and of spiritual care competencies, including the Spiritual Care Competency Scale (SCCS) for nurses (Frick et al., 2019; Pastrana et al., 2021), the Spiritual Care Competence Questionnaire (SCCQ) for various professions (Van Leeuwen et al., 2009), and the Servant Leadership and Spirituality Scales (Maglione & Neville, 2021). Tools that examine spirituality or religiousness as a phenomenon were also considered. The definition of spirituality adopted here (PTODM, 2021) encompasses, among other dimensions:

2. Existential quest, especially with regard to: the meaning of life, suffering, and death; issues of personal dignity and personhood; a sense of individual freedom and responsibility, hope and despair, reconciliation and forgiveness, love and joy;
3. Values by which a person lives, especially in relation to oneself and others, work, nature, art and culture, ethical and moral choices, and life at large.

Healthcare professionals should be aware of all these dimensions as a potential source of patients' coping in the face of death or spiritual suffering.

Study Objectives

In view of the broad potential application of spiritual care, we decided not to limit ourselves to medical professions but to design a tool that would be useful for all people engaged in or training for caregiving professions, for example medical and healthcare professionals, psychologists, and teachers. Our objective was the construction and validation of a tool to study:

1. Respondents' opinions on spirituality and their understanding of their own spirituality;
2. Attitude to spirituality in a relationship with a person in need of care and support;
3. The level of skills necessary to diagnose spiritual suffering in supported persons;
4. Respondents' readiness to provide spiritual support to those who suffer.

The proposed scale is intended for students and practitioners in the caregiving professions.

Study Design

The design, development and standardization of the SpSup Scale were carried out according to established standards for the development and psychometric validation of research scales and questionnaires (AERA APA, 2014; Boynton & Greenhalgh, 2004; Brzeziński, 1985; Dogan, 2016; Dufrene & Young, 2014; Koenig & Al Zaben, 2021; Rubacha, 2008; Sousa & Rojjanasrirat, 2011; Wild et al., 2005). Since we conducted our study, Koenig and Al Zaben (2021) have outlined the steps for the development and psychometric validation of a new scale in spirituality measurement, and we have followed most of them. The stages of scale development are described below in four phases: (1) generation of items; (2) cognitive debriefing of the scale; (3) validation and standardization (Study 1); and (4) validation and standardization (Study 2). The methodology is summarised in Fig. 1, and each phase is explained in full below. All analyses were conducted in SPSS. The next stage of validation is currently underway and will be reported in a future paper. As recommended by Koenig and Al Zaben (2021), the authors will compare the SpSup Scale to existing scales to assess construct validation.
Participants and Data Collection

The project was approved by the Ethics Committee at Collegium Medicum in Bydgoszcz of the Nicolaus Copernicus University in Toruń (number 736/2018) and conducted between 2017 and 2019. For Studies I and II of validation and standardization, the questionnaire containing the SpSup Scale was distributed to university students by oral invitation, online or as a printed document. In the case of teachers who participated in Study II of validation

Phase 1: Generation of Items

Potential items for the first draft version of the scale were based on the definitions of spirituality and its dimensions given above (theoretical and definition indicators) (Koenig & Al Zaben, 2021; Rubacha, 2008) and formulated by the research team. The researchers chose those items which corresponded most accurately to the definition of spirituality in medicine. This resulted in a provisional list of 104 items. The first draft of the scale was developed from this list, comprising 51 items organized into 3 subscales: Me and my beliefs about spirituality (15 items); My spirituality (15 items); My idea of a relationship with a person (as such) experiencing spiritual pain (21 items). The scale included a Likert scale (4 response options, from 'I strongly disagree with this statement' to 'I strongly agree') (Brzeziński, 2004), the instructions and the definition of spirituality.

Phase 2: Cognitive Debriefing of the Scale with 51 Items

According to the literature, cognitive debriefing is used with the target group for whom the scale is prepared, or with relevant experts (Sousa & Rojjanasrirat, 2011). The draft questionnaire was therefore assessed by an invited expert panel comprised of 14 members: psychologists (4), physicians (3), nurses (2) and students (5). They were asked to ensure the clarity of questions ("Are items clear and understandable for you?"), and to suggest possible paraphrasing where necessary. They were also asked to provide feedback on the tool regarding its length and usefulness, and the emotions experienced while completing it. During the study, experts were instructed to provide comments about the tool and to propose amendments ("Would you like to change anything in any item?") (Boynton & Greenhalgh, 2004; Dickie et al., 2018; Patrick et al., 2011; Sousa & Rojjanasrirat, 2011; Wild et al., 2005). Most of the questions were found to be clear and easy to understand. The structure of the questions was assessed positively. The definition of spirituality included in the scale was also evaluated positively and approved with no reservations. The inclusion of the PTODM's definition of spirituality in the instructions was vital, as in Polish culture the term 'spirituality' is frequently perceived as synonymous with religiousness. Without this definition, the study could render inaccurate responses. The experts found four statements incomprehensible and proposed changes to make them clearer (Table 1). The research team discussed differences in opinion and agreed on the most appropriate versions of the items and wording.
As a result, the second draft of the scale was constructed with 51 items and 3 subscales, similar to the first version but with corrected wording and the same Likert scale. Table 1 documents the wording changes; the statements involved include: "In contact with another person, I assume that faith is not an essential element of support for this person" (revised to "In contact with a patient/person in need of support, I assume that faith is not an important element of this support"); "I do not personally engage in a relationship with another person"; "Faith is not a crucial factor in providing effective support to another person"; "In a relationship with the patient/person in need of support, not only the body or somatic symptoms but also the actual spiritual suffering and dilemmas are important"; "When someone complains to me about problems with forgiving, I can recognise it" (revised to "I can tell when someone in a conversation with me is complaining about problems with forgiving, I try to help them with it"); and "When someone says they find it hard to forgive, I can see it" (revised to "I can tell when someone is suffering on the spiritual level, for example, because they find it hard to forgive").

Phase 3: Study 1

The first study of the SpSup Scale was undertaken to establish scale reliability, internal consistency, and the discriminatory power of items in a relatively homogeneous population. From 2017 to 2018, participants were recruited from the medical faculties at 2 Polish universities. The sample contained 204 medical students, of whom 127 were female and 67 male (no data for 10 participants). The median age was 22.99 years (range: 19-30) (Table 2).

Psychometric Evaluation of Study I

The Internal Consistency of Items and the Initial Reliability of the Scale

In the first step, the items' discriminatory power was verified to exclude those with a weak correlation with the overall scale score. The results of these calculations are presented in Table 3. The initial reliability of the tool, based on Cronbach's alpha, was 0.929 (95% confidence interval: 0.914-0.942). Following this analysis, the statements with values below 0.20 were removed. These questions were excluded from further analyses. Finally, Cronbach's alpha was recalculated for all remaining items. The resulting values were satisfactory, with the tool reliability at this stage assessed at 0.940 (95% confidence interval: 0.927-0.951), indicating a very high and satisfactory outcome for our scale.

Exploratory Factor Analysis

In the next step, exploratory factor analysis was performed to determine the factor structure of the tool (Table 4). The optimal number of factors was established through parallel analysis (Green et al., 2012; Horn, 1965), extracting the number of factors for which eigenvalues were at least in the 95th percentile of the expected eigenvalue (Green et al., 2012). This method was selected because it is believed to produce the best results of all methods based on eigenvalues (Schmitt, 2011; Zwick & Velicer, 1986). In addition, factor analysis was further justified by the results of Bartlett's test of sphericity, with correlations between items significantly different from zero (χ²[465] = 2964.00; p < 0.001). The Kaiser-Meyer-Olkin test confirmed the adequate sample size for factor analysis (KMO = 0.865). The analysis showed five factors that explained 48% of the variance in all items.
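The Cronbach's alpha values quoted above are simple to reproduce from raw item scores; a minimal sketch is given below. The response matrix is simulated (a single common factor plus noise), since the actual SpSup data are not reproduced here, so the printed value is purely illustrative.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative only: Likert-style responses (1-4) for 204 respondents and
# 51 items, generated from one common factor plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(204, 1))
items = np.clip(np.round(2.5 + latent + rng.normal(scale=0.8, size=(204, 51))), 1, 4)
print(f"alpha = {cronbach_alpha(items):.3f}")
```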
Some items did not load on any of the corresponding factors or presented high factor loadings on more than one latent variable. The final factor solution is shown in Table 5. Given the expected (and existing) correlations between the factors, rotated factor loadings are presented (oblimin rotation).

Scale Reliability

In order to check the reliability of the scale, Cronbach's alpha (with a 95% confidence interval) and McDonald's omega were calculated (AERA APA, 2014). The results are presented in Table 6. The results for the scale points and the 95% confidence interval indicated a high level of internal consistency for the scale overall and for the individual subscales. Scales 4 and 5 featured a slightly lower, albeit still acceptable, level of reliability. The discriminatory power of the items was re-estimated with regard to the overall score and the individual subscales (Table 7). All indicators exceeded the value of 0.20 and can therefore be considered satisfactory. The final outcome of the first standardisation performed as Study I was a questionnaire consisting of 31 questions organised into five subscales:

1. Attitude to prayer (5 items).
2. Beliefs regarding spirituality (10 items).
3. Spirituality in relation to one's own suffering and the suffering of others (9 items).
4. Sensitivity to the suffering of others (3 items).
5. Attitude to community (4 items).

Phase 4: Study II

The second study of the SpSup Scale was undertaken to establish the psychometric properties of the scale (e.g. scale reliability, internal consistency, discriminatory power of items, exploratory factor analysis) and was performed on a larger and more diverse population of respondents. In addition, a comparison of psychometric factors between different groups of participants was performed. At the end of scale standardisation, the final norms were defined, leading to the final version of the SpSup Scale.

Characteristics of the Sample

The sample, collected from 2018 to 2020, contained 527 participants who were working or preparing to work as professional caregivers (medical students, students of other healthcare faculties, students of non-healthcare faculties and teachers), of whom 416 (79%) were female and 96 (18.22%) male (no data: n = 15, 2.85%). The median age was 25.76 years, with an age range of 19-70 years. Four comparative groups were distinguished based on occupational affiliations. As a result, the following groups were studied: teachers (n = 85; 16.13%), medical students (n = 189; 35.86%), students of other healthcare faculties (n = 109; 20.68%), and students of non-healthcare faculties (n = 144; 27.32%). In the teacher group, most of the respondents were female (n = 54; 63.53%). The average age in this subgroup was 46.55 years (range 24-70 years), with average professional experience of 22.92 years (SD = 7.87; range 4-45 years). In the group of medical students, the mean age of the respondents was 24.28 years (range 22-28), with a majority of women (n = 125; 66.14%). At the time of the study, all students in this group were in the fifth year of study. In the group of students of other healthcare faculties, most students were in their third year of bachelor level study (n = 78; 71.56%), while the remainder were second-year students of master level study. The group was dominated by women (n = 104; 95.41%). The average age in this subgroup was 22.34 years (range 21-29 years).
The group of non-healthcare students was dominated by first-year and second-year students of bachelor level study (n = 90; 62.50% and n = 12; 8.33%, respectively), while the remainder were first-year students of master level study. The average age in this subgroup was 20.77 years (range 19-25 years). More information about the demographic characteristics in Study II is presented in Table 8.

Psychometric Evaluation of Study II

Factor Structure

The theoretical structure developed in Study 1 was tested using confirmatory factor analysis (CFA). It allowed us to verify the adequacy of the five-factor model. Given the ordinal measurement level of the scale and the significant skewness and kurtosis of some items (skewness above ±2.0 was found in Item 1; kurtosis above ±2.0 was found in Items 1, 2, 4 and 25), the diagonally weighted least squares (DWLS) method was used for the model estimation. The results suggested that the identified latent factors represented a significant part of the 'shared' variance in many cases. In view of this, it was necessary to verify whether the variance was sufficiently significant to provide the basis for isolating a second-order factor to explain the covariance of the first-order factors. To this end, the CFA was performed again to test the fit of the hierarchical model with a second-order factor. In both cases, factor loadings for individual items were generally satisfactory and statistically significant. Item 11 was an exception, as its fully standardised factor loading was 0.08 and 0.09 for Models 1 and 2, respectively. Nevertheless, it was statistically significant. The exact values of the fully standardised factor loadings for both models are presented in Table 10. Given the results, the theoretical validity can be assumed to have been confirmed in terms of factor stability. Furthermore, the acceptable fit of the hierarchical model with the second-order factor also indicates that, next to the five specific dimensions of spirituality, a primary dimension can be distinguished, namely the overall spiritual awareness.

Scale Reliability and Discriminatory Power of Items

The mean scores, standard deviations, and other descriptive statistics for the dimensions of spirituality and the overall test score are presented in Table 11. This table also shows the reliability levels for individual measurements. Reliability was assessed using Cronbach's alpha and McDonald's omega (AERA APA, 2014). The latter measure was calculated due to the lack of strict unidimensionality in the analysed test. Nearly all subscales demonstrated a satisfactory level of reliability, with the highest observed for Attitude to prayer. Reliability was also satisfactory for the overall spirituality level. Only the Spirituality in relation to one's own suffering and the suffering of others subscale was characterised by a borderline reliability level (above 0.60), which was attributed to a lower mean correlation between items (0.19). Nevertheless, all subscales should be considered potentially useful.

Differences in Spirituality Between Groups According to Sex, Profession and Age

The dimensions of spirituality were tested for possible sex-specific differences. Given the considerable differences in the size of the two groups and the lack of normal distributions for the tested variables, the groups were compared using the Mann-Whitney U test. The analysis showed no statistically significant sex-specific differences for Attitude to prayer; Beliefs regarding spirituality; and Sensitivity to suffering.
Minor differences (small effect size) between females and males were observed for Spirituality in relation to one's own suffering; Suffering of others; and Attitude to community. Women demonstrated higher scores on these scales. They also presented a higher level of overall spirituality, again with a small effect size, compared with men. The spirituality dimensions were also tested for possible differences among the four identified professional groups. To this end, a one-way analysis of variance (ANOVA) was used, with ω² values to measure the effect size. The analysis showed no statistically significant differences for Attitude to prayer; Spirituality in relation to one's own suffering; and Suffering of others. However, minor differences (small effect size) among the groups were observed for Attitude to community; Sensitivity to the suffering of others; and overall spirituality level. Tukey's test was used for pairwise post hoc testing. It revealed statistically significant differences for Attitude to community only in the comparison of students of other healthcare faculties with students of non-healthcare faculties: t = 3.22; d = 0.43; p = 0.007. Cohen's d showed a medium effect size for the differences. The scores for Attitude to community were higher for students of other healthcare faculties compared with students of non-healthcare faculties (M = 9.09; SD = 2.12; and M = 8.15; SD = 2.27, respectively). No differences were observed between other groups for this dimension of spirituality. The pairwise comparisons revealed no statistically significant differences for Sensitivity to the suffering of others. However, a comparison between future physicians (medical students) and students of non-healthcare faculties showed a trend towards statistical significance: t = 2.40; d = 0.25; p = 0.078. A similar trend was observed when comparing future physicians with teachers: t = −2.39; d = −0.31; p = 0.081. The effect size for both differences was medium. The scores for Sensitivity to the suffering of others were higher for medical students compared with students of non-healthcare faculties and teachers (M = 6.01; SD = 1.62 for future physicians; M = 5.59; SD = 1.72 for students of non-healthcare faculties; and M = 5.52; SD = 1.52 for teachers). No differences were observed between other groups for this dimension of spirituality. Regarding the overall level of spirituality, statistically significant differences were observed only when comparing students of other healthcare faculties with students of non-healthcare faculties: t = 3.61; d = 0.48; p = 0.002. The effect size for the differences was moderate. The former group had higher scores for the overall level of spirituality compared with the latter group (M = 66.92; SD = 9.62; and M = 61.88; SD = 11.27, respectively). No differences were observed between other groups in this dimension of spirituality. We investigated whether the respective dimensions of spirituality were related to respondents' age and seniority (in the latter case, correlations were calculated only for the group of teachers, in which this variable was measured). Given the significant sample size, the Pearson correlation coefficient was used. The analysis showed no statistically significant relationships between spirituality and its dimensions and the demographic variables of age and seniority. In terms of the individual factors, the only (very weak) correlation was found between Attitude to prayer and seniority (r = 0.09; p = 0.047).
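To illustrate how such group comparisons can be computed, the following is a minimal sketch using scipy; the file name, column names and group labels are hypothetical placeholders, not the authors' actual data layout.

```python
# Minimal sketch of the group comparisons described above (hypothetical data layout).
import pandas as pd
from scipy.stats import mannwhitneyu, f_oneway, pearsonr

df = pd.read_csv("spsup_scores.csv")  # hypothetical file with one row per respondent

# Sex differences, non-parametric because the score distributions are not normal.
female = df.loc[df["sex"] == "F", "attitude_to_community"]
male = df.loc[df["sex"] == "M", "attitude_to_community"]
u_stat, p_sex = mannwhitneyu(female, male, alternative="two-sided")

# Differences among the four occupational groups (one-way ANOVA).
groups = [g["overall_spirituality"].values for _, g in df.groupby("group")]
f_stat, p_anova = f_oneway(*groups)

# Correlation of a subscale with seniority, restricted to teachers.
teachers = df[df["group"] == "teachers"].dropna(subset=["seniority"])
r, p_corr = pearsonr(teachers["attitude_to_prayer"], teachers["seniority"])

print(u_stat, p_sex, f_stat, p_anova, r, p_corr)
```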
Diagnostic Accuracy To estimate the diagnostic accuracy of a test, one needs to compare the results obtained in a tested group of respondents with an external criterion that allows the same variable to be assessed as the one measured by that test. In our case, the external criterion was defined as the respondents' behaviour, for instance, their opinion regarding spiritual support and care in a specific situation (task). To this end, 43 subjects were asked to complete the scale, and the results were calculated using Yule's formula. The result was 0.79, with the estimated significance level χ² = 27.51. The critical region for the chi-square statistic was [3.841, ∞), and the obtained χ² value falls within this range. The result can therefore be assumed to be statistically significant at p < 0.05. Consequently, the relationship between the respondents' perception of their ability to perform a given task and the SpSup Scale score was found to be true, with a type I error probability of 0.001 (one in 1000) or less. Sten Scores and the Key for the Scale Calculations The final step in the development of the proposed questionnaire was to establish a standardised scale for the calculation of the scale scores. Given that no significant differences were found among groups in terms of the dimensions of spirituality, common standards were adopted for all respondents using Sten scores. Scores within Sten 1-2 were defined as very low, 3-4 as low, 5-6 as medium, 7-8 as high, and 9-10 as very high. Discussion Training courses for people in caregiving professions, such as physicians, nurses, midwives, psychologists, pedagogues, teachers, chaplains and other helpers, focus on improvement in skills, an outcome which requires evaluation (Cortés-Rodríguez et al., 2022; Moore et al., 2018; Puchalski et al., 2021). As our university was the first in Poland to introduce spirituality into the medical curriculum, we wanted to develop a scale to assess the outcomes of this programme. A literature review showed that, despite the availability of several spiritual care tools, none of them captured the variables of interest to us (Deluga et al., 2020; Dobrowolska et al., 2016; Heszen-Niejodek & Gruszczyńska, 2004; Jarosz, 2011; Piotrowski et al., 2013). Furthermore, we wanted a scale that was relevant beyond healthcare. After methodological consultations, the target group of the SpSup Scale was extended, and the scale can now be used to test any adult working in or preparing for a caregiving profession. The results of the validation and standardisation of our tool and the obtained psychometric values are highly satisfactory. It is worth highlighting the overall high reliability of the scale (Cronbach's α = 0.88) and subscales (1 = 0.65; 2 = 0.85; 3 = 0.84; 4 = 0.73; 5 = 0.73), a satisfactory diagnostic accuracy (0.79, with the estimated significance level χ² = 27.51), and a satisfactory discrimination index. Construct validation of the SpSup Scale is currently underway through correlation with similar scales, as recommended by Koenig and Al Zaben (2021, pp. 3475-3476).
As such, the SpSup Scale is recommended for the assessment of spiritual care, in both clinical and research settings, with regard to the following components: (1) Respondents' opinions on spirituality and their understanding of their own spirituality; (2) Attitude to spirituality in a relationship with a person in need of care and support; (3) The level of skills necessary to diagnose spiritual suffering in supported persons; and (4) Respondents' readiness to provide spiritual support to those who suffer. Study Limitations This study has some limitations. The questionnaire's format as it stands may be too long for everyday use. Future studies should investigate whether a shorter version of the scale could be created. In addition, future research should replicate the present study using large cohorts to establish correlations between the SpSup Scale and factors such as personality or emotional intelligence. We also believe that cross-comparing findings among multiple professional domains would reveal insightful and useful findings. Conclusions Supporting others requires many competencies and skills from professionals. In addition to knowledge, experience and technical skills directly related to the specific profession, people looking for support increasingly expect interpersonal competencies in their caregivers, including those related to spiritual support. Regardless of their belief system, a suffering person wants to be treated not only by a specialist qualified in a specific field, but also by a fellow human being capable of showing concern, recognising emotions, talking, and offering help. Many universities are implementing programmes for the development of interpersonal attitudes and other qualifications necessary for specialists to provide multilevel support suited to clients'/patients' needs. To evaluate the effect of such training courses, it is necessary to have appropriate measures. The Polish version of the SpSup Scale has been constructed as an instrument for measuring spiritual competencies among professionals. Considering the good psychometric properties of the tool, its use is recommended for the assessment of spiritual care and support, along with their components, in both clinical and research settings. INSTRUCTIONS: This questionnaire examines beliefs about spiritual care provided to a suffering person who needs spiritual support. The following statements were prepared based on the definition of spirituality presented below. Please read the definition and all statements carefully and try to respond to each statement using the following formula: 1. I strongly disagree with this statement 2. I disagree with this statement 3. I agree with this statement 4. I strongly agree with this statement Definition of spirituality proposed by Polskie Towarzystwo Opieki Duchowej w Medycynie/Polish Association for Spiritual Care in Medicine (PTODM): Spirituality is a dimension of human life that relates to transcendence and other existentially important values. Dimensions of spirituality: 1) Religiousness of a person, especially their relationship with God, personal beliefs, and religious practices, as well as community interaction; 2) Existential quest, especially regarding: -the meaning of life, suffering, and death, issues of one's own dignity, who one actually is as a person; -a sense of individual freedom and responsibility, hope and despair, reconciliation and forgiveness, love and joy.
3) Values by which a person lives, especially with regards to oneself and others, work, nature, art and culture, ethical and moral choices, and life at large. Author Contributions MF-K and MK contributed to the study conception and design. Material preparation, data collection and analysis were performed by MF-K. The first draft of the manuscript was written by MF-K. Megan Best was involved in critical analysis and interpretation of the study results. All authors read and approved the final manuscript. Funding The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
2022-07-26T13:35:48.018Z
2022-07-26T00:00:00.000
{ "year": 2022, "sha1": "c2aa5b7bb219020bb687d99f599df733c6f40e81", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10943-022-01608-3.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "c2aa5b7bb219020bb687d99f599df733c6f40e81", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
225338672
pes2o/s2orc
v3-fos-license
Pattern Recognition in Multivariate Time Series: Towards an Automated Event Detection Method for Smart Manufacturing Systems: This paper presents a framework to utilize multivariate time series data to automatically identify reoccurring events, e Introduction The manufacturing industry is currently in the midst of the fourth industrial revolution. This digital transformation towards smart manufacturing systems (SMS) is based on the three pillars connectivity, virtualization, and data utilization [1]. This circumstance is fueled by the rapid development in both information technology (IT) and operational technology (OT), which has led to an increasingly connected and automated world: in essence, a merging of both worlds (IT/OT). Technologies like sensor technology, cloud computing, and AI/big data analytics lead not only to a dramatic increase in the amount of (manufacturing) data, but also to rapidly developing data processing capabilities, which have raised interest in data mining approaches for automating repetitive processes [2]. Time series data, thereby, are one of the most common information representation variants in a variety of different business areas. Advanced process monitoring, on which we rely regularly in SMS, typically yields multidimensional data to increase its effectiveness. A specific branch in time series analysis deals with the recognition of reoccurring patterns within the data (see Figure 1). Time series data analysis generally utilizes established pattern recognition methods in order to identify the steady state, anomalies and characteristic failure patterns. However, the identification of such distinct patterns in multivariate time series represents a highly complex problem due to the interactions (correlation) between variables, time dependency and the usually nonlinear nature of the regarded data [3,4]. Furthermore, real-world data are usually noisy and, thus, require pre-processing to become a valuable input for analytical algorithms. Pre-processing enhances the ability of the overall approach to be flexible enough to detect various disturbances, such as missing values, noise or outliers, within the data set, but also sufficiently restrictive so that not all insignificant fluctuations (e.g., measurement errors) are labeled as irregularities. Standard distance-based similarity metrics, which are often used in related unsupervised learning approaches, can therefore not be applied without adaptation for different types of problems [4,5]. There is a growing interest in the field of pattern recognition, especially for multivariate time series data due to, on the one hand, the increasing availability of potent algorithms and easy-to-use tools, and on the other hand, the realization of the potentially valuable impact of insights derived from such data sets. Most of the recent work on clustering, however, focuses on grouping different time series into similar batches [6,7].
The paper by [8], for instance, presents a popular approach using fuzzy clustering combined with a dynamic time warping technique, resulting in an enhanced performance when compared to previous methods. There are only a few authors focusing on pattern recognition within a single time series, such as those we are regularly confronted with in maintenance applications [4]. This paper presents a framework to utilize multivariate data to identify reoccurring patterns in real-world manufacturing data. The objective is to identify failure patterns for new applications in the area of maintenance. We analyze the drying process of plastic granulate in an industrial drying hopper equipped with multiple sensors. The number of different failure patterns (sources for defects) is not known beforehand and the sensor readings are subject to natural fluctuations (noise). The overall work presented in this manuscript includes a comparison of two different approaches towards the identification of unique patterns in the data set. One processing path includes the sequential use of common segmentation and clustering algorithms to identify patterns, which might lead to a better respective performance (in both steps of the chain) and, therefore, a better overall performance. The second approach features a collaborative method with a built-in time dependency structure, thereby avoiding a multiplication of losses due to a sequential processing chain. The better performing method is fine-tuned afterwards in terms of its hyperparameters. The resulting patterns (clusters) that are identified by the framework can then serve as input for advanced monitoring methods predicting upcoming failures and ultimately reducing unplanned machine downtime in the future. The outline of this paper features a short introduction to the topic of plastic granulate drying and time series clustering in Section 2, followed by the proposed framework in Section 3. Finally, the results are presented in Section 4 and consequently discussed in Section 5.
State of the Art Industry 4.0, machine learning, and artificial intelligence (AI) are popular topics in the current scientific discourse and, therefore, the current state of the art is subject to frequent changes or newly arising theories and topics [1]. Thus, we chose to apply the Grounded Theory Approach, originally developed by Glaser and Strauss in 1967 [9], to gather and subsequently analyze the existing knowledge on the topic of pattern recognition on time series data in the context of Smart Manufacturing Systems (SMS). The approach encourages inductive reasoning for the formulation of the experimental setup and, hence, fosters the development of a practical state-of-the-art framework [10]. The corresponding literature was selected from the results of a search term optimization identifying comparable research approaches and problems. We used established academic research databases, in particular Scopus, Web of Science, IEEE Xplore, and ScienceDirect, to execute the keyword-based literature search process. The most relevant papers were selected according to their degree of relation to the subject of pattern recognition in time series for SMS. In order to establish an automated event detection method for SMS through an unsupervised machine learning framework, we need to take a closer look at the data and its underlying manufacturing process. Domain knowledge is key to developing successful machine learning applications in a manufacturing context. Data mining process phases, such as feature extraction and the selection of suitable algorithms and machine learning models, benefit from process understanding and the appropriate application of these insights [4]. The following chapter introduces the general drying process of plastic granulates and presents the respective data usable for the analysis. Manufacturing Process Drying hoppers are industrial machines which are mostly used to dry and remove moisture from non-hygroscopic and slightly hygroscopic plastic pellets. They are especially effective for the process of drying the surface area of non-hygroscopic resin pellets prior to further fabrication [11], to avoid quality issues further down the process chain such as voids caused by gasification of moisture in the final parts. The drying process in general is a vital step in the chain of manufacturing for diverse types of polymers, since the excess moisture will otherwise react during the melting process.
This might result in a loss of molecular weight and subsequently in changes to the physical properties, such as reduced tensile and impact strength of the material [12,13]. Most modern drying hoppers are structured similarly in order to ensure an even temperature distribution while also maintaining an even mass material flow. They feature a vertically aligned cylindrical body with a conical outlet at the bottom, as depicted in Figure 2. The spreader tubes inside the hopper then inject hot and dry air into the chamber, while the granulate is slowly moving from the top to the bottom valve [11,13]. After the hot air has passed the material, it is cleaned, dried, (re-)heated (if necessary) and then reinjected into the drying chamber. For a drying process to be successful in achieving the targeted humidity, there are three major factors that need to be considered: drying time, drying temperature, and the dryness of the circulating air [13]. The most intuitive factor might be the drying time, since the circulating air can only take a certain amount of humidity from the material until it is saturated. The specific amount of time that the material needs to stay in the machine in order to achieve a defined humidity can be calculated as a function of air temperature, input dryness of the material and target humidity. In general, the drying process has a concave form insofar as the percentage humidity reduction is steep at first and flattens during later stages [12,14]. Both air dryness and temperature also play decisive roles for the drying process and are usually coordinated in dependency of one another. By drying the air, the relative humidity is reduced and consequently the moisture holding capacity increases, whereas a higher temperature speeds up the drying process, since it facilitates the diffusion of water from the interior to the surface of the material [11,14]. However, there might be adverse effects of too high temperatures on material quality, depending on the plastic itself, that need to be considered. Due to the critical implications of a malfunctioning drying hopper system, the corresponding process is usually monitored round the clock. In our use case, a six-zone temperature probe is applied for the monitoring process; it continuously measures the temperature at different locations (heights) of the vertically aligned drying chamber (see Figure 1). Doing so facilitates the indirect detection of a variety of different reoccurring disruptions in the drying process, such as over- or under-dried material, heater malfunctions, and many more, since most of those incidents are directly connected to, or can at least be derived indirectly from, the temperature of the circulating air [13]. This is also the reason why these six temperature zones play a major role in determining the performances of the different pattern recognition approaches presented in this paper. The data set used for this analysis is based on two separate data sources: two individually controlled, yet identical, drying hopper machines located in the same manufacturing plant. Each drying hopper was monitored over a specific time, 6 months for machine one (M1) and 4 months for machine two (M2), whereby an automatic reading of the sensor output was conducted every 60 s over the entire monitoring period. In addition to the different temperature zones, the data set contains a variety of other operational, process-related sensor data.
After the initial preprocessing of the data set, which involves the removal of missing, damaged, and severely fragmented data objects, the resulting data set consists of 31 (M1) and 37 (M2) different features and 263,479 (M1) and 176,703 (M2) instances. The signals from the sensors are partly continuous, such as the real-time temperature measurements of the circulating air or panel, and partly binary, carrying information on the status of different components such as heater or blower (on/off). Figure 3 illustrates the different events involved by displaying the steady state of the machine (left) in comparison to a typical disruption (right). Analytical Approaches There is a plethora of scientific work available focused on the topic of time series analysis, in particular within the last decades. Two major themes among the published research that are strongly related to the practical application of those algorithms are (i) fault (anomaly) detection within time series and (ii) clustering of different time series [4].
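As a rough illustration of this kind of cleaning step, the following sketch uses pandas; the file name, column names and the concrete cleaning thresholds are assumptions for illustration only and are not specified in the paper.

```python
import pandas as pd

# Hypothetical export of the one-minute sensor readings for machine M1.
df = pd.read_csv("drying_hopper_M1.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Enforce the 60 s sampling grid; gaps in the recording become explicit missing values.
df = df.resample("60s").first()

# Drop rows in which all sensors are missing and columns that are almost entirely empty
# (the 5% threshold is an assumption, not a value from the paper).
df = df.dropna(how="all")
df = df.loc[:, df.isna().mean() < 0.05]

# Drop severely fragmented stretches: any remaining row with a missing value.
df = df.dropna(how="any")

# Separate continuous temperature zones from binary status signals (hypothetical names).
temperature_cols = [c for c in df.columns if c.startswith("temp_zone_")]
status_cols = [c for c in df.columns if c.startswith("status_")]
print(df.shape, len(temperature_cols), len(status_cols))
```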
Since both of these areas are comparatively well developed, as evidenced by the large amount of available scientific contributions, some pattern recognition approaches employed this to construct a pipeline-like processing framework to cluster time series data. The works [15][16][17] are comprehensive reviews and applications focused on this topical area and used segmentation and clustering methods subsequently. The majority of these papers apply correlation analysis, wavelet transformation, and principal component analysis (PCA) based metrics to identify homogeneous segments and to assign the data points to each cluster. Classical distance-based metrics (e.g., Euclidean distance) are still used in various scientific works today, most often to provide a benchmark for the evaluation process [4,5]. A more recent work by [8] presented an approach using fuzzy clustering combined with a dynamic time warping technique to identify the different clusters after the segmentation has been completed. Another reoccurring theme in the field of pattern recognition in time series is the domain in which those approaches are mostly applied. The majority of datasets that have been used to evaluate theoretical frameworks stem from the medical domain (e.g., ECG and biological signals), the financial sector (e.g., exchange rates and stock trade), and speech recognition applications [3][4][5]. Significantly fewer works focus on manufacturing applications and the analysis of industrial machines (e.g., gas turbine and water level monitoring), especially in a maintenance context [4]. Most recent contributions to the maintenance discourse focus on the prediction and classification of previously defined failure patterns, thereby relying solely on the preceding groundwork of experts. Hence, the use of time series analysis in SMS itself provides a multitude of opportunities for further research. Additionally, the effectiveness of cutting-edge event detection methods is examined for industrial applications, which later can be used to assist or even replace expert-centered error recognition systems. Most importantly, the adaptation and usage of advanced pattern recognition algorithms for SMS serves as a stepping stone into further studies combining domain expertise and novel machine learning algorithms for solving advanced industrial problems, therefore moving towards a more automated data processing chain overall. To achieve this goal, this paper aims to combine the best of both worlds: the latest findings from the general time series analysis literature as well as complex, real-world industrial data to put those cutting-edge approaches to the test. In doing so, we also use feature fusion [4] and sequential pattern recognition methods [18], which have been successfully applied previously within an industrial context. However, an important distinction from comparable works in this field is that we start without any prior expert knowledge on the characteristics (e.g., type, frequency, and shape) of the failure patterns. Furthermore, we also chose a quantitative method to compare competing time series analysis approaches by implementing various methods in combination with different cost functions, while comparable papers often select promising configurations beforehand without further examination.
This rigorous (in-depth) proceeding, however, results in a trade-off so that only a limited number of different algorithms (or rather pathways) could be implemented in the scope of this scientific work, which is admittedly smaller than in comparable studies such as [19,20]. Research Methodology An overview of the novel framework proposed in this paper is illustrated in Figure 4. It is important to note that it includes two separate approaches to pattern recognition that were conducted separately and then compared to identify the superior process given the unique characteristics of the data set. This is intentionally included in the framework, as data sets differ significantly in their nature, and providing options that allow the user to compare the performance of both approaches within the framework is a promising way of leveraging these differences. Principal Component Analysis (PCA) The first step, after generally checking and cleaning the data (pre-processing), involves a PCA to identify irrelevant features in order to reduce the dimensionality (feature reduction) and therefore the overall complexity of the data set. In doing so, the implemented algorithm successively constructs "artificial variables" (principal components) as linear combinations of the original variables, so that those new vectors are orthogonally aligned to each other. This new representation of the data with linearly independent vectors can subsequently be used, instead of the individual original variables, as input data for the affiliated analysis processes (segmentation and clustering) [4]. After the data has been cleaned and rearranged, we can start with the first step (of the first pathway) of segmenting the time series.
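A minimal sketch of such a PCA-based feature reduction with scikit-learn is shown below; the input file and the 95% explained-variance threshold are assumptions, not values reported by the authors.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("drying_hopper_M1_clean.csv", index_col=0)  # hypothetical cleaned export

# Standardise the sensor channels so that no single sensor dominates the components.
X = StandardScaler().fit_transform(df.values)

# Keep as many orthogonal components as needed to explain 95% of the variance
# (the threshold is an assumption, not a value from the paper).
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape, pca.explained_variance_ratio_.round(3))
```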
Heuristic Segmentation Following the first branch of the pattern recognition pathway, we consider the multivariate signal X = {x_t}_{t=1}^{T}, with x_t ∈ R^m being an m-dimensional (number of sensors) observation. T stands for the total number of observations and t is the index of the measurements in time. Consequently, the multivariate time series can also be expressed as an m × T matrix. In order to identify sections of the time series with similar trajectories (patterns), we first need to partition the signal into internally homogeneous segments. Ideally, the time series will be automatically split into segments representing (i) regular machine behavior (steady state) on the one side and (ii) segments with a variety of events (including failure events) on the other side as the final output. An event or segment in this case can therefore be formulated as X_i = {x_t}_{t=a+1}^{b}, with X_i being the i-th overall segment of the partition and (b − a), with 1 ≤ a < b ≤ T, denoting the length and location of the interval. For a less formal notation, it can also be denoted as X_{a..b}. The points t at which the state of the signal changes are usually referred to as change points or break points and are given as B = {b_1, b_2, ..., b_n}, with b_1 being the chronologically first break point in the time series. The set of true break points, which was acquired by manually labeling the time series, is denoted as B* with its corresponding elements {b*_1, b*_2, ..., b*_n}, while B̂ = {b̂_1, b̂_2, ..., b̂_n} is used to record the approximated break points returned by the algorithm. To solve this problem, we will apply the heuristic change point detection method implemented in a package called ruptures [5], which can be described as a combination of three defining elements: cost function, search method, and (penalty) constraint (see Figure 5).
Formally, the change point detection (segmentation) can be viewed as a model selection problem, which tries to choose the best possible partition of the time series given a quantitative criterion. This criterion is most often expressed in the form of cost functions and is therefore denoted as C(B), which needs to be minimized during this process of optimization [5]. In fact, the criterion function consists of the sum of the costs of all segments that define the segmentation: C(B) = Σ_{k=0}^{n} c(X_{b_k..b_{k+1}}) (1). For this formulation we also need to define the dummy indices b_0 as the first index and b_{n+1} as the end of the time series. Furthermore, we need to introduce a penalty function for our overall minimization problem, since the algorithm would otherwise always converge to an over-segmentation, where all points are ultimately represented in their own individual segment, thereby minimizing the overall sum of costs [5]. The final optimization problem for the segmentation is therefore given by: min_B C(B) + pen(B) (2). The penalty function (constraint) can also be interpreted as a complexity measure for the segmentation and needs to be adjusted for each heuristic, respectively. In this case we choose a linear penalty function given by pen_{l0}(B) := β|B| with β > 0. Choosing a linear penalty function thereby aligns with successful implementations from recent literature [5,21-23] and with well-known model selection criteria such as Akaike's information criterion and the Bayesian information criterion. After our problem has been defined, we need to choose a search method to calculate the break points of the time series. There are a variety of different solving algorithms available, of which we will explore three approaches (also featured in ruptures) in more depth in the following sections. Sliding Window Segmentation The core idea of the sliding window algorithm is to approach the partitioning (segmentation) problem by moving a fixed-size window over the time series. The window basically represents the area of vision, or rather the area of focus, of the algorithm for a given iteration [6]. Thereby, the approach implemented by ruptures does not directly divide the time series in each iteration solely based on the information given in the window, but rather calculates a so-called discrepancy measure for each point in the signal. To do so, the window is split into two parts ("left" and "right") along its center point [5]. The discrepancy measure is defined as follows: given a cost function c(·) and an interval [a, b], the discrepancy d(a, t, b) = c(X_{a..b}) − c(X_{a..t}) − c(X_{t..b}) computes the difference in cost between treating the sub-signal X_{a..b} as one segment and splitting the interval into two separate segments (X_{a..t} and X_{t..b}) at point t. Consequently, the discrepancy function reaches high values when the two sub-windows contain dissimilar segments according to the cost function.
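A minimal sketch of this sliding-window search with the ruptures package might look as follows; the window width and penalty weight are placeholder values that would need to be tuned for the drying hopper data.

```python
import numpy as np
import ruptures as rpt

# X_reduced: (T, d) array of PCA-reduced sensor readings (see the sketch above).
X_reduced = np.load("X_reduced.npy")  # hypothetical intermediate file

# Sliding-window change point detection: l2 cost, fixed window width,
# and a linear penalty pen = beta * |B| applied at prediction time.
algo = rpt.Window(width=120, model="l2").fit(X_reduced)
beta = 10.0  # placeholder penalty weight
breakpoints = algo.predict(pen=beta)  # segment end indices; the last entry equals T
print(breakpoints)
```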
After the values of the discrepancy function have been computed for all points of the time series, the algorithm analyses the discrepancy curve to identify the maximum values (peak search). The peaks of the function correspond to the instances in which the window held two highly dissimilar segments in its two sub-windows and therefore give us the break points of the signal [5]. Figure 6 provides a schematic overview of the algorithm to clarify its functionality and objective. Top-Down Segmentation The top-down method is a greedy-like approach to partitioning the time series iteratively. To do so, it first considers the whole time series and calculates the total cost for each possible binary segmentation of the signal. After the most cost-efficient partitioning has been found, the time series is split at the identified change point and the procedure repeats for each sub-signal, respectively [5,6]. The partitioning decision at each stage of the iterative algorithm amounts to selecting the split point that minimizes the summed cost of the two resulting sub-segments. This represents the decision mechanism for the first iteration of the algorithm; however, it can be applied to each sub-signal after the initial time series has been divided. The stopping criterion of the approach depends entirely on the previously defined penalty function of the overall problem (2). Thereby, the cost saving of each partitioning is compared to the penalty of introducing another break point to the existing set of change points. The algorithm terminates when there is no partitioning within the iteration that has a higher cost saving compared to the penalty value [5]. The schematic depiction in Figure 7 provides an overview of the whole process.
Bottom-Up Segmentation The bottom-up segmentation can best be described as the counterpart to the previously discussed top-down approach. Hereby, the start of the algorithm is characterized by the creation of a fine, evenly distributed partitioning (all segments have the same length) of the entire time series. Subsequently, the algorithm merges the most cost-efficient pair of adjacent segments in a greedy-like fashion [5,6]. To clarify the overall process, a schematic overview is depicted in Figure 8.
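For comparison, the top-down and bottom-up search methods are also available in ruptures (as Binseg and BottomUp). The sketch below uses the same l2 cost and linear penalty as above with placeholder values; it does not include the discrepancy-based refinement described in the next paragraph.

```python
import numpy as np
import ruptures as rpt

X_reduced = np.load("X_reduced.npy")  # hypothetical PCA-reduced signal, shape (T, d)
beta = 10.0  # placeholder linear penalty weight

# Top-down (binary) segmentation and bottom-up segmentation with the same l2 cost.
bkps_topdown = rpt.Binseg(model="l2").fit(X_reduced).predict(pen=beta)
bkps_bottomup = rpt.BottomUp(model="l2").fit(X_reduced).predict(pen=beta)
print(len(bkps_topdown), len(bkps_bottomup))
```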
In contrast to other bottom-up methods covered in the current literature on segmentation [6], the approach we used in this case also harnesses the discrepancy measure defined above (see sliding window) in order to determine the most cost-efficient pairs to merge. During each iteration, all temporary change points are ranked according to the discrepancy measure, which takes into account all of the points in both adjacent segments. Consequently, the break point with the lowest discrepancy measure is deleted from the set of potential break points. This process continues until the increase in costs due to the removal of a break point can no longer be compensated by the reduction of the penalty, and the algorithm terminates [5]. After completing the heuristic segmentation, standard clustering approaches such as k-means or Gaussian mixture models can be applied (in combination with DTW) to sort the time series snippets into equal groups [8]. These approaches work similarly to parts of the TICC algorithm and are therefore relevant to the depictions in the following chapter, as indicated by the '*' in Figure 4. Toeplitz Inverse-Covariance Clustering (TICC) In contrast to the heuristic approaches that start by partitioning the signal first and then conduct the clustering, the alternative method we explored to tackle the problem is the algorithm TICC, which approaches the problem by simultaneously segmenting and clustering the multivariate time series through the use of Markov random fields (MRFs). The reason for using Markov networks is that they capture relationships between variables generally much more strongly than simpler correlation-based models. Thereby, each cluster is defined as a dependency network, which takes into account both the interdependencies of all dimensions (features) of an observation x_t ∈ R^m and the dependency on the neighboring points ..., x_{t−2}, x_{t−1}, x_{t+1}, x_{t+2}, .... Each cluster network can be described as a multilayer undirected graph, where the vertices represent the random variables and the edges indicate a certain dependency among them [3]. The layers of each network correspond to the number of consecutive observations (instances) and therefore to the receptive field of the model (see Figure 9). Another important differentiating quality of MRFs is that, under certain criteria, they can be used to mimic a multivariate Gaussian model with respect to the graphical representation of the multivariate distribution. Since this applies in our case, we can use the precision matrix (inverse covariance matrix) as a formal representation of the graphical structure, where a zero (conditional independence) in the matrix corresponds to a missing edge between two variables.
The benefits of using the precision matrix Θ over the covariance matrix are manifold and include computational advantages as a result of its tendency to be sparse. Furthermore, the sparsity of the MRFs also prevents overfitting and therefore increases the robustness of the overall approach [3,24]. Problem Formulation In addition to the previously established notation, we need to introduce a few modifications for the formulation of the TICC problem. First, we are interested in a short subsequence (with w ≪ T), rather than just one point in time x_t. Therefore, to assign the data points to the corresponding clusters, we need to address those points as nw-dimensional vectors X_t = (x_{t−w+1}, ..., x_t). The number of clusters, which has to be defined beforehand, will be denoted as K ∈ N, and the time horizon of the short subsequence will be set by the window size w. A challenge of the TICC approach is to address the assignment problem of matching the data points to one of the K clusters and thus determining the assignment sets P = {P_1, ..., P_K} with P_i ⊂ {1, 2, ..., T}. Furthermore, the algorithm must update the cluster parameters Θ = {Θ_1, ..., Θ_K} based on the previously calculated assignment mappings [3]. The overall optimization problem can be formulated as: minimize over Θ ∈ T and P the sum over all clusters i = 1, ..., K of ‖λ ∘ Θ_i‖_1 + Σ_{X_t ∈ P_i} ( −ℓℓ(X_t, Θ_i) + β·1{X_{t−1} ∉ P_i} ) (5). This entire problem is called Toeplitz inverse covariance-based clustering (TICC). T in the formula reflects the set of symmetric block Toeplitz nw × nw matrices, which adds an additional constraint for the construction of the MRFs. The enforced structure ensures the time-invariant property of each cluster, so that the cluster assignments do not depend on the exact starting time of the respective events. This implies that any edge between two layers l and l + 1 must also exist for the layers l + 1 and l + 2. The first expression, ‖λ ∘ Θ_i‖_1, represents an additional sparsity constraint based on the Hadamard product of the inverse covariance matrix with the regularization parameter λ ∈ R^{nw×nw}. The second part of the formula, the log likelihood ℓℓ(X_t, Θ_i), states the core optimization problem of cluster parameter fitting given assignment set P_i. The last part of the overall problem addresses the desired property of temporal consistency. By adding a penalty β each time two consecutive observations are assigned to two different clusters (X_{t−1} ∉ P_i), the overall algorithm is incentivized to keep the number of such instances to a minimum [3]. The method used to handle this complex problem can be described as a variation of the expectation maximization (EM) algorithm, which is commonly used to solve related clustering issues by applying Gaussian mixture models [25]. Cluster Assignment (Expectation) Similar to the expectation step in the common EM algorithm, TICC starts with the assignment of observations to the given clusters. However, to begin this process we need a prior distribution to which the data points can be assigned. Hence, the overall TICC algorithm is initialized by conducting a k-means clustering to calculate the initial cluster parameters Θ.
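The initialization step described above, k-means over the stacked short subsequences X_t, could be sketched roughly as follows; the window size w and the number of clusters K are placeholders, not values reported in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.load("X_reduced.npy")           # hypothetical (T, m) multivariate signal
w, K = 5, 4                            # placeholder window size and number of clusters

# Stack each observation with its w-1 predecessors into one m*w dimensional vector X_t.
T, m = X.shape
subsequences = np.stack([X[t - w + 1 : t + 1].ravel() for t in range(w - 1, T)])

# Initial cluster assignments via plain k-means, as in the TICC initialisation step.
initial_assignments = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(subsequences)
print(subsequences.shape, np.bincount(initial_assignments))
```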
After the initialization phase, the points are assigned to the most suitable cluster by fixing the values of the precision matrices Θ [3]. This leaves us with the following combinatorial optimization problem for P: minimize over P the sum over all clusters i = 1, ..., K of Σ_{X_t ∈ P_i} ( −ℓℓ(X_t, Θ_i) + β·1{X_{t−1} ∉ P_i} ) (6). The sparsity constraint is not directly relevant for this step, since the shape of the precision matrix Θ is automatically fixed together with its values. Thus, the resulting constant term of the Hadamard product can be neglected. Each of the short subsequences X_t is therefore primarily assigned to one of the K clusters based on its maximum likelihood under the constraint of temporal consistency (minimizing the number of "break points"). The problem (6) is subsequently solved by harnessing the dynamic programming approach of the Viterbi algorithm [26]. The problem is translated into a graphical representation (Figure 10), which represents each decision (per instance) the algorithm can make to assign the data points consecutively [3]. Subsequently, the Viterbi algorithm returns the minimum cost path (Viterbi path) by calculating the total cost values of each vertex recursively while remembering the minimum cost path simultaneously [26]. Updating Cluster Parameters (Maximization) The maximization step of the EM algorithm concerns the updating of the inverse covariance matrices of the clusters once the assignments of the E-step have been given. For this step, the assignment sets P are frozen and the precision matrices Θ_i are adjusted based on the corresponding observations. Similar to the assignment problem before, the overall TICC problem (5) can be simplified for this step, since the temporal consistency term is strongly tied to the assignment of the data points.
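Before turning to the maximization step, the E-step assignment described above can be illustrated with a compact Viterbi-style dynamic program; this is only a sketch of the idea, using random placeholder likelihoods, and is not the authors' implementation.

```python
import numpy as np

def assign_clusters(neg_log_likelihood, beta):
    """Viterbi-style assignment: neg_log_likelihood has shape (T, K); beta penalises
    every switch between consecutive cluster assignments."""
    T, K = neg_log_likelihood.shape
    cost = neg_log_likelihood[0].copy()
    backpointer = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # Cost of arriving in cluster j at time t from cluster i at time t-1.
        transition = cost[:, None] + beta * (1 - np.eye(K))
        backpointer[t] = transition.argmin(axis=0)
        cost = transition.min(axis=0) + neg_log_likelihood[t]
    # Backtrack the minimum-cost (Viterbi) path.
    path = np.empty(T, dtype=int)
    path[-1] = cost.argmin()
    for t in range(T - 1, 0, -1):
        path[t - 1] = backpointer[t, path[t]]
    return path

# Toy usage with random placeholder likelihoods.
rng = np.random.default_rng(0)
nll = rng.random((200, 4))
print(assign_clusters(nll, beta=0.5)[:20])
```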
Updating Cluster Parameters (Maximization) The maximization step of the EM algorithm concerns the updating of the inverse covariance matrices of the clusters once the assignments of the E-step have been given. For this step, the assignment sets P are frozen and the precision matrices Θ i are adjusted based on the corresponding observations. Similar to the assignment problem before, the overall TICC problem (5) can be simplified for this step, since the temporal consistency is strongly tied to the assignment of the data points. As seen before, the whole term can therefore be dropped while solving the newly arising subproblem (7), which is expressed in the following formula [3]: Another relevant property of this optimization function is that the updating problem for each cluster can be calculated independently, since there is no reliance on previous or cross-connected results of other clusters as in problem (6). Therefore, all updates can be done in parallel rather than sequentially [3]. In order to bring the overall problem into an easier to handle form, we need to rearrange the previously established formula. The negative log likelihood can be expressed as follows: Thereby, |P i | stands for the number of observations assigned to each cluster and det for the determinant of the matrix Θ i . The term tr(·) is the abbreviation for the trace of a quadratic matrix. The included value S i stands for the empirical covariance of the random sample, which is often used to approximate the true covariance matrix Σ of a stochastic problem [27]. The remaining term C represents all other constant (variable-independent) terms of the problem, which can also be neglected according to the argumentation above. We can now substitute the log likelihood expression of the initial updating problem (7) with the new representation in (8) and ultimately (after a few adjustments) reach the following identical optimization problem for each cluster (9) [3]: Note that the coefficient of the sparsity constraint is constant and can therefore be incorporated into the regularization parameter λ by scaling the values accordingly. For further simplification we will hence remove the coefficient and furthermore neglect the indices of the variables and parameters to highlight the independence of each subproblem from the others [3]. Problems of this kind can be solved time-efficiently by using the alternating direction method of multipliers (ADMM). This algorithm, aimed at solving convex problems, has consistently shown promising results and performs especially well on large-scale tasks [28]. However, in order to make this approach applicable, we need to reformulate our problem once more to align it with the ADMM requirements. Therefore, a consensus variable Z is introduced to allow the optimization function to be split into two (variable-)independent target functions [3]. Similar to the EM algorithm, ADMM approximates the optimal result of the primal optimization problem in an alternating manner, by updating one variable while the others are fixed. For a sufficiently large number of iterations, the solution of the algorithm converges to the optimal solution of problem (9). In practice, the method is stopped when the residual variables reach values close to zero (the equality constraint is almost met), indicating a nearly feasible solution [3,28]. These two alternating steps of data point (to cluster) assignment and updating of the cluster parameters repeat until the cluster assignments become repetitive (stationary) and the overall TICC algorithm terminates [3]. Finally, the high-level outline for the TICC method is provided in the following Algorithm 1.
Algorithm 1: TICC (high-level)
Initialization: cluster parameters Θ, cluster assignments P
repeat
    E-step: assign points to clusters → P
    M-step: update cluster parameters → Θ
until stationarity
return Θ, P
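To connect the pieces, here is a compact sketch of the overall alternating loop, reusing the `log_likelihood` and `assign_clusters` helpers from the previous sketches. scikit-learn's GraphicalLasso is used only as a convenient stand-in for the M-step; unlike the ADMM solver of the original method, it does not enforce the block-Toeplitz constraint, so this is an approximation for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.covariance import GraphicalLasso

def ticc_sketch(Xw, K, lam, beta, max_iter=20):
    """Simplified EM-style loop: k-means initialisation, Viterbi assignment (E-step),
    sparse precision estimation per cluster (M-step).  Assumes every cluster keeps
    enough windows for the covariance estimate to be well defined."""
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(Xw)
    thetas, mus = [], []
    for _ in range(max_iter):
        thetas, mus = [], []
        for k in range(K):
            pts = Xw[labels == k]
            gl = GraphicalLasso(alpha=lam).fit(pts)     # stand-in for the Toeplitz ADMM update
            mus.append(pts.mean(axis=0))
            thetas.append(gl.precision_)
        neg_ll = np.column_stack(
            [-log_likelihood(Xw, th, mu) for th, mu in zip(thetas, mus)])
        new_labels = assign_clusters(neg_ll, beta)
        if np.array_equal(new_labels, labels):           # stationarity reached
            break
        labels = new_labels
    return labels, thetas
```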
Results In this chapter, the results of the two alternative analytical pathways are presented. For the comparison of the two pathways, an evaluation systematic is introduced that allows for a problem-centric comparison of the results. The performance evaluation of the individual approaches and their corresponding configuration, consisting of search method and cost function, is implemented by applying a variety of established and commonly accepted evaluation metrics from different fields of machine learning, which are adjusted for the segmentation problem. The first metric is the Hausdorff measure, which measures the greatest temporal distance between a true change point and a predicted change point. It therefore measures the accuracy of the approximated break points [5,29]. The second measure is the Rand index, which essentially measures the number of agreements between two segmentations. It is commonly used for the evaluation of clustering performance and is also related to the (mathematical) accuracy [5]. The final evaluation metric, which is commonly used in the field of classification, is the F1-score. It is a combination of the precision (reliability) and recall (completeness) of a classification. For the F1-score calculation, no element is prioritized over another, and the total value is therefore given as the harmonic mean of both measure components [5]. The true change points that are required as input for the evaluation metrics were determined manually, leveraging domain experts' input. The first aspect of the experimental results deals with the problem of choosing a proper cost function in order to identify homogeneous segments. Therefore, a variety of different cost functions have been applied to the dataset by using the sliding window approach discussed earlier. The results are displayed in Figure 11.
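A minimal sketch of the three evaluation measures as applied to change-point sets is given below; the margin-based matching in the F1-score and the segment labelling in the Rand index follow common conventions for segmentation evaluation and may differ in detail from the exact implementation used in the paper.

```python
import numpy as np

def hausdorff(true_cp, pred_cp):
    """Greatest temporal distance between a true and a predicted change point."""
    t, p = np.asarray(true_cp), np.asarray(pred_cp)
    d_tp = max(np.min(np.abs(p - c)) for c in t)
    d_pt = max(np.min(np.abs(t - c)) for c in p)
    return max(d_tp, d_pt)

def f1_score(true_cp, pred_cp, margin=10):
    """F1 where a prediction counts as a hit if it lies within `margin` samples
    of a not-yet-matched true change point."""
    matched, tp = set(), 0
    for c in pred_cp:
        hits = [i for i, t in enumerate(true_cp)
                if abs(t - c) <= margin and i not in matched]
        if hits:
            matched.add(hits[0])
            tp += 1
    precision = tp / len(pred_cp) if len(pred_cp) else 0.0
    recall = tp / len(true_cp) if len(true_cp) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def rand_index(true_cp, pred_cp, T):
    """Fraction of sample pairs on which the two segmentations agree (O(T^2) memory,
    fine for moderate series lengths)."""
    def labels(cps):
        lab, seg, prev = np.empty(T, dtype=int), 0, 0
        for c in list(cps) + [T]:
            lab[prev:c] = seg
            seg, prev = seg + 1, c
        return lab
    a, b = labels(true_cp), labels(pred_cp)
    agree = (a[:, None] == a[None, :]) == (b[:, None] == b[None, :])
    iu = np.triu_indices(T, k=1)
    return agree[iu].mean()
```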
Since both the Rand index and the F1-score are measures in the range [0, 1], the Hausdorff metric is scaled accordingly. This is done by measuring the pairwise percentage difference of a given cost function in comparison to the worst result of the whole set of cost functions. Therefore, the approach with the worst Hausdorff metric always scores a zero, while the other scores display the percentage superiority of each approach in comparison to that. For the analytics problem used in this paper, we compared four different cost functions in total. The findings displayed in Figure 11 show mixed results. While the L2 function scores superior in terms of temporal distance and recall, it is inferior with regard to precision and ultimately in the total F1-score. The L1 cost function, on the other hand, scores poorly in terms of the Hausdorff metric (L2 is relatively more than 60% better) but dominates the other approach in terms of precision and consequently scores best in terms of the F1-score. The two remaining cost functions remain unremarkable, since they consistently score significantly worse than the leading approach for each metric category, except for the recall, where all of the approaches are close. In the end, the classical L2 cost function was chosen as the benchmark for the comparison of the pathway methods based on its superiority in two out of three measures (Hausdorff and Rand index), and it was also used in the following analysis and discussion of the results. The second area of investigation is the comparison of the different searching methods with each other. The TICC algorithm was also included in this comparison by dropping the cluster labels from the results and therefore turning the multi-cluster output into a binary clustering ("steady mode" and "event"). The results are presented in Figure 12. The most striking result of the experiment is the superiority of the TICC approach in comparison to all heuristic methods. Even the best heuristic approach, namely the sliding window, which consistently outperformed the other two approaches except for the recall, is surpassed by the TICC algorithm remarkably. The pattern recognition approach provided by TICC identified a total of 32 events (disturbances of the steady state), which in turn were broken down into four different clusters (patterns). Furthermore, it revealed the existence of not one but two steady states of the machine, which could not be identified by solely monitoring the temperature zones. The remaining three clusters were identified as actual disturbances of the drying process. An overview over all major events is given in the following Figure 13.
The pattern in the top left of the diagram represents a section of the steady state (cluster A) of the machine and depicts the natural fluctuations, especially of the sixth temperature zone (material input). The first disturbance of this natural state can be seen in the bottom left of the diagram and will be referred to as cluster C or "canyon". The disturbance occurred in total 10 times during the whole monitoring period, with a mean duration of about 3000 min, which corresponds to approximately 46 h from start to finish. The shortest duration of a cluster C event was 1530 min (25.5 h) and the longest duration was 3210 min (53.5 h). The second recorded disturbance can be seen in the bottom right and is referred to as cluster D or "shark fin". It appeared in total six times and was more variable in terms of its durations, varying from approximately 1500 min (25 h) to 7300 min (121 h). The duration with the highest density (3 of 6) was thereby 1500 min (median). The last common disturbance can be seen in the top right of the diagram and is referred to as cluster B or "random spike". This disturbance appears less systematic than the events presented before, since its durations range from 50 min to 400 min, which is notably shorter than the other common disturbance durations identified. The shape of the events of cluster B, which appeared a total of nine times, can also be seen to be less homogeneous in comparison to clusters C and D. However, in addition to the correct assignments the algorithm has made automatically, there have been some events which raised doubt in terms of their affiliation to the assigned failure clusters. Those irregularities are presented in the following Figure 14. The top left of the diagram shows the steady state as a comparison. The other three events each show a characteristic pattern but were still all assigned to cluster C ("canyon"). However, in a later step of the manual analysis process they were found to be significantly different from the corresponding events in cluster C and were therefore declared related subclusters. Each of the subclusters occurred a total of two times within the entire dataset. Discussion This paper presents a functional framework to automatically analyze multivariate time series data in order to identify reoccurring patterns within a manufacturing setting. In doing so, we have tested two different pathways to approach the pattern recognition problem in time series applications and thereby displayed their strengths and weaknesses. As the data sets and their requirements vary significantly even within the manufacturing domain, the option to quickly apply and evaluate different approaches is a key feature of a robust framework. However, to truly reflect the full wealth of different data sets and time series, additional alternatives might have to be included in the future. Nevertheless, the process and evaluation enable the scalability of the presented framework in principle. We have shown that a two-step approach of using a segmentation before clustering the data appears to be inferior to a collaborative method. The obtained results from our manufacturing time series data set have shown that even the first step of the sequential approach (heuristic segmentation) already returned a worse performance for all possible searching methods when compared to the TICC approach in this case.
This can be observed in Figure 12 of the results, since the TICC algorithm scored significantly higher in almost all statistical evaluation metrics. The only indicator where TICC scored below the alternative BU and TD approaches was the recall metric. This can, however, be easily explained in this context by the low precision scores of both heuristic methods, since they have a tendency to oversegment the data. A rather fine partitioning consequently increases the probability that the true change points will be among the predicted set, but also increases the number of false (unnecessary) break points, reducing the precision drastically. The better performance of the TICC approach also cannot be explained by an improper cost function, since we vetted various homogeneity metrics in order to identify the most suitable cost function for the given dataset (see Figure 11). All in all, the findings indicate that the second processing pathway is the superior approach in this application case. We have also shown that the TICC-featuring pathway is capable of returning strong results, since all three major events in the time series have been found and furthermore assigned to different clusters. Even though it was not able to recognize the events displayed in Figure 14, this can be explained through the low frequency of those disturbances and therefore a lack of significance for defining a new cluster based on only a few available examples. A further dissection of clusters C and D suggests that those clusters correspond to the real-world activity of shutting the machine down for the weekend. National holidays were also taken into consideration and made it possible to explain a significant number of events that took place over the course of a day or two. Furthermore, the differences between clusters C and D might be the result of the difference between a rapid machine shutdown, where the material is removed immediately, and a cooling process, where the material remained in the machine. This could also indicate an improper shutdown, where the material was supposed to be removed for the weekend break but was forgotten inside the dryer. The events assigned to cluster B (random spikes) could, however, not be tied to related real-world events at this point and most likely correspond to random disturbances caused, for example, by improper handling of the machine or related short-term issues. However, the insights from the analysis can now be used to investigate the root cause and then return with a correct label for these events in the future, after further consultation with domain experts. Like most analytics projects, the continuous cycle is key for a long-term sustainable solution and for providing real value to the application area. All in all, our results show that the multivariate clustering approach displayed in the second pathway is able to return strong results for the time series data set in this case. It is safe to say that the new framework can identify common events, which in some cases correspond with relevant sources of failure. Therefore, these insights can subsequently be used to implement a near real-time warning system (or as input for a more sophisticated predictive maintenance system) that is capable of not only identifying and correctly clustering disturbances, but also tying them to a real-world activity.
Conclusions and Future Work This paper presents a framework to utilize multivariate time series data to automatically identify reoccurring failure patterns in a real-world smart manufacturing system. To do so, the mixed binary (on/off) and continuous (temperature) data provided by the monitoring system of an industrial drying hopper was analyzed following two different processing pathways. The first approach was built up using a sequential segmentation and clustering approach; the second branch featured a collaborative method at its core (TICC). Both pathways were facilitated by a standard PCA analysis (feature fusion) and a hyperparameter optimization (TPE) approach. The second procedure featuring TICC returned consistently superior results in comparison to all heuristic segmentation methods for the available time series data set. It was therefore expanded and finally enabled the recognition of the three major (most frequent) failure patterns in the time series ("canyon", "shark fin", and "random spike"). Furthermore, it was also able to recognize all disruptions of the steady state but failed to identify all of the less frequent failure patterns as stand-alone clusters. Besides evaluating the statistical accuracy, we furthermore leveraged domain expertise to verify the results and were, e.g., able to identify one event (canyon) as a shutdown process that aligns with weekends and holidays. Nevertheless, the identified clusters can be used subsequently to enhance the monitoring process of the drying machine even further and to establish a predictive maintenance system in order to foresee potential upcoming machine downtimes. This framework can also be applied to other (related) problems in the field of preventive maintenance and pattern recognition in time series in general. Industrial monitoring and maintenance systems can therefore be extended by adding a failure detection component to further enhance the speed of the machine restoration process. Such algorithms can also be used to automatically analyze large amounts of data generated by the machinery pool, thereby reducing the manual workload necessary for this process. The presented framework can contribute to reducing unnecessary machine downtimes in the future, improve the troubleshooting process in case a failure occurs, and consequently increase the effective machine running time and overall productivity of the plant. An additional advantage from a managerial point of view is the possibility to automate the process and the improved transparency of the production process. However, there are also ethical implications that need to be considered, aside from the overall impact of the presented research findings on the business process and workflow. On the one hand, the increase in efficiency is not only favorable for the business objectives overall due to the potential cost reduction, but also for society in general, given the potential environmental benefits (e.g., reduction in CO2 emissions and overall waste of energy) and benefits for the working atmosphere (e.g., reduction of unnecessary stress from unplanned machine failures). On the other hand, those techniques may also present an opportunity for abuse when other types of time series data, such as personal data of workers, are analyzed. For instance, biometric and behavioral data can be monitored and analyzed automatically, potentially leading to issues with regard to privacy and other negative implications for workers.
Thus, it is necessary to investigate policy implications hindering inappropriate applications of such time series analytics that conflict with individuals' privacy rights. The presented process and analytical model are influenced by design choices and are thus to a certain extent limited. Since the whole performance measuring process (correct number of clusters) depends on the manual determination of the true change points and clusters (or subclusters), it reflects a certain subjectivity and dependence on domain knowledge and metadata. Furthermore, the overall clustering approach is initialized through a distance-based algorithm (k-means), which makes it biased in terms of its priors and the subsequent estimation of the Gaussian mixture distributions. The effectiveness of the proposed framework also cannot be guaranteed for the entirety of industrial domains, since it was only tested on one specific dataset from a particular field. In order to address these challenges, future research should focus on follow-up studies to improve the framework with respect to the algorithmic initialization and to study the applicability of the underlying concept to similar use cases in the industrial domain. Additionally, comparing the methodology to other approaches in further studies using a comparable data set will increase the transparency and understanding of its value.
2020-09-10T10:24:31.824Z
2020-09-05T00:00:00.000
{ "year": 2020, "sha1": "7743d83ce05d1600783571e6bb851b66762c37da", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2504-4494/4/3/88/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e6407822781b75d37967b47761464ac1014131a8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256688151
pes2o/s2orc
v3-fos-license
Neutralizing activity of BBIBP-CorV vaccine-elicited sera against Beta, Delta and other SARS-CoV-2 variants of concern The global pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has resulted in the generation of variants that may diminish host immune responses to vaccine formulations. Here, in a registered observational clinical trial (NCT04795414), we assess the safety and immunogenicity of the inactivated SARS-CoV-2 vaccine BBIBP-CorV in a cohort of 1006 vaccine recipients. No serious adverse events are observed during the term of the study. Detectable virus-specific antibody is measured and determined to be neutralizing in 698/760 (91.84%) vaccine recipients on day 28 post second vaccine dose and in 220/581 (37.87%) vaccine recipients on day 180 post second vaccine dose, whereas vaccine-elicited sera show varying degrees of reduction in neutralization against a range of key SARS-CoV-2 variants, including variants Alpha, Beta, Gamma, Iota, and Delta. Our work shows diminished neutralization potency against multiple variants in vaccine-elicited sera, which indicates the potential need for additional boost vaccinations. Variants of SARS-CoV-2 present the potential for differential response and performance to delivered vaccine regimens. Here the authors characterise the neutralising antibody response to the inactivated SARS-CoV-2 vaccine BBIBP-CorV and assess functionality against a range of key SARS-CoV-2 variants. Given the unprecedented morbidity of the coronavirus disease 2019 (COVID-19), the efficacy of different vaccines needs to be assessed across diverse populations. The absence of immunity in the population causes susceptible people to be vulnerable to further waves of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, and healthcare workers are at a particularly high risk of infection. Sustained progress has been made in the development of SARS-CoV-2 vaccines, including inactivated vaccines 1-3, mRNA vaccines 4,5, adenovirus-vectored vaccines 6-8, and recombinant protein subunit vaccines 9, that are safe and exhibit immunogenicity against SARS-CoV-2. The inactivated vaccine BBIBP-CorV developed by Sinopharm, approved by the World Health Organization (WHO) for emergency use, is safe and well-tolerated in healthy people, and can induce high levels of neutralizing antibody titers to protect against SARS-CoV-2 1. However, whether this vaccine could produce long-term protection is still under investigation. The newly emerged SARS-CoV-2 variants of concern (VOC) and variants of interest (VOI), including Alpha (lineage B.1.1.7, first detected in the United Kingdom) 10, Beta (lineage B.1.351, first identified in South Africa) 11, Gamma (lineage P.1, initially expanded in Brazil) 12, and Iota (lineage B.1.526, largely found in South America) 13, are reportedly transmitted more efficiently and rapidly worldwide 14. These variants contain mutations, such as N501Y and E484K in the receptor-binding domain (RBD) of spike glycoproteins, which are important for angiotensin-converting enzyme 2 (ACE2) binding and antibody recognition 15. The recently emerged, highly transmissible Delta VOC (lineage B.1.617.2, first detected in India) shows potential for immune escape 16 and the ability to evade vaccines 17. Consequently, there is now great concern regarding the vaccine efficacy against these resistant variants.
Here, we report the safety and immunogenicity of an inactivated SARS-CoV-2 vaccine BBIBP-CorV and assess the 6-month durability of the humoral immune response in vaccine recipients, particularly evaluate the effect of multiple SARS-CoV-2 variants on vaccine-elicited neutralization. In brief, the BBIBP-CorV vaccine is safe and can effectively induce humoral responses in vaccine recipients. Neutralizing antibodies persist in 220/581 (37.87%) vaccine recipients 180 days after the second dose. Diminished neutralization potency against multiple variants is observed, indicating the potential need for additional boost vaccinations. Results Study participants. Between January 14, 2021 and March 10, 2021, a total of 1006 healthcare workers in Shanghai Ruijin Hospital were recruited in this study. Figure 1 shows an overview of this study with the key time points and sample sizes at each time point. Among 1006 vaccine recipients, 284 were male and 722 were female, with a median age of 35.00 (28.00-43.00) years, and a total of 169 (16.80%) participants had at least one underlying disease. In addition, we included a panel of 571 naive individuals to ensure the accuracy of the specific antibody immunoassay and a panel of 16 COVID-19 recovered patients for the neutralization assay. The baseline characteristics of the study participants are shown in Table 1. Safety outcomes. To date, no serious adverse events have been reported in this study. All adverse reactions were mild or moderate in severity and most cases were resolved by day seven after vaccination. A total of 447 (44.43%) of 1006 vaccine recipients experienced at least one adverse reaction after either dose. Common adverse reactions were reported more frequently after the second dose than after the first dose ( Table 2). The overall incidence of adverse reactions was 308 (30.62%) after the second dose and 241 (23.96%) after the first dose. At least one local adverse reaction occurred after either dose in 258 (25.65%) of the 1006 vaccine recipients. The proportion of vaccine recipients who reported local adverse reactions increased after the second dose. Pain at the injection site was the most common local adverse reaction, which was reported by 231 (22.96%) participants, and was reported more frequently after the second dose (160 [15.90%] Clinical laboratory measurements revealed a few mild to moderate transient abnormalities. After the first dose, 39 (3.88%) vaccine recipients had decreased hemoglobin, 51 (5.07%) had an increased white blood cell count, two (0.20%) had an increased lymphocyte count, 14 (1.39%) had an increased neutrophils count, 31 (3.08%) showed increased alanine aminotransferase levels, 49 (4.87%) showed increased aspartate aminotransferase levels, 14 (1.39%) had increased serum total bilirubin levels, 40 (3.98%) had increased blood urea nitrogen levels, and two (0.20%) had increased creatinine levels. After the second dose, 27 (2.68%) vaccine recipients had decreased hemoglobin, 46 (4.57%) had an increased white blood cell count, four (0.40%) had an increased lymphocyte count, 15 (1.49%) had increased neutrophils count, 30 (2.98%) showed increased alanine aminotransferase levels, 35 (3.48%) showed increased aspartate aminotransferase levels, two (0.20%) had increased serum total bilirubin levels, 27 (2.68%) had increased blood urea nitrogen levels, and four (0.40%) had increased creatinine levels. None of the post-vaccination abnormalities were considered clinically significant. Immunogenicity responses. 
Immunological analyses were performed among individuals from whom blood samples were collected at each time point. The number, sex and age of participants at each time point are shown in Supplementary Table 1. Specific antibodies against SARS-CoV-2 were also assessed in a panel of 571 naive individuals. Among them, four had relatively low antibody titers (1.12, 1.52, 1.94, and 2.26, respectively). Rapid antibody responses to SARS-CoV-2 were observed in 609 (63.17%) of 964 individuals from whom blood samples were collected on day 21 after the first dose (V3); the median antibody level was 5.32 (2.33-13.35) (Fig. 2a). Enhanced specific antibody responses against SARS-CoV-2 were detected in 731 (96.18%) of 760 vaccine recipients who had blood samples taken on day 28 after the second dose (V4), with a median antibody level of 33.96 (12.56-82.04) (Fig. 2a), which was an increase relative to the antibody responses on day 21 after the first dose. The seroconversion rate of neutralizing antibodies against the wild-type strain was 698 (91.84%) of 760 individuals, and the geometric mean titer (GMT) was 62.68 (95% confidence interval [CI] 57.02-68.91) (Fig. 2b). Sex was not a factor affecting the induction of neutralizing antibodies after the second dose. Vaccine recipients with seroconversion of neutralizing antibodies on day 28 after the second dose were significantly younger than those without seroconversion (median age: 36) (Fig. 3a). In addition, 114 (24.26%) and 99 (21.06%) of the 470 vaccine-elicited sera showed a complete loss of neutralizing activity against the Gamma and Iota variants, respectively. The GMT against the Gamma variant decreased by 1.9-fold to 37.07 (95% CI 32.84-41.84), whereas a marked decrease by 3.8-fold was observed in the GMT against the Iota variant.
[Fig. 1 Study profile. *Participants who were administered the vaccination and completed all safety visits, but did not have blood samples taken upon personal request. † 290 participants who showed negative neutralizing activity against the wild-type strain, or who refused to undergo testing with the neutralization assay against multiple variants, on day 28 after the second dose were excluded. ‡ 361 participants with negative neutralizing activity against the wild-type strain on day 180 after the second dose were excluded.]
We further analyzed the cross-reactivity of neutralizing antibodies against the four variants. As shown in Fig. 4, on day 28 after the second dose, a total of 163 (34.68%) of the 470 vaccine-elicited sera preserved neutralizing activity against all four variants, including Alpha, Beta, Gamma, and Iota. Only 14 (2.98%) of the 470 vaccine-elicited sera did not induce neutralizing antibodies against any of these four variants. Among 199 (42.34%) of the 470 vaccine-elicited sera with positive neutralizing antibodies against the Beta variant, only one had negative neutralizing activity against the Alpha, Gamma, and Iota variants. We next assessed the durability of the humoral immune response in 581 participants who were followed up on day 180 after the second dose (V5). Specific antibodies against SARS-CoV-2 could still be detected in 500 (86.06%) of the 581 vaccine recipients, although the median antibody level decreased to 6.08 (2.87-14.86) (Fig. 2a). However, significantly fewer participants had quantifiable neutralizing antibodies on day 180 after the second dose, comprising 220 (37.87%) of the 581 individuals, and the GMT decreased to 40.84 (95% CI 33.73-49.45) (Fig. 2b).
The neutralization assay was also performed against the currently most prevalent variant Delta in 220 participants who showed positive neutralizing activity against the wild-type strain. Neutralizing activity against the Delta variant remained detected in 96 (43.64%) of 220 participants, and the GMT showed a 2.6fold reduction (15.93 [95% CI 12.72-19.94]) compared with that of the wild-type strain (Fig. 3b). In comparison, a panel of 16 convalescent sera collected at sixth-month post symptom onset from COVID-19 recovered patients revealed a long-lasting humoral immune response, with a median SARS-CoV-2 specific antibody level of 246.80 (89.09-366.14) (Fig. 2a). Moreover, neutralizing antibody responses were detected in all convalescent patients, and the GMT was 404.06 (95% CI 250.74-651.12), which was significantly higher than that in the vaccine recipients on day 28 after the second dose (Fig. 2b). All 16 (100%) convalescent sera were able to neutralize the Alpha, Delta, and Iota variants, with varying degrees of reduction in neutralization. Compared with the wildtype strain, variant Alpha, Iota, and Delta were 1.9-fold, 2.3-fold, and 3.4-fold less sensitive to sera, respectively (Fig. 3c). We found that two (12.50%) and one (6.25%) of the 16 convalescent sera had completely lost neutralizing activity against the Beta and Gamma variants, respectively. Compared with the wild-type strain, the GMT decreased by 4.1-fold and by 2.2-fold against the Beta and Gamma variants, respectively, in convalescent sera. Cytokine responses. Dynamic changes in several key inflammatory cytokines, including interferon-γ (IFN-γ), interleukin (IL)-10, IL-12p70, IL-13, IL-2, IL-6, IL-8, and tumor necrosis factor-α (TNF-α), were tested in serum at different time points. The levels of some cytokines showed notable changes from the first dose through 28 days after the second dose (Fig. 5). On day 21 after the first dose and on day 28 after the second dose, the levels of IFN-γ, IL-10, and IL-13 were significantly higher than their levels on the day of the first dose. The levels of IL-8 and TNF-α showed significant increases on day 21 after the first dose, followed by significant decreases on day 28 after the second dose. The cytokine responses were lower among participants who did not successfully induce neutralizing antibodies against SARS-CoV-2, including the levels of IFN-γ and TNF-α on day 28 after the second dose, and the level of IL-12p70 on the day of the first dose (Fig. 6). Discussion In this study, the inactivated vaccine BBIBP-CorV was safe and well-tolerated in participants. No serious adverse events have been reported to date, and all the adverse reactions were mild or moderate. The most frequently reported local adverse reaction was pain at the injection site, and the most common systematic adverse reaction was fatigue. Clinical laboratory measurements revealed a few mild to moderate transient abnormalities, but none of the post-vaccination abnormalities were considered clinically significant. The specific antibody assay against SARS-CoV-2 was performed to assess the humoral immune responses. To ensure the accuracy of this assay, we included a panel of 571 naive individuals with neither prior COVID-19 symptoms nor a history of SARS-CoV-2 vaccination. Among them, four had relatively low antibody titers, which may be due to the false-positive results caused by non-specific endogenous or exogenous interferents. 
However, the possibility of subclinical infection cannot be ruled out, despite the effective intervention and control of the COVID-19 pandemic in Shanghai, China. The neutralization assay using lentivirus-based SARS-CoV-2 pseudoviruses was performed to assess the resistance of multiple SARS-CoV-2 variants. The traditional neutralization assay for the SARS-CoV-2 vaccines using the isolated live virus must be performed at biosafety level 3 facilities, whereas the pseudotype virus-based neutralization assay against SARS-CoV-2 has been developed and can be handled in biosafety level 2 facilities 18 . A previous study revealed a high degree of concordance between the pseudotype neutralization assay and isolated live SARS-CoV-2 neutralization assay 19 , suggesting that the pseudotype neutralization assay can be used to evaluate the inhibitory effect of the vaccines on viral attachment and entry 20 . Therefore, we used a safer pseudovirus-based neutralization assay to evaluate neutralizing activity in this study. Neutralizing antibody responses against SARS-CoV-2 were successfully induced in 698 (91.84%) of the 760 individuals on day 28 after the second dose, which is lower than the previously reported seroconversion rate 1 . Although previous data indicated that factors such as age, sex, and the presence of a coexisting condition does not affect the efficacy of specific COVID-19 vaccine 4 , in this study, younger age was significantly related to the seroconversion of neutralizing antibodies. Healthcare workers tend to sleep less and have more irregular circadian rhythm than other populations, whether sleep pattern impact the vaccineelicited antibody response remains unclear. It was reported that sleep may boost virus-specific adaptive immunity and promote a cytokine milieu that supports the cellular response, additionally, lack of sleep during the night after vaccination was found to reduce the antibody response to hepatitis A, hepatitis B, and influenza vaccinations 21 . Thus, extending sleep duration at night after vaccination may result in a higher antibody response, and further studies are needed to provide more conclusive evidence on the production of neutralizing activity. The emergence of new SARS-CoV-2 variants has led to concerns regarding the potential of these variants to circumvent immunity elicited by natural infection or vaccination. In this study, we tested neutralization of BBIBP-CorV vaccine-elicited sera against a range of key SARS-CoV-2 variants, including variant Alpha, Beta, Gamma, Iota, and Delta. The Alpha variant, a highly transmissible variant of concern, consists of a series of mutations including N501Y, which is located in the receptorbinding domain of the spike protein 10 . Variants containing this mutation bind more tightly to the cellular receptor, angiotensinconverting enzyme 2 (ACE2) 22 . The Beta variant of concern carries the immune escape-associated mutation E484K 23 . Previous research showed that many highly neutralizing monoclonal antibodies, most convalescent sera, and mRNA-induced immune sera exhibited reduced inhibitory activity against viruses containing the E484K mutation 24 . The Gamma variant of concern 12 and Iota variant of interest 13 are both highly transmissible variants containing the E484K mutation. The newly identified variant of concern Delta contains diverse spike mutations, including mutation L452R 25 , which has shown resistance to some monoclonal antibodies and sera 26,27 . 
Recent reports have indicated that mRNA vaccine-elicited sera largely preserved neutralizing titers against the Alpha variant 28,29 . However, significant reductions in the titers of neutralizing antibodies against the Beta variant were observed in mRNA vaccine-elicited sera 30,31 . Our study provides data on the neutralizing activity against the Alpha, Beta, Gamma, and Iota variants on day 28 after the second dose in 470 BBIBP-CorV-elicited sera. In a substantial proportion of vaccine-elicited sera, a complete loss of neutralizing activity against the Alpha variant with a decrease in the GMT was observed, consistent with a previous study demonstrating a marked decrease in the GMT in neutralization of the Alpha variant 32 . In multiple studies, the Beta variant showed the strongest resistance to neutralization in convalescent or vaccinee sera 33 . According to a previous report that included 25 participants administered the BBIBP-CorV vaccine, 20 (80%) serum samples showed complete or partial loss of neutralization against the Beta variant 32 . Similarly, we found that the vaccine-elicited sera were significantly less effective in neutralizing the Beta variant. In addition, 114 (24.26%) and 99 (21.06%) vaccine-elicited sera showed a complete loss of neutralizing activity against the Gamma and Iota variants, respectively. As expected, more participants showed positive neutralizing activity against the Alpha variant than against the other three variants carrying the immune escape-associated mutation E484K, however, why the Beta variant showed stronger resistance than the Gamma and Iota variants remains unclear. Previous studies also showed that the Beta variant is more refractory to neutralization than the Gamma variant 31,34 . The possible explanation is that N-terminal domain (NTD) substitutions in the Beta variant contribute to neutralization escape 35 , whereas other undefined mutations in the Gamma and Iota We also tested the neutralizing activity against the currently most prevalent variant Delta on day 180 after the second dose in vaccine-elicited sera. It has been reported that sera from recovered patients and inactivated vaccine recipients showed significantly reduced neutralization against the Delta variant 36 , and lower neutralizing titers were observed against the Delta variant relative to the wild-type strain in vaccinated individuals 25,34,37 . In this study, significantly fewer participants had quantifiable neutralizing antibodies against the Delta variant, which is similar to previous data 38 . Most previous studies focused on the neutralizing activity of vaccinee sera soon after vaccination, thus, the durability of protection induced by the inactivated vaccine is currently unknown. mRNA vaccine-induced antibody activity remained high in 33 healthy adult participants 180 days after the second dose 39 . In our study, neutralizing antibodies elicited by the vaccine persisted for 6 months after the second dose in 220 (37.87%) of the 581 vaccine recipients, suggesting that a booster dose is needed to extend the duration of neutralizing activity against emerging variants. However, it should be noted that neutralization is a part of the humoral immune response, which does not account for all potentially protective vaccine responses. 
A recent report showed that the mRNA vaccine elicited a strong CD4 cytokine response involving type 1 helper T (Th1) cells 40,41 and a robust CD8 T cell response 42 , and natural SARS-CoV-2 infection may induce a memory B-cell response 43 , therefore, inactivated vaccination may induce efficient memory cellular responses despite waning neutralizing activity. This study also has several limitations. First, our study relies on lentivirus-based SARS-CoV-2 pseudoviruses, which can only model viral entry. Moreover, the contribution of additional mutations other than the spike protein to neutralization resistance in these variants cannot be confirmed. Second, our study population did not cover individuals from more susceptible groups in all ages. Caution should be taken when extrapolating our findings to adolescent, older adults, or people with preexisting diseases. Third, the cellular immunity was not comprehensively assessed in this study. Further work should provide greater insight into the role of cell-mediated immunity in vaccine efficacy against the emerging variants. Fourth, although the inactivated vaccine BBIBP-CorV elicited neutralizing antibody responses in the majority of participants, the real-world efficacy of the vaccine against the emerging variants remains to be determined. Fifth, new SARS-CoV-2 variants continuously emerge, the variants of current concern constantly change. Nevertheless, neutralization against a range of key variants of particular interest was assessed in this study. In conclusion, the data presented here contribute to the evidence of neutralization of the inactivated vaccine against known predominant SARS-CoV-2 variants. Overall, the inactivated SARS-CoV-2 vaccine BBIBP-CorV was safe and well-tolerated in the recruited healthcare workers, and rapid humoral responses were induced after the first dose vaccination. A total of 698 (91.84%) of the 760 participants had neutralizing antibodies against SARS-CoV-2 on day 28 after the second dose, and 220 (37.87%) showed preservation of the neutralizing activity on day 180 after the second dose. Diminished neutralization potency against multiple variants was also observed, suggesting the potential need for additional boost vaccinations. Methods Study design and participants. This study is registered with ClinicalTrials.gov, NCT04795414. The study protocol is available in the supplementary information file. The study protocol and informed consent were approved by the Ethics Committee of Shanghai Ruijin Hospital (RJHKY2021-12) in accordance with the Declaration of Helsinki and Good Clinical Practice. Written informed consent was obtained from all participants before the screening. From January 14, 2021 to March 10, 2021, healthcare workers in Shanghai Ruijin Hospital, aged 18-59 years, with negative serum specific antibodies against SARS-CoV-2 at the time of screening (V1), and willing to receive two doses, 21 days apart of inactivated SARS-CoV-2 vaccine (BBIBP-CorV, Sinopharm) were eligible participants and were recruited in this study. Blood samples were collected from the participants for serology tests on the day of the first dose (V2), on day 21 after the first dose (V3), and on day 28 after the second dose (V4), and on day 180 after the second dose (V5). Naive individuals were defined as those who had neither prior COVID-19 symptoms nor a history of SARS-CoV-2 vaccination. They were 18-59 years of age, and tested negative for SARS-CoV-2 viral nucleic acid. Study participants did not receive any compensation. 
Convalescent serum panel. A panel of 16 convalescent sera collected at the sixth month post symptom onset was selected by matching age from a cohort of 22 COVID-19 recovered patients in a published study 44. All individuals were 18-59 years of age, had previously tested positive for SARS-CoV-2 viral nucleic acid, were diagnosed with COVID-19, and had no history of SARS-CoV-2 vaccination at the time of sampling. This panel included six moderate cases, eight severe cases, and two critical cases. The sera were kindly provided by the First Affiliated Hospital of Bengbu Medical College (Anhui, China). Safety assessments. Each participant was informed of potential vaccine side effects and encouraged to report them. All relevant clinical data were collected from the hospital electronic system. Solicited local and systemic reactions were reported by the participants and recorded in an electronic diary; unsolicited adverse events and serious adverse events were assessed starting after administration of each dose. Laboratory safety measurements including hemoglobin, white blood cell count, lymphocyte count, neutrophils count, platelets, alanine aminotransferase, aspartate aminotransferase, serum total bilirubin, serum albumin, creatinine, and blood urea nitrogen levels were tested within 28 days post each dose. Immunogenicity assessments. Specific antibodies against SARS-CoV-2 were measured using a chemiluminescence kit manufactured by Wantai BioPharm (China). The antibody levels were expressed as the chemiluminescence signal according to the manufacturer's instructions. Pseudovirus-based neutralization assay. A pseudovirus-based neutralization assay was performed as previously described 18. The SARS-CoV-2 pseudoviruses tested in this study were manufactured by Vazyme Biotech Co., Ltd. (China). In brief, to assess the neutralization geometric mean titers (GMTs) of vaccine-elicited sera, six serial dilutions of heat-inactivated sera (in a four-fold step-wise manner) were incubated with 250 TCID 50 SARS-CoV-2 pseudoviruses per well for 1 h, together with the virus control and cell control wells, before seeding 20,000 HEK293T-ACE2 cells per well in 96-well plates. After 48 h of incubation in a 5% CO 2 environment at 37°C, the supernatant was removed, and the luminescence was measured using the Luciferase Assay System (Promega Biotech Co., Ltd) according to the manufacturer's instructions. The 50% inhibitory concentration (IC 50 ) was defined as the serum dilution at which the relative light units (RLUs) were reduced by 50% compared with the virus control wells, after subtraction of background RLUs in the cell control wells with only cells. The IC 50 values were calculated by generating a three-parameter non-linear regression curve fit in GraphPad Prism 8.4.0. A neutralizing antibody potency of <1:4 was considered a negative result. Outcomes. The primary safety endpoint was any adverse reaction within 28 days after each dose of vaccination. The secondary safety endpoint was any clinical laboratory abnormality within 28 days after each dose of vaccination. The primary immunogenic endpoints were the seroconversion rate and the titers of specific antibodies and neutralizing antibodies against SARS-CoV-2 on day 28 post the second dose. The secondary immunogenic endpoints were the seroconversion rate and the titers of specific antibodies and neutralizing antibodies against SARS-CoV-2 on day 180 post the second dose.
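As an illustration of the IC50 computation described above, the following sketch fits a three-parameter dose-response curve to percent neutralization versus serum dilution with SciPy; the fixed Hill slope of 1 and the initial guesses are assumptions, and the exact Prism settings used in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_logistic(log_dilution, bottom, top, log_ic50):
    """Three-parameter dose-response curve (Hill slope fixed at 1)."""
    return bottom + (top - bottom) / (1 + 10 ** (log_dilution - log_ic50))

def fit_ic50(dilutions, neutralization_pct):
    """Fit % neutralization (relative to virus control, background subtracted) against
    serial serum dilution factors and return the IC50 dilution; a sketch, not the
    authors' exact analysis."""
    log_d = np.log10(np.asarray(dilutions, dtype=float))
    p0 = [0.0, 100.0, np.median(log_d)]          # initial guesses: bottom, top, log IC50
    params, _ = curve_fit(three_param_logistic, log_d, neutralization_pct, p0=p0)
    return 10 ** params[2]
```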
Statistical analysis. Continuous variables that were not normally distributed were presented as median (interquartile range [IQR]). Categorical variables were described as count (%). Antibody titers were analyzed as geometric mean titers (GMTs) with the corresponding 95% confidence interval (95% CI). The values were compared by the Wilcoxon signed-rank test and the Mann-Whitney U test, as appropriate. The chi-square test (χ2) was applied to assess the distribution in different groups. Graphs were plotted using GraphPad Prism 8.4.0 software. Venn diagrams were drawn using jvenn, an interactive Venn diagram viewer 45. Statistical analyses were performed using SPSS 24.0 software. A two-sided p value of <0.05 was considered statistically significant. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
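For reference, a GMT with a 95% confidence interval as reported throughout the results can be computed on log-transformed titers as in the short sketch below; this mirrors the standard calculation and is not the authors' own code.

```python
import numpy as np
from scipy import stats

def gmt_with_ci(titers, confidence=0.95):
    """Geometric mean titer with a t-based confidence interval computed on log titers.
    Assumes all titers are positive."""
    log_t = np.log(np.asarray(titers, dtype=float))
    mean, sem = log_t.mean(), stats.sem(log_t)
    lo, hi = stats.t.interval(confidence, len(log_t) - 1, loc=mean, scale=sem)
    return np.exp(mean), (np.exp(lo), np.exp(hi))
```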
2023-02-09T15:27:32.521Z
2022-04-04T00:00:00.000
{ "year": 2022, "sha1": "a56f1b081951bdade5939c781b082c7645cce4f9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-022-29477-0.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "a56f1b081951bdade5939c781b082c7645cce4f9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
211085633
pes2o/s2orc
v3-fos-license
Effects of Andrographolide on Intracellular pH Regulation, Cellular Migration, and Apoptosis in Human Cervical Cancer Cells (Running Tittle: Effects of Andrographolide on pH Regulators and Apoptosis in Cervical Cancer) Cancer cells have been characterized with alkaline intracellular pH (pHi) values (≥7.2) to enable cancer proliferation, migration, and progression. The aim of the present study was to explore the concentration-dependent effects of Andrographolide, an active diterpenoid compound of herb Andrographis paniculata, on Na+/H+ exchanger isoform 1 (NHE1), cellular migration and apoptosis in human cervical cancer cells (HeLa). The pHi was detected by microspectrofluorometry method, and intracellular acidification was induced by NH4Cl prepulse technique. Viability and protein expression were determined by MTT (3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay and Western blot, respectively. Human normal endocervical cells (End1), ectocervical cells (Ect1), and HeLa were bought commercially. The resting pHi value of HeLa (≈7.47) was significantly higher than that of End1 and Ect1 (≈7.30), and shifted from alkaline to acidic following acid/base impacts. In HEPES (4-(2-Hydroxyethyl)piperazine-1-ethanesulfonic acid | N-(2-Hydroxyethyl)piperazine-N′-(2-ethanesulfonic acid) -buffered superfusate, NHE1 and V-ATPase co-existed functionally for acid extrusion in HeLa, while only NHE1 existed functionally in End/Ect1. Andrographolide (3–1000 μM) concentration-dependently inhibited NHE1 activity. Cell-migration and expressions of NHE1, V-ATPase, PARP (poly-ADP-ribose-polymerase), pro-Caspase-3, and Bcl-2 were significantly reduced by pretreating with Andrographolide (≥100 μM) for 24–48 h in HeLa. Andrographolide inhibited cell viability of End1-cells/Ect1 and HeLa (≥100 and ≥30 μM, respectively). The present findings implicate the promising clinical applications of Andrographolide on cervical cancer treatment. Introduction The intracellular pH (pH i ) in most mature mammalian cells is kept within a narrow range (≈7.2) through the combined operation of the active transmembrane transporters and the passive intracellular buffering capacity (β) [1]. Homeostasis of pH i regulates many cellular functions, such as cell growth, migration, and apoptosis in mammalian cells. However, some cancer cells have been found recently with alkaline pH i values (≥7.2) and acidic pH e values (≤7.0) [2][3][4]. This "reversed" gradient enables cancer progression by promoting proliferation, evasion of apoptosis, migration, and invasion [5,6]. Indeed, during the development of tumors, cancer cells have developed a special cellular energetic and metabolic way that affects the homeostasis of extracellular pH (pH e ) and pH i . In the absence or presence of oxygen, pyruvate is mostly converted to lactate via glycolysis in cancer cells, a process known as the Warburg effect or aerobic glycolysis, which is less efficient than oxidative phosphorylation for generating ATP (2 and 36 ATP, respectively). Thus, cancer cells increase their glucose uptake and glycolytic rate to produce more ATP. However, the accumulation of glycolytic byproducts, like lactate and H + , can lead to the intra-and extracellular acidification [7]. Maintenance of stable, mildly alkaline pH i by activating some acid extruding mechanisms is required for cancer cell proliferation and differentiation [5,6]. 
Furthermore, it has been shown that growth factor signaling is associated with pH i increase leading to cell stimulation and proliferation. In contrast, a decrease in pH i can result in cellular apoptosis. Moreover, mitochondria-induced acidification of the cytosol has been found as an early event that regulates Caspase activation in the mitochondrial pathway [8]. Therefore, the pH i homeostasis is vital for the cancer development and progression. Several different transmembrane pH i regulators can be allocated into two main groups: H + -equivalent extruders and H + -equivalent loaders. H + -equivalent extruders, such as Na + /H + exchanger (NHE) [9] and Na + /HCO 3 − cotransporter (NBC), are activated when intracellular pH decreases (pH i < 7.1) [10,11]. Likewise, when pH i becomes alkaline (pH i > 7.4), the H + -equivalent loaders, such as Cl − /HCO 3 − exchanger (AE) [11] and Cl − /OH − exchanger (CHE) [11,12] will be activated. Moreover, a reversible lactic acid carrier, i.e., H + -Monocarboxylate − cotransporters (MCT), and V-ATPase are highly activated during the development of tumors or under conditions of ischemia/hypoxia [12,13]. In HEPES-buffered media (HCO 3 − -free condition), pH i recovery following intracellular acidosis is inhibited by removal of extracellular Na + or by adding Hoe 694 (3-methylsulfonyl-4-piperidinobenzoyl, guanidine hydrochloride), a high affinity and selectivity NHE-1 inhibitor [9]. Moreover, NHE activation has been found to enhance tumor cell migration and invasion by creating a distinct cell surface pH gradient and invadopodial-dependent extracellular matrix degradation in human endometrial cancer cells, human breast carcinoma, and melanoma cells [4,[14][15][16][17]. The rise in pH i through activation of the NHE that modulated by growth factors plays an important role in growth control in tumor tissues [18,19]. Some of the Bcl-2 family proteins [20] and endonucleases [21] have been identified as pH sensitive during early intracellular alkalinization in apoptosis [22]. Moreover, treatment with p90rsk inhibitor has been found to reduce the ethanol-induced increase in viability of cells and expression of Na + /H + exchanger isoform 1 (NHE1) and Bcl-2 in hepatocellular carcinoma [23]. However, the related knowledge of transmembrane pH i regulators is still not clear in human normal endocervical cells (End1), human normal ectocervical cells (Ect1), and in human cervical cancer cells (HeLa). In traditional herbal medicine, Andrographis paniculata (Burm. F), generally known as the "king of bitters", has attracted wide attention due to its multiple effects: anti-oxidative, anti-diabetic, hepatoprotective, and anti-inflammatory [24][25][26]. Recently, studies suggested that Andrographis paniculata had anti-tumor and immunomodulatory effects in vitro and in vivo [27][28][29]. For example, the aqueous extract of Andrographis paniculata exerted inhibitory activities on the migration of esophageal cancer cells, and suppressed the proliferation and motility of endothelial cells [28]. Moreover, Andrographolide, the major labdane diterpenoid compound from Andrographis paniculata has been reported to be cytotoxic against various cancer cells in vitro, including human leukemic, lymphocytic cell lines, P-388, KB, COL-2, MCF-7, LU-1, and ASK cells [27,[30][31][32], mainly with the underlying mechanism of stimulating the production of cytotoxic T lymphocyte through enhanced secretion of IL-2, tumor necrosis factor-alpha secretion, and interferon-gamma [27]. 
Andrographolide was also found to inhibit the proliferation of various cell lines including leukemia, breast cancer, lung cancer, and melanoma cells [33,34]. On the other hand, in in vivo models, Andrographolide was also found to show anti-cancer activity in B16F0 melanoma syngenic, MCF-7, and HT-29 xenograft models [33,35]. Moreover, the compound exerted direct anticancer activity, in both in vitro and in vivo experiments, on cancer cells by cell-cycle arrest at the G0/G1 phase through induction of the cell-cycle inhibitory protein p27 and decreased expression of cyclin-dependent kinase 4 (CDK4) [33,36,37]. Apoptosis is a cell death process, and lack of apoptotic induction has been implicated in tumor development and progression [38]. Among many apoptotic regulatory proteins, the Bcl-2 family, including both anti-apoptotic (Bcl-2, Bcl-XL, Mcl-1) and pro-apoptotic members (Bid, Bax, Bad), is particularly important [39]. Moreover, studies with several different breast cancer cell lines indicated that the relative amounts of Bcl-2 and Bax proteins are highly predictive of the sensitivity to apoptosis, with an increase of the Bax/Bcl-2 ratio, in mammary tumor cells [40]. A potent growth inhibitory effect of Andrographolide, after a 48-h treatment, was demonstrated in acute promyelocytic leukemia cells (HL-60 and NB4) by inducing cell differentiation and apoptosis [41,42]. The 50% cell growth inhibition concentration of Andrographolide ranges from 10 to 100 µM, depending on the type of cancer cell tested [29]. For example, some reports showed that Andrographolide at relatively high concentrations (from 40 to 100 µM) could induce apoptosis in human prostatic adenocarcinoma PC-3 cells [43] or human leukemic HL-60 cells [44]. However, there are no previous reports on the effects of Andrographolide on pHi regulators, cellular migration, and apoptosis in human cervical cancer cells. In light of the importance of pHi homeostasis in cancer progression, the aim of the present study was to characterize the functional acid-extruding mechanism and examine the effect of various concentrations of Andrographolide (3-1000 µM) on pHi regulation, cellular migration, and apoptosis in cultured human cervical cancer cells. Resting and New Steady-State Intracellular pH Value of Cultured Cells of HeLa, End1, and Ect1 To examine the resting pHi of the cultured End1, Ect1, and HeLa cells, the cells were superfused with HEPES-buffered solution (nominally free of CO2/HCO3−; pHo 7.40). Under the HEPES-buffered solution, the original resting pHi value was 7.31 ± 0.07 (n = 5), 7.30 ± 0.06 (n = 5), and 7.47 ± 0.04 (n = 20) in the End1 cells, Ect1 cells, and HeLa cells, as shown in the farthest left part of Figure 1A-C, respectively. The steady-state pHi value shifted from alkaline to a new, acidic steady-state value in all three tested cell lines, i.e., the End1 cells, Ect1 cells, and HeLa cells. The new steady-state pHi value was 7.21 ± 0.07 (n = 5; p < 0.05), 7.19 ± 0.06 (n = 5; p < 0.05), and 7.25 ± 0.04 (n = 20; p < 0.001) after intracellular acid/base impact by applying the NH4Cl (20 mM) prepulse three times in the End1 cells, Ect1 cells, and HeLa cells, as shown in the rightmost part of Figure 1A-C, respectively. Note that the NH4Cl prepulse method can be described by four phases, as shown in the farthest left part of Figure 1C: phase 1 (rapid alkalization), phase 2 (slow recovery), phase 3 (rapid acidification), and phase 4 (pHi regulation); see Section 4 for more details.
As shown in the farthest left part of Figure 1A-C, the pHi recovered completely from the intracellular acidosis that was induced by the NH4Cl prepulse technique. This result indicated that there is a mechanism of acid extrusion in the End1 cells, Ect1 cells, and HeLa cells, respectively. Note that the slope of the pHi recovery (dpHi/min) in the three cell lines (End1, Ect1, and HeLa) was 0.12 ± 0.02 (n = 5), 0.11 ± 0.01 (n = 5), and 0.07 ± 0.02 (n = 20), respectively (measured at pHi = 6.95 ± 0.02). Functional Identification of Intracellular Acid Extruders-NHE and V-ATPase To examine whether the active transmembrane acid-extrusion mechanism in the three tested cell cultures, i.e., the End1 cells, Ect1 cells, and HeLa cells, is Na+-dependent, further experiments were performed in Na+-free HEPES-buffered superfusate. As shown in the farthest left part of Figure 2A-C, the pHi recovered completely from intracellular acidosis in the control condition in the normal HEPES-buffered system. The removal of extracellular Na+ totally inhibited the pHi recovery in End1 cells and Ect1 cells, as shown in the right part of Figure 2A,B, respectively. This clearly demonstrates that, under nominally CO2/HCO3−-free conditions, the CO2/HCO3−-independent acid-extrusion mechanism involved in the pHi recovery following induced intracellular acidification is purely Na+-dependent. On the other hand, the pHi recovery after NH4Cl-induced intracellular acidification was only significantly slowed (−65%) but not completely blocked by perfusion with Na+-free solution in the HEPES-buffered system in HeLa cells, as shown in the right part of Figure 2C. This demonstrated that, besides the Na+-dependent acid extruder(s), there is another Na+-independent extrusion mechanism (≈35%) responsible for the remaining acid extrusion in HEPES solution in HeLa cells. The histograms of Figure 2D-F show the pHi recovery slope (%) following induced intracellular acidification averaged over six experiments similar to those shown in Figure 2A-C, respectively. To further test whether the Na+-dependent acid extruder is NHE1 in the human normal cervical cells, i.e., End1 cells and Ect1 cells (Figure 2A,B, respectively), we added HOE 694, a specific NHE1 inhibitor, to the superfusate. As shown in the right part of Figure 3A,B, HOE 694 ((3-methylsulphonyl-4-piperidino-benzoyl) guanidine methanesulphonate) (50 µM) entirely inhibited the pHi recovery following the induced intracellular acidification in the End1 cells and Ect1 cells, respectively. On the other hand, the same protocol of adding HOE 694 only partially inhibited the pHi recovery rate (≈60%) in the human cervical cancer cells, i.e., HeLa cells, as shown in Figure 3C. Therefore, the present results provide clear pharmacological evidence that NHE1 exists functionally in the End1 cells, Ect1 cells, and HeLa cells. Moreover, these results indicate that another acid-extruding mechanism apart from NHE1 exists in HeLa cells. The histograms in Figure 3E-G show the mean pHi recovery slope before and after HOE 694 addition for several experiments similar to those whose results are shown in Figure 3A-C, respectively. Note that a further acidification was observed after the removal of Na+ from the superfusate, while the addition of HOE 694 did not cause further acidification. The possible underlying mechanism for this difference is discussed in Section 3.
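A minimal sketch of how a recovery slope (dpHi/dt) such as those quoted above can be computed from a sampled pHi trace is given below. The 0.5-min fitting window follows the Methods; the sampling interval and the example trace values are invented for illustration only and are not measured data.

```python
import numpy as np

def recovery_slope(time_min, phi, window_min=0.5):
    """Estimate the acid-extrusion rate dpHi/dt (pH units per minute)
    as the slope of a straight line fitted to the first `window_min`
    minutes of the pHi recovery phase."""
    time_min = np.asarray(time_min, dtype=float)
    phi = np.asarray(phi, dtype=float)
    # restrict to the fitting window, measured from the first sample
    mask = (time_min - time_min[0]) <= window_min
    # least-squares linear fit: the slope is dpHi/dt
    slope, _intercept = np.polyfit(time_min[mask], phi[mask], deg=1)
    return slope

# Hypothetical trace sampled every 6 s after the nadir of the
# NH4Cl-induced acidification (illustrative values only).
t = np.arange(0, 1.01, 0.1)        # minutes
phi_trace = 6.90 + 0.08 * t        # linear recovery of ~0.08 pH/min

print(f"dpHi/dt = {recovery_slope(t, phi_trace):.3f} pH units/min")
```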
To further investigate whether the remaining Na+-independent pHi recovery, i.e., the component that could not be inhibited by Na+-free solution (Figure 2C) or HOE 694 (Figure 3C), is caused by the vacuolar-type ATPase (V-ATPase) in HeLa cells, the HeLa cells were perfused with HOE 694 or Na+-free solution plus 30 µM bafilomycin A1 (Bafilo; a V-ATPase-specific inhibitor), as shown in the farthest right part of Figure 3C,D. Either adding HOE 694 plus bafilomycin A1 or removing Na+ plus adding bafilomycin A1 totally blocked the pHi recovery, as shown in the middle and right parts of Figure 3E, respectively. These results provided functional evidence that V-ATPase accounts for ≈40% of acid extrusion in HeLa cells. The histograms of Figure 3F,G show the mean pHi recovery slope for six experiments similar to those shown in Figure 3C,D, respectively.

Figure 2. Effect of Na+-free solution on intracellular pH (pHi) recovery following induced intracellular acidification in HEPES-buffered conditions in the End1, Ect1, and HeLa cells. (A-C) Representative traces; the top bars show the buffer system used, and the bars above or below each trace show the application of NH4Cl and of the treatment, respectively. (D-F) Histograms of the pHi recovery slope (%) following induced intracellular acidification, averaged over six experiments similar to those shown in (A-C); the recovery rate was measured at pHi = 6.96 ± 0.08; ** p < 0.005 versus the control; error bars represent the mean ± SEM (standard error of the mean).

Figure 3. Effect of HOE 694, Na+-free solution, and bafilomycin A1 (Bafilo) on pHi recovery following induced intracellular acidification in the End1, Ect1, and HeLa cells. (A-D) Representative traces under the different designed conditions, i.e., Na+-free, HOE 694, Na+-free solution plus bafilomycin A1, and HOE 694 plus bafilomycin A1, respectively. (E-H) Histograms of the normalized pHi recovery rate of acid extrusion, averaged over 5-6 experiments similar to those shown in (A-D); the measured pHi recovery rate is shown at the top of each histogram; ** p < 0.005 versus the control; error bars represent the mean ± SEM.
The Effect of Andrographolide on the Functional Activity of NHE and V-ATPase in HeLa Cells To measure the effect of Andrographolide on the functional activity of NHE1 and V-ATPase, the HeLa cells were superfused with various concentrations of Andrographolide (10-1000 µM) during the pHi recovery that followed NH4Cl-induced intracellular acidification in HEPES-buffered solution (Figure 4A). As shown in Figure 4A, a lower concentration of Andrographolide (10 µM) did not change the pHi recovery in HEPES-buffered solution. However, acute addition of higher concentrations of Andrographolide (≥30 µM) inhibited the pHi recovery in a concentration-dependent way, as shown in the right parts of Figure 4A. Note that the inhibition of NHE1 activity by 1000 µM Andrographolide was dramatic (>80%). However, the reversible recovery to normal implies that the inhibitory effect on pHi recovery was caused by Andrographolide rather than by deterioration of the cells themselves. The histogram in Figure 4B shows the normalized pHi recovery rate of acid extrusion averaged for nine experiments similar to that shown in Figure 4A. Moreover, as repetition of NH4Cl prepulses might have an effect on the consecutive recovery rates, we performed an extra control experiment with six consecutive prepulses without addition of Andrographolide, as shown in Figure 4C. The result in Figure 4C showed that the repetition of six consecutive NH4Cl prepulses did not have a significant effect on the pHi recovery rates in HeLa cells. Therefore, our present results provide direct evidence that Andrographolide inhibited the activity of NHE1/V-ATPase in a concentration-dependent way in HeLa cells.

Figure 4. Effect of acute application of Andrographolide (10-1000 µM) on pHi recovery following NH4Cl-induced intracellular acidification in HEPES-buffered condition in HeLa cells. (A) Representative trace; the first and last NH4Cl prepulses serve as control and wash, respectively. (B) Histograms of the normalized change in pHi recovery rate after acute application of Andrographolide (10-1000 µM) in HEPES-buffered condition; the measured pHi recovery rate is shown at the top of the histogram; * p < 0.05 and ** p < 0.005 versus the control; error bars represent the mean ± SEM. (C) Control experiment with six consecutive NH4Cl prepulses without Andrographolide.

The Effect of Pretreating with Various Concentrations of Andrographolide for 24 and 48 h on Proliferation/Viability and Cell Migration The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide) assay is the gold standard for detecting the signal from living cells, rather than dead cells, by measuring the reductive activity of dehydrogenases, i.e., the enzymatic conversion of the tetrazolium compound to water-insoluble formazan crystals [45].
To check the chronic effect of Andrographolide on proliferation/viability in End1, Ect1, and HeLa cells, various concentrations of Andrographolide (3-1000 µM) were added to the culture solution for 24-48 h of incubation. The original images of cell morphology in our present study showed that Andrographolide concentration-dependently inhibited the viability of HeLa cells at both 24 and 48 h, as shown in Figure 5A,B, respectively. Moreover, the Andrographolide-induced inhibition of viability was significant from 30 and 10 µM for the 24 and 48 h groups, respectively. Note that the higher concentrations of 300 and 1000 µM Andrographolide inhibited cell viability dramatically (>90%). The statistical concentration-dependence curves of Andrographolide on cell viability in Figure 5C,D show the mean percentage of cell viability for five experiments similar to those shown in Figure 5A,B (n = 5; p < 0.005), respectively. Additionally, similar experiments were applied to End1 cells and Ect1 cells, and the statistical concentration-dependence curves of Andrographolide on cell viability are shown in blue and green in Figure 5C,D, respectively. The result showed that Andrographolide did not affect viability until the concentration was higher than 100 µM (48 h, n = 8, p < 0.005) and 300 µM (24 h, n = 8, p < 0.005) in End1 cells and Ect1 cells, respectively. The IC50 (half maximal inhibitory concentration) values of Andrographolide-induced inhibition of viability in HeLa cells, End1 cells, and Ect1 cells were 54.5, 303.8, and 200.7 µM, respectively, at 24 h. Moreover, the IC50 values of Andrographolide-induced inhibition of viability in HeLa cells, End1 cells, and Ect1 cells were 10.1, 157.6, and 157.6 µM, respectively, at 48 h.
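IC50 values such as those quoted above are typically obtained by fitting a sigmoidal concentration-response curve to the normalized MTT readings. The sketch below fits a four-parameter logistic (Hill) model with SciPy; the viability numbers are made-up placeholders rather than the study's data, and the exact fitting routine used by the authors is not stated in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, top, bottom, ic50, hill):
    """Four-parameter logistic (Hill) concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical % viability (MTT signal normalized to untreated control = 100%)
conc = np.array([3, 10, 30, 100, 300, 1000], dtype=float)   # microM
viability = np.array([98, 90, 65, 35, 8, 3], dtype=float)    # %

# Initial guesses: top ~100%, bottom ~0%, IC50 near a mid concentration, Hill ~1
p0 = [100.0, 0.0, 50.0, 1.0]
params, _cov = curve_fit(four_param_logistic, conc, viability, p0=p0, maxfev=10000)
top, bottom, ic50, hill = params

print(f"Estimated IC50 = {ic50:.1f} microM (Hill slope = {hill:.2f})")
```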
In order to check whether Andrographolide shows an anti-cancer effect on migration, wound-healing assay experiments (see Section 4 for details) were further performed in the human cervical cancer cells, i.e., HeLa cells. The differences in gap closure before and after adding various concentrations of Andrographolide for 24 and 48 h, respectively, were measured; a significant difference in gap closure represents the anti-migration ability of Andrographolide in human cervical cancer cells. As shown in Figure 6A, a fixed gap in the cell monolayer was created by a fixed small insert (upper row of Figure 6A; denoted as 0 h), and migration into the gap was imaged at 24 and 48 h (middle and lower rows of Figure 6A, respectively) after adding Andrographolide to the HeLa cell culture. As shown in Figure 6, Andrographolide (50-300 µM) significantly inhibited HeLa cell migration at 24 h (−26% to −84%, respectively) and 48 h (−34% to −95%, respectively). Note that 300 µM Andrographolide not only totally inhibited migration, but also caused cell death, as shown in the farthest right pane of Figure 6A. Moreover, enlarged pictures for the higher concentrations are located at the right bottom of Figure 6A. The histograms of Figure 6B,C show the normalized percentage of migration inhibition averaged for 11 experiments similar to those shown in Figure 6A.

Figure 6. Effect of Andrographolide on HeLa cell migration in the wound-healing assay. (A) Representative images for the indicated concentrations of Andrographolide and DMSO (dimethyl sulfoxide); the first, second, and third rows show 0, 24, and 48 h, respectively, after the ibidi culture-insert 2 well was removed; the effect of 50, 100, and 300 µM Andrographolide at 48 h is enlarged in the right bottom of the panel (fourth row); scale bars, 500 µm. (B,C) Histograms of the average normalized wound migration in 11 experiments similar to those shown in (A); * p < 0.05 and ** p < 0.005 versus the control; error bars represent the mean ± SEM.

The Effect of Pretreating with Various Concentrations of Andrographolide for 24 and 48 h on Protein Expression of NHE1, Bcl-2, PARP, Cleaved PARP, Pro-Caspase 3, and Cleaved Caspase 3 Andrographolide inhibits cell viability in a concentration-dependent way in human cervical cancer cells, i.e., HeLa cells, whereas it does not in normal cervical cells, i.e., End1 and Ect1. Therefore, we wished to check whether the Andrographolide-induced viability inhibition was related to apoptosis. In order to check the impact of pretreating with various concentrations of Andrographolide for 24 and 48 h on the protein expression of the acid extruder NHE1 and of apoptosis-related factors, such as Bcl-2, poly-ADP-ribose polymerase (PARP), cleaved PARP, pro-Caspase 3, and cleaved Caspase 3 in cultured HeLa cells, we used the Western blot technique in further experiments, as shown in Figure 7A. Our present study showed that the expression of NHE1, Bcl-2, PARP, and pro-Caspase 3 was significantly reduced, while cleaved PARP and cleaved Caspase 3 were increased, by treatment with 100 µM Andrographolide for 24 and 48 h in HeLa cells (n = 4, * p < 0.05 or ** p < 0.005), as shown in the histograms of Figure 7B-E, respectively. These results suggested that the inhibition of cell migration by Andrographolide was mainly due to decreased functional activity and/or downregulation of protein expression of NHE1, and that activation of the apoptotic pathway involving Bcl-2, PARP, and Caspase 3 played a key role in the mechanism of the Andrographolide-induced anti-cancer effect.

Figure 7. Effect of pretreatment with various concentrations of Andrographolide for 24 and 48 h on protein expression in HeLa cells. (A) Representative Western blots; the relative expression of each band is shown right under or above the blot, the predicted size of the protein marker at the right of the blot, and the size of the detected protein at the left of the blot. (B-E) Histograms of the relative expression ratios of cleaved PARP to PARP and of cleaved Caspase 3 to Caspase 3, and of NHE1 and Bcl-2 relative to β-actin, averaged over several experiments similar to those shown in (A); * p < 0.05 and ** p < 0.005 versus the control; error bars represent the mean ± SEM.
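The histograms in Figure 7B-E express each band as a ratio (cleaved form to total, or target protein to β-actin, per the Methods). A minimal sketch of that normalization is shown below; the band-intensity numbers are invented placeholders standing in for ImageJ densitometry readouts, not the study's measurements.

```python
# Hypothetical ImageJ band intensities (arbitrary units) for one blot,
# measured in control lanes and in lanes treated with 100 microM Andrographolide.
bands = {
    "control":          {"NHE1": 1250, "Bcl-2": 980, "beta-actin": 2100},
    "andro_100uM_48h":  {"NHE1": 480,  "Bcl-2": 350, "beta-actin": 2050},
}

def relative_expression(lane, target, loading_control="beta-actin"):
    """Target band intensity normalized to the loading control in the same lane."""
    return bands[lane][target] / bands[lane][loading_control]

for target in ("NHE1", "Bcl-2"):
    ctrl = relative_expression("control", target)
    treat = relative_expression("andro_100uM_48h", target)
    # fold change relative to control (1.0 = unchanged)
    print(f"{target}: fold change vs. control = {treat / ctrl:.2f}")
```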
The Resting pHi and New Steady-State pHi in Human Cervical Cancer Cells Cellular functions and activities are regulated by the delicate homeostasis of pHi. Recently, it has been demonstrated that a reversed transmembrane pH gradient is emerging as a hallmark of cancer tissues, i.e., high pHi and low pHo, which enables cancer progression by promoting proliferation, evasion of apoptosis, metabolic adaptation, migration, and invasion [5]. In our present study of cultured human cervical cancer cells, we found that the original resting pHi value of HeLa cells is quite alkaline (7.47) under HEPES-buffered Tyrode solution (left part of Figure 1C). On the other hand, the original resting pHi of the human normal ectocervical cells (Ect1 cells; Figure 1B) and the human normal endocervical cells (End1 cells; Figure 1A) was significantly more acidic (7.31 and 7.30, respectively) than that of HeLa cells under HEPES-buffered Tyrode solution. Therefore, our present findings suggest that cultured human cervical cancer cells show the phenomenon of high pHi, similar to that of highly proliferative cells, i.e., embryonic stem cells and cancer cells (≈7.4) [5,46,47]. However, the evidence derived from the present findings has to be interpreted with caution for the following reasons. Firstly, the result of the present study was based on cancer cell lines. Secondly, it was based on CO2/HCO3−-free (HEPES-buffered) conditions, in which pHi in HeLa cells was significantly higher than in Ect1 and End1 cells, but it cannot be generalized to all cancer cell types/lines or to CO2/HCO3−-containing physiological conditions [48,49].
For example, previous data from the Swietach group showed that, under CO2/HCO3−-buffered conditions, the steady-state pHi of eight carcinoma cell lines of various origins (HCT116, RT112, MDA-MB-468, MCF10A, HT29, HT1080, MiaPaca2, and HeLa) was in the range of 6.9-7.3, and was 7.1 for HeLa cells [48]. Steady-state pHi is a balance achieved through the combined operation of passive intracellular buffering power and active transmembrane pHi transporters [50][51][52]. Therefore, the functional characterization of acid extruders and acid loaders, and the changes in their activity during pathophysiological progression in HeLa cells, remain to be examined. Moreover, the alkaline pHi value of HeLa cells shifted to a new steady-state pHi value of 7.25 after a few intracellular acid/base impacts of the NH4Cl prepulse, under HEPES-buffered Tyrode solution (the farthest right part of Figure 1C). This new steady-state pHi value is nearly the same as that of normal mature animal/human cells (≈7.2) [1,11,50,51,53], as well as that of HeLa cells (≈7.1) measured under CO2/HCO3− conditions [48]. Note that a similar shift was also found in the human normal ectocervical cells, i.e., Ect1 cells (Figure 1B), and in the human normal endocervical cells, i.e., End1 cells (Figure 1A). Whether the acid/base impact-induced shift of ≈0.1 to ≈0.2 pH units is simply a characteristic of each cell type or is caused by changes in the pHi-regulating mechanisms during cancer progression awaits further study. For example, a study showed that the female vagina provides a characteristic low-Na+ and low-pH fluid microenvironment (pH 3.6-4.4) that is considered generally protective [54]. The pHe inside the vaginal tissue, by contrast, is dictated by blood perfusion, is higher, and may subsequently activate NHE1. Such interference with pHi/pHe might impact the vaginal environment; a vaginal pH above 4.7 and abnormal vaginal flora are correlated with human immunodeficiency virus infection and infertility [55]. Therefore, knowledge of this special characteristic, i.e., the shift of pHi from alkaline to acidic after intracellular acid/base impacts, might provide new insight for developing medicines to treat human cervical cancer or carcinoma. Furthermore, in our present study, neither the HEPES-buffered (carbonate-free) solution nor the large pH changes during the ammonium prepulse are physiological. The ammonium prepulse is a useful tool to trigger and measure the acid-extrusion rate, but it is not very useful for drawing conclusions about the mechanisms that cells use to set a new steady-state pHi. In intact tissue (whether cancerous or healthy), the fluxes maintaining the steady-state pH are low compared with those immediately after an ammonium prepulse. In order to draw conclusions about steady-state intracellular pH, measurements in the presence and absence of CO2/bicarbonate, and with and without Andrographolide, would be necessary in further experiments. Note that, in the results in Figure 2A, there was still a very visible acidification in Na+-free superfusate in both Ect1 and End1. On the contrary, no further acidification was observed after adding HOE 694 in either Ect1 or End1, as shown in Figure 3A,B. The reason is mainly that removal of Na+ from the superfusate completely inhibits all Na+-dependent acid-extruding mechanisms, including the NHE isoforms other than the main acid extruder NHE1 (i.e., NHE2-9).
Therefore, after complete inhibition of the acid-extruding mechanisms, acidic cellular metabolites continuously accumulated inside the cells and caused further acidification in both Ect1 and End1, as shown in Figure 2A. However, HOE 694 specifically inhibits the activity of NHE1 rather than that of the other NHE isoforms, so residual, minor acid-extruding mechanisms still worked to some extent. Thus, although the pHi recovery slope was completely inhibited, the accumulating acid could still be partially extruded, which prevented further acidification in both Ect1 and End1, as shown in Figure 3A,B. Potential Role of Inhibitors/Activators of Isoforms of NHE and V-ATPase in a Clinical Setting In mammalian cells, in addition to the ubiquitous NHE1 acid extruder, the vacuolar H+-ATPase (V-ATPase) utilizes ATP to pump protons into the extracellular environment. V-ATPase has been reported as an important pHi regulator in many different types of cancers and as being positively correlated with cancer invasion and metastasis [56,57]. In our present study, we provide straightforward and convincing functional evidence that NHE1 and V-ATPase are functionally responsible for acid extrusion following induced acidosis in HeLa cells, as shown in Figure 3C,D. On the other hand, V-ATPase was not found to play a functional role in the pHi recovery following induced intracellular acidification in either of the human normal cervical cell lines, i.e., Ect1 and End1 cells (Figure 2A,B and Figure 3A,B). This is a distinctive difference between the human normal cervical cells (End1 and Ect1) and the cancer cells (HeLa), which may provide a clue for strategies to treat cervical cancer in the clinic. Moreover, our present findings suggest that the Andrographolide-induced inhibition of the activity of NHE1 and V-ATPase (Figure 4) might play an important role in the Andrographolide-induced inhibition of cell migration/proliferation (Figures 5 and 6). In other words, our present study suggests that specific inhibitors of NHE1/V-ATPase could be promising pharmacological agents for human cervical cancer or carcinoma. Therefore, further studies on other active acid extruders and/or acid loaders of HeLa cells/cervical cancer tissues should be conducted under physiological conditions. Additionally, the effects of knockdown or overexpression of specific pHi regulators are worth examining in HeLa cells to clarify the role of pHi regulators in the cellular development and progression of cancer. The Acute and Chronic Effect of Andrographolide on Intracellular pH Regulating Mechanism and Apoptosis in Cervical Cancer Cells Our present study has, for the first time, provided straightforward evidence concerning the acute (Figure 4) effects of various concentrations (10-1000 µM) of Andrographolide on the functional activity of NHE1/V-ATPase, i.e., the pHi-regulating mechanism, in cultured human cervical cancer cells. Andrographolide showed a concentration-dependent inhibition of NHE1/V-ATPase activity. Note that the dramatic inhibition of NHE1/V-ATPase activity upon acute Andrographolide treatment is around ≈80%, and that it reverses completely (Figure 4A) in cultured human cervical cancer cells, i.e., HeLa cells. Moreover, the Andrographolide-induced reduction of NHE1 expression (Figure 7) was detected when the concentration was higher than 30 µM, upon either 24 or 48 h of Andrographolide treatment.
Indeed, recent studies show that Andrographolide inhibited the proliferation of cancer cells with GI50 values (the concentration required to inhibit 50% of growth) ranging from 10 to 28 µM in diverse cancer cell lines for different types of human cancers, including breast, CNS, colon, lung, melanoma, ovarian, prostate, and renal cancers [29]. Similarly, our present study shows that Andrographolide (10-1000 µM) concentration-dependently decreased cellular viability (Figure 5A-D) and migration (Figure 6) upon either 24 or 48 h of Andrographolide treatment in human cervical cancer cells. Note that, in the normal cervical cells (End1 and Ect1), significant Andrographolide-induced inhibition of viability was only observed at concentrations above 100-300 µM (48 and 24 h, respectively). In other words, our present results suggest that the Andrographolide-induced inhibition of NHE1 activity/expression not only correlates closely with the inhibition of viability and migration, but is also more specific to, and more sensitive in, human cervical cancer cells than human normal cervical cells (Figures 5-7). Therefore, Andrographolide could potentially be used clinically to treat cervical cancer, inhibiting cancer-cell viability without affecting the viability of normal cervical cells, in the concentration range of 10-100 µM. Such a difference in the GI50 concentration of Andrographolide between cancer cells and normal cells would not only allow efficient killing of cervical cancer cells, but also limit the possible side effect of harming normal cervical cells. However, the more specific underlying cellular/molecular mechanisms await further investigation. Moreover, it is notable that after 24-48 h the cells will be in the plateau phase instead of the logarithmic growth phase, which will affect proliferation and viability. It is clearly visible (Figure 5A,B) that cells reached 100% confluence in controls after 24 and 48 h. Therefore, our present study might "underestimate" the effect of the drug (Andrographolide). Additionally, whether cell proliferation assays are carried out on cells that are constantly proliferating is an important factor to rule out possible artefacts. On the other hand, as Andrographolide showed a significant effect on cell growth (Figure 5), the observed inhibitory effect on migration (Figure 6) might be affected to some extent by cell die-off. A further experiment isolating the effect on migration alone therefore remains to be performed. Apoptosis, a form of programmed cell death, is activated by the cell itself through death-receptor activation (extrinsic) and stress-inducing stimuli (intrinsic) pathways leading to Caspase activation [58]. Expression of apoptotic regulatory proteins, including the anti-apoptotic Bcl-2 and the pro-apoptotic member Bax [39], was analyzed after pretreatment with Andrographolide for 24-48 h to assess a possible apoptotic effect. In the present study, we found that chronic exposure to higher concentrations of Andrographolide (>30 µM for 24 and 48 h) significantly activated the apoptotic cascade, i.e., decreased pro-Caspase 3, Bcl-2, and PARP while increasing cleaved PARP and cleaved Caspase 3 (Figure 7). Moreover, Bcl-2 in the mitochondria determines the mitochondrial release of apoptosis-associated factors such as apoptotic protease-activating factor 1, apoptosis-inducing factor, and cytochrome c [59].
Our present study showed that 100 µM Andrographolide induced a reduction in Bcl-2 (Figure 7), which suggests that Andrographolide may compromise the mitochondrial membrane potential. Meanwhile, in some of our supplementary data, the β-actin loading control was uneven and decreased in a concentration-dependent manner (from left to right). Although this may reflect a technical shortcoming on the part of the researcher, β-actin served as the internal control for each lane, and we therefore still regarded the data as reliable. This was especially so because, although β-actin decreased in a concentration-dependent manner after Andrographolide treatment, the cleaved PARP protein levels increased in a concentration-dependent manner, which reassures us that the increase in cleaved PARP was valid. Note that the Andrographolide-induced inhibition of NHE1 protein expression was correlated, in a time- and concentration-dependent manner, with the Andrographolide-induced changes in the apoptotic cascade (Figure 7A). As an acidic pH condition induces growth arrest or cell death [60], inhibition of NHE activity/expression may play a significant role in the progression of apoptosis. Indeed, p90rsk-driven hyperactivation of NHE1, even at resting pH, and the resulting cellular alkalinization were reported to be directly related to uncontrolled proliferation in malignant cells [23]. Thus, p90rsk, which regulates cellular proliferation, as well as NHE1, may be important molecules for therapeutic targeting in the Andrographolide-induced inhibition of cancer progression. The underlying molecular mechanism awaits further experiments. Moreover, the concept of alkaline pH levels in tumor cells is a relatively new phenomenon that needs to be further investigated and explained in future studies. Moreover, similar experiments on apoptotic proteins in the normal cervical cell lines, i.e., End1 and Ect1, would be helpful for translational application in the clinic. In addition, as Andrographolide has many effects, the effects of cariporide and bafilomycin on cell proliferation, migration, and protein expression, and how they compare with the effect of Andrographolide, await further investigation. All things considered, our present study suggests that Andrographolide is a promising novel agent for the treatment of cervical cancer. Microspectrofluorometry and In Situ Calibration of Intracellular pH Fluorescent Dye BCECF To measure pHi and functionally characterize the pHi-regulating mechanisms, cells were monitored by microspectrofluorometry. This procedure has been described in detail in our previous reports [50,53,61]. Cells were loaded with 3 µM BCECF-AM (2′,7′-bis(2-carboxyethyl)-5(6)-carboxyfluorescein acetoxymethyl ester; Thermo Fisher, Waltham, MA, USA). BCECF epifluorescence was collected at 530 nm with an inverted microscope, with alternating, repetitive excitation of the BCECF fluorophore at 490 and 440 nm under monochromator control (Cairn Research, Kent, UK). Signals were digitized using a CED digitizer, and the fluorescence emission ratios were calculated and converted to pHi values by dividing the F490 by the F440 emission. The BCECF fluorescence ratio was calibrated using the high-[K+]/nigericin technique (see the section below for more details).
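Converting the background-corrected F490/F440 emission ratio into pHi relies on the high-[K+]/nigericin calibration mentioned above. The sketch below interpolates an unknown ratio against a calibration curve; the calibration points and signal values are invented for illustration only, since the actual calibration data appear only in the supplementary figures.

```python
import numpy as np

# Hypothetical nigericin calibration: clamped pHi vs. measured F490/F440 ratio
cal_ph    = np.array([6.0, 6.5, 7.0, 7.5, 8.0])
cal_ratio = np.array([1.2, 1.8, 2.6, 3.5, 4.3])

def ratio_to_phi(ratio):
    """Convert a BCECF F490/F440 ratio to pHi by linear interpolation
    along the nigericin calibration curve (ratio should lie within range)."""
    # np.interp expects the x-coordinates (ratios) to be increasing
    return np.interp(ratio, cal_ratio, cal_ph)

f490, f440 = 3100.0, 1050.0      # hypothetical background-corrected signals
phi = ratio_to_phi(f490 / f440)
print(f"F490/F440 = {f490 / f440:.2f}  ->  pHi = {phi:.2f}")
```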
Intracellular acidification was induced by transiently superfusing cells with 20 mM NH4Cl, a procedure known as the NH4Cl prepulse, as shown in the section above. Measurement of pHi and recovery rates typically commenced 1 min after NH4Cl, and pHi recovery rates were calculated from the change in pHi over a 0.5-min time period (dpHi/dt). Note that the ammonium chloride was added to the solution without osmotic compensation (see Section 3 for more details). The in situ calibration curves obtained by the microspectrofluorometry technique with nigericin (10 µM) in HeLa cells, End1 cells, and Ect1 cells are shown in Supplementary Figures S1-S3, respectively. Note that nigericin acts as a potassium ionophore to equalize pHi with pHe; for the composition of the nigericin calibration solution, please see the section below. NH4Cl Prepulse Technique NH4Cl prepulse techniques were used to induce acute intracellular acid loading, and the procedure has been described in detail in our previous reports [50,52,62]. This method can be described by four phases, as shown in the farthest left part of Figure 1C: phase 1 (rapid alkalization), phase 2 (slow recovery), phase 3 (rapid acidification), and phase 4 (pHi regulation). Thus, acid-extruder activity was measured as the slope of the recovery from NH4Cl (20 mM)-induced intracellular acidification. Throughout the experiments, the change in pHi induced by the tested drug/designed condition was compared at approximately 1 min after applying the drug/condition, unless otherwise stated, and pHi recovery rates were calculated from the change in pHi over a 0.5-min time period (dpHi/dt). The background fluorescence and auto-fluorescence were small (<5%) and were ignored. In the NH4Cl prepulse solution, NH4Cl was added directly to the solution without osmotic compensation. All solutions were adjusted to pH 7.4 at 37 °C with 4 N NaOH, 4 N HCl, or KOH, as required. Wound Healing Assay In order to rule out technical problems caused by pipette scratching in the in vitro wound-healing test, we used an ibidi culture-insert 2 well (Cat. No: 81176, ibidi) in the present study, because of its defined cell-free gap size (500 µm). We plated the ibidi culture-insert 2 well on a 6-well culture plate, and then seeded cells into the ibidi culture-insert 2 well. When the cells reached 100% confluence, we removed the ibidi culture-insert 2 well to create a cell-free gap and then exposed the cells to fresh serum-free DMEM (Dulbecco's Modified Eagle Medium) medium, both with and without Andrographolide, for 48 h (for detailed experimental procedures, please refer to the manufacturer's instructions). We imaged the live cells immediately (0 h) after creation of the gap and monitored the gap distance at 24 and 48 h. The migrating ability of the cells following application of Andrographolide was expressed by normalizing the gap distance in each group to that of the control plus DMSO (dimethyl sulfoxide) group (C + D). Western Blotting The procedure of immunoblotting analysis has been described in detail in our previous reports [50]. In brief, for SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis), denatured proteins were homogenized in a sample buffer (Bio-Rad, Hercules, CA, USA) and fractionated on FastCast gels (Bio-Rad, Hercules, CA, USA) of either 7.5%, 10%, or 12%, depending on the molecular weight of the proteins of interest.
Fractionated proteins were wet-transferred to PVDF (polyvinylidene difluoride) membranes (0.2 µm, GE Healthcare, Pittsburgh, PA, USA), blocked in 5% BSA (bovine serum albumin) (Bioshop, Burlington, ON, Canada), and probed with primary antibodies (as listed above at the stated dilutions) overnight at 4 °C. After the overnight incubation, membranes were washed three times in TBST (sodium chloride, Trizma base, 0.1% Tween-20; all from Sigma), and incubated with HRP-conjugated secondary goat anti-rabbit (1:2000, Cell Signaling Technology) or HRP-conjugated secondary horse anti-mouse (1:2000, Cell Signaling Technology) antibodies. Following secondary antibody incubation, the membranes were further washed three times in TBST and incubated with enhanced chemiluminescence substrate (Bio-Rad, Hercules, CA, USA). Western blot images were obtained on a UVP BioSpectrum 500 imager (UVP, Upland, CA, USA). Equal loading was confirmed by probing with an anti-β-actin antibody. Protein expression levels were quantified using ImageJ software. Statistical Analysis Statistical analysis was performed using Student's t-test and one-way ANOVA followed by Tukey's post hoc test. Data were analyzed using Prism (GraphPad Software, La Jolla, CA, USA), and the level of significance was set at * p < 0.05 and ** p < 0.005 versus the control. All data are expressed as means ± standard error of the mean (SEM). Conclusions In the present study, we have, for the first time, provided straightforward functional and molecular evidence of the coexistence of the Na+-dependent acid extruder NHE1 and the vacuolar proton pump (V-ATPase) as acid-extruding mechanisms in cultured human cervical cancer cells, i.e., HeLa cells. Moreover, Andrographolide regulates apoptotic proteins to induce apoptosis, and concentration-dependently decreases pHi by decreasing the activity of NHE1/V-ATPase and the expression of NHE1 in HeLa cells. Thus, Andrographolide appears to be a promising novel agent for the treatment of cervical cancer in the clinic. Conflicts of Interest: The authors declare that there is no conflict of interest regarding the publication of this paper. Data Availability Statement: The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Stress, anxiety, and depression in infertile couples are not associated with a first IVF or ICSI treatment outcome Background Psychological distress may exert a negative influence on reproductive function of couples at reproductive age. Couples seeking assisted reproductive technology (ART) treatment may have a higher prevalence of psychological distress than fertile couples. However, whether psychological distress is associated with the outcome of ART treatment remains unknown. We aimed to investigate the association of pre-treatment psychological distress and clinical pregnancy rate among infertility couples undergoing in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) treatment. Methods This nested case-control study was conducted based on women who underwent their first fresh IVF or ICSI cycle in the Jiangsu Birth Cohort Study (JBC) between November 2015 and January 2019. A total of 150 women who did not obtain clinical pregnancy after first IVF or ICSI fresh embryo transfer were identified as cases, and a total of 300 age matched women who obtained clinical pregnancy were identified as controls. Conditional logistic regression analyses were used to investigate the association between psychological distress and the outcome of first IVF or ICSI treatment, adjusting for multiple potential confounders. Results No statistically significant association was observed between score of maternal symptoms of psychological distress and clinical pregnancy. Adjusted ORs of logistic regression were 1.00 (95% CI 0.97-1.03) for anxiety, 0.98 (95% CI 0.95-1.02) for depression, and 0.98 (95% CI 0.95-1.01) for perceived stress, respectively. When treat depression and anxiety as categorical variables, 62 (13.8%) were classified as clinical depression, 11 (2.4%) were classified as clinical anxiety, among 450 women in the present study. Psychological distress symptoms were also not associated with clinical pregnancy rate. Adjusted ORs of logistic regression were 0.27 (95% CI 0.03-2.33) for anxiety, 0.88 (95% CI 0.46-1.68) for depression, respectively. Conclusions Our findings firstly indicated that psychological distress experienced prior to IVF/ICSI treatment was not associated with clinical pregnancy. Supplementary Information The online version contains supplementary material available at 10.1186/s12884-021-04202-9. Background During the past few decades, assisted reproductive technology (ART) is widely practiced throughout the world. However, the rate of clinical pregnancy is still low [1]. Inadequate ovarian reserve [2], the presence of hydrosalpinx [3], uterine myoma [4], and endometriosis [5,6] have been established as the main pathological factors, but the determinants of clinical pregnancy is still not fully revealed. Recently, many have shown that psychological distress may aggravate poor fertility [7,8]. Pathways of the hypothalamic-pituitary adrenal axis or the hypothalamic-pituitary gonadal axis may play a key role in this regulation process [9][10][11]. Couples seeking ART treatment may have a higher prevalence of psychological distress than fertile couples [12], because of the permanency of infertility, loss of hope, the treatment itself, and several previous ART attempts [13][14][15]. Therefore, whether psychological distress is associated with the outcome of ART treatment has aroused widely concern. Relationship between psychological distress and the outcome of ART treatment remains inconclusive [16][17][18][19][20]. 
Three studies have reported that psychological distress was associated with decreased pregnancy rates in in vitro fertilization (IVF) patients [16,18,21]. In contrast, a recent meta-analysis found that baseline (before ART treatment had started) psychological distress was not associated with ART outcome [22]. In addition, psychological distress covers a variety of symptoms [23,24], but most previous studies focused on only a single type of psychological distress [21,[25][26][27][28]. Furthermore, most of these studies only included IVF cycle data, and could not extend their findings to patients with intracytoplasmic sperm injection (ICSI) treatment [14,16,18,21,25,29,30]. Therefore, we conducted this nested case-control study based on a large prospective multicenter cohort in a Chinese population, aiming to investigate the association of pre-treatment psychological distress with the clinical pregnancy rate among infertile couples undergoing IVF or ICSI treatment. Study population We conducted this nested case-control study within the Jiangsu Birth Cohort Study (JBC), a prospective and longitudinal birth cohort study. The recruitment and assignment of the JBC have been described in a previous study [31]. Briefly, the JBC recruited couples who were about to receive assisted reproduction at the Women's Hospital of Nanjing Medical University or Suzhou Affiliated Hospital of Nanjing Medical University. They completed standardized and structured questionnaires by face-to-face interview to collect their demographic information. The JBC followed up both assisted reproductive outcomes and obstetric outcomes using data from medical records and questionnaires. In this study, we identified women who underwent their first fresh IVF or ICSI cycle between November 2015 and January 2019. Women who had a history of three or more pregnancy losses were excluded from this study. A total of 150 women who did not obtain clinical pregnancy after the first IVF or ICSI fresh embryo transfer were identified as cases, and a total of 300 age-matched women who obtained clinical pregnancy were identified as controls. Clinical pregnancy was defined as the presence of one or more intrauterine gestational sacs with normal cardiac activity. The controlled ovarian hyperstimulation (COH) protocol was divided into three groups, including the agonist protocol, the antagonist protocol, and other protocols, based on the usage of a gonadotropin-releasing hormone agonist (GnRH-a) versus antagonist analog. In the GnRH-a protocol, GnRH-a was used in the mid-luteal phase of the first menstrual cycle. Fourteen days later, exogenous gonadotropin (Gn), including follicle-stimulating hormone (FSH), luteinizing hormone (LH), and human menopausal gonadotropin (hMG), was used to promote ovulation once pituitary down-regulation was achieved. Human chorionic gonadotropin (HCG) was injected when there were three or more follicles with a diameter of 16-18 mm, and the oocytes were retrieved after 36 h. In the GnRH antagonist protocols, Gn was used on days 2-3 of the menstrual cycle. When the dominant follicle reached 12-14 mm or LH > 10 U/L, antagonists were used. HCG was injected when there were three or more follicles with diameters of 16-18 mm, and oocytes were retrieved after 36 h. All methods and protocols for information collection were approved by the institutional review board of Nanjing Medical University, China NJMUIRB (2017) 002.
The recruitment was performed in accordance with the Helsinki Declaration. Informed, written consent was obtained from all participants. Psychological assessment Psychological distress of couples, including anxiety, depression, and perceived stress, was assessed before the assisted reproductive treatment. Anxiety was measured with the Self-Rating Anxiety Scale (SAS) [33]. The scale comprises 20 items covering autonomic, cognitive, motor, and central nervous system symptoms. Each item is scored on a Likert scale ranging from 1 to 4 (1 = none or a little of the time, 2 = some of the time, 3 = a good part of the time, 4 = most or all of the time). Participants with SAS standard scores ≥50 were considered at risk for clinical anxiety [34]. Depression was assessed using the Center for Epidemiologic Studies Depression Scale (CESD). The CESD consists of 20 items which are rated using a 4-point ordered response set to indicate how frequently symptoms were experienced during the previous week (0 = rarely or none of the time, 1 = some or a little of the time, 2 = occasionally or a moderate amount of the time, 3 = most or all of the time). The total CESD score was generated by summing the item responses and ranges from 0 to 60 (higher scores indicate more depressive symptoms). Participants with CESD scores ≥16 were considered at risk for clinical depression [35]. Perceived stress was assessed with the Perceived Stress Scale (PSS-10), which consists of 10 items purported to measure the degree of nonspecific appraised stress over the past month [36]. Each item was rated on a 5-point ordered response set to indicate how frequently the symptoms were experienced (0 = never, 1 = almost never, 2 = sometimes, 3 = fairly often, 4 = very often). The total PSS-10 score ranges from 0 to 40; a higher score represents greater stress [37]. Covariate information We selected several potential confounders as covariates by reviewing the literature [38][39][40][41][42][43]. Information on female body mass index (BMI), female educational attainment (< 12 years, ≥12 years), female occupation (mental worker, physical worker, or none), household income (< 50,000 CNY, 50,000-100,000 CNY, 100,000-200,000 CNY, > 200,000 CNY), female and male smoking (none versus any), alcohol use (rarely: < 1 time/month; regular: ≥1 time/month), sleep quality (good versus poor), and exercise (rarely: < 3 times/week; regular: ≥3 times/week) before the start of treatment was retrieved from the questionnaire data. Infertility factor (female factor, male factor, couple's factor, and unexplained factor), duration of infertility, and prior history of pregnancy loss (nulliparous, gravid with no prior history of loss, gravid with prior history of loss) were retrieved from medical records. Sleep quality was assessed by the Pittsburgh Sleep Quality Index (PSQI) developed by Buysse et al. [44]. It is a self-rated questionnaire assessing sleep quality and disturbances over a 1-month time interval; higher scores represent worse sleep quality [44]. We used the established cutoff > 5 to define poor sleep quality [44,45]. Statistical analysis Non-normally distributed variables were reported as the median (25th-75th range) and were compared using the Mann-Whitney U test among groups. Nominal variables were tested either with the Chi-square test or Fisher's exact test. Conditional logistic regression was used to estimate ORs with 95% CIs to assess the association between pre-treatment psychological distress and clinical pregnancy.
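For the matched design described above, conditional logistic regression conditions on the age-matched case-control strata. The study used R; the following is a minimal, hypothetical sketch of an equivalent model in Python/statsmodels (requires a recent statsmodels release), with toy data and variable names that are illustrative only and not taken from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Toy data: each matched set (1 case + 2 age-matched controls) shares a stratum id.
rng = np.random.default_rng(0)
n_sets = 150
df = pd.DataFrame({
    "stratum":   np.repeat(np.arange(n_sets), 3),
    "pregnancy": np.tile([0, 1, 1], n_sets),        # 1 = clinical pregnancy obtained
    "cesd":      rng.normal(10, 5, n_sets * 3),      # depression score (hypothetical)
    "sas":       rng.normal(40, 8, n_sets * 3),      # anxiety score (hypothetical)
    "pss":       rng.normal(15, 5, n_sets * 3),      # perceived stress score (hypothetical)
})

# Condition on the matched strata so that matching variables (age) drop out.
model = ConditionalLogit(
    endog=df["pregnancy"],
    exog=df[["cesd", "sas", "pss"]],
    groups=df["stratum"],
)
result = model.fit()
print(np.exp(result.params))     # odds ratios per one-point increase in each score
print(result.conf_int())         # 95% confidence intervals on the log-odds scale
```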
All statistical analyses were performed using the R software version 4.0.2 (http://www.R-project.org/). P < 0.05 was considered statistically significant. Results As shown in Table 1, a total of 150 cases (women who failed to obtain clinical pregnancy) and 300 controls (women who obtained clinical pregnancy) were included in this study. Age was adequately matched between cases and controls (P > 0.05). Similar distributions of other baseline characteristics were also observed between cases and controls. Infertility factors were not associated with clinical pregnancy (Supplementary Table 1). No statistically significant association was observed between maternal psychological distress scores and clinical pregnancy: adjusted ORs were 1.00 (95% CI 0.97-1.03) for anxiety, 0.98 (95% CI 0.95-1.02) for depression, and 0.98 (95% CI 0.95-1.01) for perceived stress, respectively. Similar associations were observed in their partners and in couples (Table 2). When treating depression and anxiety as categorical variables, 62 (13.8%) of the 450 women in the present study were classified as having clinical depression and 11 (2.4%) as having clinical anxiety. Psychological distress symptoms were also not associated with the clinical pregnancy rate: adjusted ORs of logistic regression were 0.27 (95% CI 0.03-2.33) for anxiety and 0.88 (95% CI 0.46-1.68) for depression, respectively. Furthermore, using women without any psychological symptom (neither depression nor anxiety) as a reference, the ORs were 0.95 (95% CI 0.49-1.84) for those exposed to one symptom (anxiety or depression) and 0.29 (95% CI 0.03-2.63) for two symptoms (anxiety and depression), respectively (Table 3). Discussion In this nested case-control study, we found that psychological distress before IVF or ICSI treatment, in general or as specific types, was not associated with clinical pregnancy in infertile couples during the first fresh cycle. This study is the first to evaluate the effect of pre-pregnancy depression, anxiety, or stress, individually and comprehensively, on the probability of clinical pregnancy among IVF/ICSI-treated infertile women in the Chinese population. Our finding is supported by several recent studies [14,[26][27][28][29][46][47][48]. Three prospective studies on stress and IVF outcome concluded that stress in women, before or during treatment, was not correlated with pregnancy outcome [26][27][28]. Concerning depression and anxiety, four prospective studies concluded that there was no association between pre-treatment depression/anxiety in women and pregnancy outcome [14,29,46,48]. In addition, only one study in a Chinese population (264 IVF or ICSI women) had explored depression, anxiety, and stress simultaneously, and it reported that women's stress, anxiety, and depression were unlikely to be correlated with clinical pregnancy [29]. Given that most previous studies focused on only a single aspect of psychological distress and lacked control for potential confounders, our study provides more reliable evidence on this association. Some studies showed that psychological distress predicted a higher rate of poor outcomes [25,[49][50][51]. However, several limitations of those studies should be considered. First, most of the former studies did not restrict inclusion to women undergoing their first fresh IVF or ICSI cycle, although women's emotional experiences might be affected by previous experience of ART treatment [13,14,52,53]. Second, few studies used a multidimensional evaluation of psychological distress in infertile couples.
Third, a majority of studies failed to fully control for potential confounding factors for the association, such as lifestyle factors. Table 2 Conditional logistic regression analysis of psychological distress level on the pregnancy rate of the first IVF or ICSI cycle among 450 couples. Values are median and range for continuous variables unless indicated otherwise. a P values were derived with the Mann-Whitney U test for non-normally distributed continuous variables. b Univariable conditional logistic regression analyses of psychological distress level on the pregnancy rate of the first IVF or ICSI cycle. c Model 1: multivariable conditional logistic regression analyses were adjusted for female pre-treatment BMI, educational attainment, occupation, household income, infertility factor, and duration of infertility. d Model 2: multivariable conditional logistic regression analyses were adjusted for female pre-treatment BMI, educational attainment, occupation, household income, infertility factor, duration of infertility, prior history of pregnancy loss, alcohol use, sleep quality, exercise, and female and male smoking before the start of treatment. e Variable contains missing data. The main strengths of our study include the prospective cohort-based nested case-control design and standard assessments conducted separately in two Chinese ART clinics. In addition, our data allowed adjustment for potential confounders such as causes of infertility and important lifestyle factors. Some limitations should also be noted. First, although the psychological scales were administered simultaneously before treatment, the single time point limited our ability to comprehensively evaluate the psychological influence on the outcome of ART treatment. Second, data on psychotherapy or psychopharmacological treatments were not available, so we could not control for their potential confounding effect on our findings. Third, our sample size is slightly larger than that of two existing studies in China [27,29], but it might still be inadequate to detect relatively small effects. In addition, we included unequal groups (150 cases versus 300 controls) in our study, which may introduce additional bias [59]. Therefore, a validation study with a larger sample size is warranted in the future. Conclusion In summary, our study on psychological distress and IVF or ICSI outcome did not observe a significant influence of pre-treatment psychological distress (e.g., anxiety or depression) on the rate of clinical pregnancy. Nevertheless, women are still encouraged to express psychological distress before treatment, and the development of intervention strategies to improve coping is helpful, not only for reducing emotional suffering but also for avoiding discontinuation of treatment before the goal of a live birth is reached. Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s12884-021-04202-9. Table 3. Conditional logistic regression analysis of psychological distress symptoms on the pregnancy rate of the first IVF or ICSI cycle among women without endometriosis, chronic endometritis or autoimmune disorders. Supplementary Figure 1. Flowchart for inclusion and exclusion of the study population.
Table 3 Conditional logistic regression analysis of psychological distress symptoms on the pregnancy rate of the first IVF or ICSI cycle among 450 women. a Univariable conditional logistic regression analyses of psychological distress level on the pregnancy rate of the first IVF or ICSI cycle. b Model 1: multivariable conditional logistic regression analyses were adjusted for female pre-treatment BMI, educational attainment, occupation, household income, infertility factor, and duration of infertility. c Model 2: multivariable conditional logistic regression analyses were adjusted for female pre-treatment BMI, educational attainment, occupation, household income, infertility factor, duration of infertility, prior history of pregnancy loss, alcohol use, sleep quality, exercise, and female and male smoking before the start of treatment. d Norm was defined as women without any symptom (neither depression nor anxiety). One symptom was defined as women with either depression or anxiety. Two symptoms were defined as women with both depression and anxiety.
2021-10-27T13:40:46.524Z
2021-10-27T00:00:00.000
{ "year": 2021, "sha1": "01ec3f5e5b0934590ba27b0dd00b6429f072fdd6", "oa_license": "CCBY", "oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-021-04202-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01ec3f5e5b0934590ba27b0dd00b6429f072fdd6", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258082891
pes2o/s2orc
v3-fos-license
Prediction of drug sensitivity based on multi-omics data using deep learning and similarity network fusion approaches With the rapid development of multi-omics technologies and the accumulation of large-scale bio-datasets, many studies have pursued a more comprehensive understanding of human diseases and drug sensitivity from multiple biomolecules, such as DNA, RNA, proteins and metabolites. It is difficult to systematically and comprehensively analyze complex disease pathology and drug pharmacology using single-omics data. Molecularly targeted therapy-based approaches face challenges such as insufficient target gene labeling ability and the absence of clear targets for non-specific chemotherapeutic drugs. Consequently, the integrated analysis of multi-omics data has become a new direction for scientists to explore the mechanisms of disease and drugs. However, the available drug sensitivity prediction models based on multi-omics data still suffer from problems such as overfitting, lack of interpretability and difficulties in integrating heterogeneous data, and their prediction accuracy needs to be improved. In this paper, we propose a novel drug sensitivity prediction (NDSP) model based on deep learning and similarity network fusion approaches, which extracts drug targets using an improved sparse principal component analysis (SPCA) method for each omics data type and constructs sample similarity networks based on the sparse feature matrices. Furthermore, the fused similarity networks are put into a deep neural network for training, which greatly reduces the data dimensionality and weakens the risk of overfitting. We use three types of omics data, RNA sequencing, copy number aberration and methylation, and select 35 drugs from the Genomics of Drug Sensitivity in Cancer (GDSC) database for experiments, including Food and Drug Administration (FDA)-approved targeted drugs, FDA-unapproved targeted drugs and non-specific therapies. Compared with some current deep learning methods, our proposed method can extract highly interpretable biological features to achieve highly accurate sensitivity prediction of targeted and non-specific cancer drugs, which is beneficial for the development of precision oncology beyond targeted therapy. Introduction In the last few years, owing to the continuous development of high-throughput bio-data and bioinformatics technologies, increasing attention has been paid to the analysis of tumor biomarkers and drug targets. The use of genomic data to guide the treatment of cancer patients represents the central principle, matching patients to specific tumor types and treatments based on molecularly targeted drugs (Zhang and Yue, 2015) (Kumar-Sinha and Chinnaiyan, 2018). Researchers have identified many molecular lesions as triggers that drive cancer, and suggested that each cancer has its own genetic imprint and tumor markers. The corresponding therapeutic drug is designed against a well-studied target that promotes tumor growth (the target can be a protein molecule on the surface or inside the tumor cell, or a gene fragment). However, drug response and sensitivity to cancer treatment (chemotherapy or targeted drugs) reflect complex pharmacology that usually depends on many factors, especially the patient's genomic profile (Lee et al., 2018). In clinical practice, molecularly targeted drugs are recommended for patients only if the target gene is mutated. However, according to available studies, only about 9% of patients can be matched to known target genes in precision therapy (Min et al., 2018).
Additionally, only about 11% of patients can enter clinical trials. Most importantly, only 5% of patients achieve optimal treatment outcomes in precision oncology (Cheng et al., 2018) (Marquart et al., 2018) (Zehir et al., 2017). Consequently, there are limitations in selecting drugs for molecularly targeted therapies based on the genomic status of the patient. In recent years, large-scale pharmacogenomic studies based on cell lines or patient-derived xenograft (PDX) models have worked to uncover relationships between multi-omics biosignatures and drugs, aiming to obtain drugs that match tumors. The results of PDX models and existing large-scale pharmacogenetic screens of cell lines show that nearly all cancer patients are sensitive to one or more targeted drugs or non-specific chemotherapeutic drugs. As a result, how to accurately match cancer patients with the drugs to which they are sensitive is currently a critical research challenge. As summarized previously, there are usually two computational and analytical approaches for predicting drug response. The first is to use regression approaches to predict the value of the drug response measure for each cell line, and the second is to classify the sensitivity of each cell line to each drug (Ahmadi Moughari and Eslahchi, 2021). Choi et al. presented a computational model based on elastic network regressions and deep neural networks (Choi et al., 2020). They predicted the probability of drug sensitivity of a specific cell line to a drug based on the similarity of the drug to a reference group. Wang et al. proposed a matrix factorization with similarity regularization model (SRMF) to predict drug response values, which is based on the gene expression similarity of cell lines and pharmacochemical similarity. In addition, there are many other regression-based computational methods. When recommending appropriate and effective therapies for cancer patients, it is important to determine the drugs to which they are sensitive. However, even knowing the drug response value itself may not provide additional information in clinical treatment. Therefore, classifying cell lines as sensitive or resistant to each drug is a more straightforward and effective approach than regressing their response values. Furthermore, the regression problem can be transformed into a classification problem by setting a threshold value. Most studies have shown that gene expression data is the most powerful data type for classifying and predicting drug response (Ding et al., 2016) (Iorio et al., 2016) (Graim et al., 2018) (Koras et al., 2020). In 2014, Geeleher et al. used baseline gene expression levels and in vitro drug sensitivity of cell lines to predict clinical drug response (Geeleher et al., 2014). Van de Vijver et al. used gene expression microarrays to assess the prognosis of patients with primary breast cancer (Van De Vijver et al., 2002). Nonetheless, with the development of next-generation sequencing and mass spectrometry technologies, which have accelerated the development of omics research toward quantification and high throughput, there is an increasing need for the ability to fuse biological features to study whole treatment processes. Proteomic, transcriptomic, methylomic, histone post-translational modification, and microbiomic features all influence the host response to various diseases and cancers.
The integration of multi-omics approaches has led to a deeper understanding of disease etiology, since data from a single omics cannot capture the complexity of all factors associated with understanding a phenomenon (e.g., disease) (Zitnik et al., 2019). Models that integrate multi-omics data to identify patients' drug sensitivity in advance have become a central focus of cancer research (Olivier et al., 2019) (Chaudhary et al., 2018). Researchers have already proposed several multi-omics machine learning and deep learning methods for drug sensitivity prediction. However, biomolecular data are often high-dimensional; for example, methylation data may have 400,000 to 500,000 dimensions while the sample size is only about 1,000. These methods may therefore suffer from overfitting and have difficulties in fusing multi-omics data. In addition, the interpretability of deep neural networks is relatively low, and biomedical methods lacking interpretability make it difficult for doctors to reach reliable diagnoses. Moreover, the accuracy of these existing models also has some room for improvement. In response to these challenges, we propose a novel multi-omics drug sensitivity prediction model (NDSP) based on deep learning and similarity network fusion approaches. The model extracts biomarkers using an improved sparse principal component analysis (SPCA) method for each omics data type and constructs sample similarity networks based on the sparse biomarker matrices, which greatly reduces the dimensionality of the multi-omics data and weakens the risk of overfitting in the training of the deep learning model. Finally, the fused similarity networks are put into a deep neural network for training, and the model can make full use of the high integrability and interpretability of the similarity networks. Compared with some current deep learning methods, our proposed model has the ability to handle high-dimensional data and offers highly interpretable feature selection. More importantly, the model has higher prediction accuracy than existing models for both targeted and non-specific therapeutic drugs, which is beneficial for the development of precision oncology. 2 Related work 2.1 Single gene expression data models A number of researchers have proposed cancer drug sensitivity prediction models based on single-omics data. For example, Oskooei et al. proposed a network-based tree integration (netBite) machine learning approach to identify biomarkers of drug sensitivity using gene expression data. The authors applied the netBite model to a set of GDSC data for 50 anticancer drugs, where Linifanib was able to achieve an accuracy of about 0.7, and demonstrated that netBite outperformed Random Forest in predicting IC50 drug sensitivity, but only for drugs targeting membrane receptor pathways (MRPs): IGFR, RTK and EGFR signaling pathways (Oskooei et al., 2019). Geeleher et al. integrated several computational and statistical tools such as linear ridge regression, logistic ridge regression, elastic network and lasso regression to analyze data on 138 drugs from nearly 700 cell lines to predict drug sensitivity in vivo. The experiments proved that ridge regression models trained on GDSC gene expression data could be translated to clinical trial data for Erlotinib, Docetaxel, Bortezomib, and Cisplatin.
The paper also indicated that the inclusion of non-breast cancer samples in the model training process improved the predictive accuracy of the final model compared with models trained on breast cancer cell lines only (Geeleher et al., 2014). This ridge regression approach based on gene expression also roughly predicted the drug response of The Cancer Genome Atlas (TCGA) samples (Geeleher et al., 2017) (Weinstein et al., 2013). Multi-omics data models Because biological systems are large and complex, single-omics data cannot capture all the factors relevant to understanding a biological phenomenon (e.g., disease) (Zitnik et al., 2019). Learning methods that integrate multi-omics data are beginning to be widely used in biology and medicine, for example for the identification of driver genes (Dimitrakopoulos et al., 2018) (Mo et al., 2013), patient stratification (Khakabimamaghani et al., 2019), cancer subtype discovery (Liang et al., 2014), patient survival prediction (Chaudhary et al., 2018), and drug sensitivity prediction. More and more multi-omics drug sensitivity datasets are being made publicly available, especially for pan-cancer models (Iorio et al., 2016). The application of multi-omics data allows machine learning models to better characterize biological processes from different perspectives (Wang et al., 2014) (Argelaguet et al., 2018). Ding et al. proposed a data-driven precision medicine approach that learns new biological features from omics data to address the dimensionality challenge. The copy number variation, mutation, and gene expression data were concatenated, and variance-based mixed-fit feature selection was performed using the original omics features as the input to an elastic network approach to predict the binarized IC50 values (Ding et al., 2018). Chiu et al. also proposed an autoencoder-based integrated genomic profiling deep learning model for drug response prediction (Chiu et al., 2019). The model contained three deep neural networks. The first was a mutation encoder pre-trained using a large pan-cancer dataset (The Cancer Genome Atlas; TCGA) to abstract the core representation of high-dimensional mutation data. The second was a trained expression encoder, and the third was a drug response prediction network that integrated the first two subnetworks. Sharifi-Noghabi et al. presented a multi-omics drug response prediction model named MOLI based on deep neural networks (Sharifi-Noghabi et al., 2019), which integrated three omics data types, somatic mutations, copy number aberrations and gene expression, for multi-omics analysis. To address the key challenge of how to integrate diverse data types, the model proposed the first end-to-end post-integration approach. This approach built a separate type-specific encoding sub-network for each omics data type, with each sub-network learning features for its own omics data type. The extracted features were then concatenated into a joint feature representation, which was optimized by a cost function made up of a binary cross-entropy loss and a triplet loss, while updating the sub-networks for all three omics jointly. Patient similarity network Although machine learning methods can handle large-scale data, they are usually considered black boxes that do not explain well how specific features contribute to the prediction. Interpretability is particularly needed in clinical treatment.
Patient similarity networks, a framework that excels at integrating heterogeneous data, handling sparse data, and generating interpretable models, have been applied to several biological fields with good results (Wang et al., 2014). Pai et al. proposed the interpretable patient classification model netDx, a supervised machine learning approach similar to a recommender system that uses integrated patient similarity networks (Pai et al., 2019). Patients in unknown states can be grouped according to their similarity to determine their risk of a certain disease. The model integrates six types of data across four cancer types, and the experimental results show that netDx performs significantly better than most other machine learning methods on most cancer types. Compared with traditional machine learning-based patient classifiers, the results of netDx are more interpretable and allow visualization of decision boundaries in the context of the patient similarity space. Limitations of the existing models The existing deep learning approaches based on multi-omics data still face four major challenges. First, learning new informative features from omics data is a key step for model-based drug sensitivity prediction. However, biomolecular datasets tend to be high-dimensional, i.e., with a large number of features and a small number of samples, so there is a significant risk of overfitting when using deep learning models. Second, deep learning models are a black box, and researchers need to spend a lot of effort to explain what role specific features play in prediction. Black-box approaches are difficult to succeed with in the clinical setting because physicians must have an understanding of the underlying relevant features of the disease in order to make a confident and reliable diagnosis. Third, how to integrate different data types is a key challenge in multi-omics analysis, and the main strategies are early integration and late integration. In the previously mentioned models that fuse the feature representations learned from each omics before classification, a large number of unaligned gene loci are inevitably discarded to facilitate feature fusion, leading to information loss. Fourth, the results of existing multi-omics drug response prediction methods are unsatisfactory, and there is room for improvement. Datasets In this study, we utilize the available oncology therapeutic genomic data from the Genomics of Drug Sensitivity in Cancer (GDSC) database. This dataset has been widely analyzed with statistical and machine learning approaches for drug sensitivity prediction, for example cell line similarity and drug similarity based models (Sheng et al., 2015), quantitative structure-activity relationship (QSAR) analysis using kernelized Bayesian matrix decomposition (Ammad-Ud-Din et al., 2014), and lasso and elastic network models for predicting drug sensitivity and identifying targets (Barretina et al., 2012) (Park et al., 2015). We select mutation data, cell line annotations and drug IC50 data from GDSC, including targets, signaling pathways, point mutation and copy number variation information, IC50 values of some genes, and several phenotypes for 518 oncology drugs in 988 cell lines. For the drug sensitivity study, we select 35 drugs from the GDSC database as experimental subjects, including 14 FDA-approved targeted therapeutics, 16 drugs with clear targets but not yet approved by the FDA, and 5 non-specific cancer therapeutics without targets, as shown in Figure 1.
RNA-Sequence data The genomic signature of each cell line contains RNA-Sequence values for 44,421 probes; RNA sequencing is also known as whole transcriptome shotgun sequencing (WTSS). The dataset contains a transcriptional analysis of 1,000 human cancer cell lines, generated to explore questions such as the influence of genomic signatures on drug response and whether genomic alterations synergistically explain more of the variation in drug response. RNA sequencing has been considered an effective method for gene discovery, helping to view different transcripts of genes, post-transcriptional modifications, gene fusions, mutations/SNPs, changes in gene expression over time, and differences in gene expression between groups. Copy number aberration (CNA) data The CNA data is downloaded from Cell Model Passports: https://cellmodelpassports.sanger.ac.uk/downloads. Copy number aberration exists in DNA fragments of natural populations and is a common form of structural genomic variation. Abnormal DNA copy number variation is an important molecular mechanism for many human diseases such as cancer and hereditary diseases. Deleted fragments may contain tumor suppressor genes, while amplified fragments may harbor oncogenes. The genomic signature of each cell line in the collated data contains somatic copy number variation for 21,878 gene loci. Methylation data The methylation data is downloaded from Gene Expression Omnibus (GEO): https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE68379. It reports how cancer-driven alterations detected in 11,215 tumors and 29 different tissues (integrating multiple omics) correlate with responses to 265 compounds in 1,001 cancer cell lines. Cell lines are very similar to tumors in these areas of alteration, and there are many examples of altered genes and pathways conferring drug sensitivity and resistance. Methylation is an important modification of proteins and nucleic acids that regulates the expression and silencing of genes; it is closely associated with many diseases such as cancer, aging, and Alzheimer's disease, and is one of the key subjects of epigenetics. Here we use DNA methylation, which turns off the activity of certain genes; altered DNA methylation status is prevalent in tumors. The genomic signature of each cell line in our experiments contains the methylation status values of 365,860 CpG loci. Sparse principal component analysis We adopt sparse principal component analysis (SPCA) (Zou et al., 2006). Suppose X ∈ R^(m×n) is a data matrix with m features and n samples. The SPCA via the L0 penalty can be adopted to analyze the matrix: max_u u^T X X^T u, subject to ‖u‖_2 = 1 and ‖u‖_0 ≤ s (1), where u is an m × 1 vector representing the first principal component (PC) loading, s represents the number of genes retained by the model, ‖u‖_2 denotes the L2 (Euclidean) norm and ‖u‖_0 denotes the L0 norm, which is equal to the number of non-zero elements of u. Researchers usually use the singular value decomposition (SVD) framework to solve this problem (Lin et al., 2016). Therefore, Formula (1) can also be written as: max_{u,v} u^T X v, subject to ‖u‖_2 = ‖v‖_2 = 1 and ‖u‖_0 ≤ s (2), where v is an n × 1 vector representing the first principal component. The following alternating iterative projection strategy (Journée et al., 2010) is used to solve the problem in Formula (2) until convergence: u ← P(Xv, s)/‖P(Xv, s)‖_2 and v ← X^T u/‖X^T u‖_2, where P(z, s) is called the s-sparse projection operator. It is a p-dimensional column vector and its i-th (i = 1, 2, ..., p) element is defined as follows: [P(z, s)]_i = z_i if i ∈ supp(z, s), and 0 otherwise, where supp(z, s) denotes the set of indexes of the s largest absolute elements of z. Our proposed model uses SPCA for dimensionality reduction and feature selection.
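A minimal NumPy sketch of the alternating scheme above is given below. It follows the reconstructed constrained forms as written and uses illustrative matrix sizes; it is not the authors' implementation.

import numpy as np

def sparse_projection(z, s):
    """The s-sparse projection operator P(z, s): keep the s largest-magnitude
    entries of z and set every other entry to zero."""
    p = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    p[idx] = z[idx]
    return p

def spca_first_component(X, s, n_iter=200, tol=1e-8):
    """Alternating projection for the rank-1 problem in Formula (2):
    max u^T X v  s.t.  ||u||_2 = ||v||_2 = 1, ||u||_0 <= s,
    where X is an (m features) x (n samples) matrix."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    u = np.zeros(m)
    for _ in range(n_iter):
        u_new = sparse_projection(X @ v, s)
        u_new /= np.linalg.norm(u_new) + 1e-12
        v = X.T @ u_new
        v /= np.linalg.norm(v) + 1e-12
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, v  # u: sparse loading (selected genes), v: first PC scores

# toy usage: 500 features, 60 samples, retain s = 20 genes
X = np.random.default_rng(1).standard_normal((500, 60))
u, v = spca_first_component(X, s=20)
selected_genes = np.flatnonzero(u)  # indices of the retained features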
SPCA is an unsupervised model, so a feature importance parameter t is calculated based on a classical machine learning model, the Random Forest (RF). The unsupervised SPCA method and the supervised RF classification model are combined to evaluate whether the genes in the selected PCs can better predict the sensitivity of the drugs. The workflow of the SPCA with the parameter t is shown in Figure 2. Suppose there are M features X_1, X_2, ..., X_M, K categories, and D decision trees in the random forest. If the feature X_j appears at a node of a decision tree, the Gini index score GI_j for the feature X_j at that node is expressed as follows: GI_j = Σ_k p_jk(1 − p_jk) = 1 − Σ_k p_jk², where p_jk denotes the proportion of category k at the node split on feature X_j. Suppose node(X_j) is the set of nodes at which the feature X_j appears; the importance t_jd of the feature X_j in the decision tree d is the total decrease in the Gini index over these nodes, t_jd = Σ_{q ∈ node(X_j)} (GI_q − GI_q^left − GI_q^right), where GI_q^left and GI_q^right denote the Gini indices of the two child nodes created by the split at node q. The importance t_j of the feature X_j in the random forest is t_j = Σ_{d=1}^{D} t_jd. Finally, all the obtained importance scores are normalized to calculate the feature importance, t_j ← t_j / Σ_{i=1}^{M} t_i, where M denotes the number of features. The improvement process of the SPCA with feature importance is described below. First, the SPCA analyzes the data matrix X, retaining the s largest elements of the absolute value of z and setting all other positions to 0 in the sparse principal component operation. At this point, the features in the selected principal components are put into the RF classifier for evaluation. The obtained feature importance t is then used to update the data matrix X. This loop is repeated until convergence. Similarity network fusion After completing the SPCA with feature importance, we obtain an independent feature matrix for each omics data type: the RNA-Sequence matrix S ∈ R^(a×n), the methylation feature matrix M ∈ R^(b×n) and the CNA feature matrix C ∈ R^(c×n), where a, b, c denote the numbers of features retained in each of the three omics. Next, a sample similarity network needs to be constructed for each omics data type. Two main similarity calculation algorithms are used: the Pearson correlation coefficient is suitable for linear continuous variables, and the Kendall correlation coefficient is suitable for discrete variables. For the RNA-Sequence and methylation data, we use the Pearson correlation coefficient: r(x, y) = Σ_i (x_i − x̄)(y_i − ȳ) / √(Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)²), where the sum runs over the retained gene loci, x_i and y_i denote the expression value of the i-th gene locus of samples x and y, and x̄ and ȳ denote the mean gene expression values of samples x and y. The CNA data are discrete integers representing copy number multiples, so we use the Kendall rank correlation coefficient: τ(x, y) = (C − E) / (n(n − 1)/2), where C denotes the number of concordant pairs of elements in x and y, E denotes the number of discordant pairs, and n(n − 1)/2, the binomial coefficient "n choose 2", is the number of ways to select two items. After the similarity calculation, three independent sample similarity matrices are obtained: S′ ∈ R^(n×n), M′ ∈ R^(n×n) and C′ ∈ R^(n×n). The data of each omics type are thus turned into n × n matrices, so that hundreds of thousands of dimensions of omics data are reduced to thousands of dimensions of sample similarity matrices, which not only alleviates the problem of high dimensionality but also makes the integration of multi-omics heterogeneous data much easier. We directly stitch the matrices of the several omics data types together horizontally, as shown in Figure 3, and then use the deep learning model to perform classification, instead of turning the multi-omics data into one matrix by superposition. This avoids information loss during fusion of the multiple omics data.
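As an illustration only (random data and invented feature counts), the per-omics sample similarity matrices and the horizontal stitching described above could be computed as follows:

import numpy as np
from scipy.stats import kendalltau

def pearson_similarity(F):
    """F: (features x n samples). Returns the n x n Pearson similarity between sample columns."""
    return np.corrcoef(F.T)

def kendall_similarity(F):
    """Kendall rank correlation between sample columns of a discrete feature matrix."""
    n = F.shape[1]
    S = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(F[:, i], F[:, j])
            S[i, j] = S[j, i] = tau
    return S

# hypothetical sparse feature matrices from the SPCA step (a, b, c features x n samples)
rng = np.random.default_rng(0)
n = 50
S_rna  = pearson_similarity(rng.standard_normal((200, n)))       # RNA-Sequence
S_meth = pearson_similarity(rng.standard_normal((300, n)))       # methylation
S_cna  = kendall_similarity(rng.integers(-2, 3, size=(100, n)))  # CNA (discrete)

# horizontal stitching: an n x 3n fused representation fed to the classifier
fused = np.hstack([S_rna, S_meth, S_cna])
assert fused.shape == (n, 3 * n)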
Deep learning approach We construct a simple 7-layer deep neural network model and put the n × 3n fused similarity networks into it for training (Figure 4). This neural network contains three one-dimensional convolutional layers, each followed by a max pooling layer, with a batch normalization layer added after the last convolutional layer. In addition, the first two fully connected layers use "relu" as the activation function, while the third fully connected layer uses "softmax". The cross-entropy loss function is used, which is a common choice for classification problems. Experiment results The deep learning autoencoder model (Chiu et al., 2019) links a mutation encoder and a gene expression encoder to the prediction network. The multi-omics post-integration deep neural network model (MOLI) (Sharifi-Noghabi et al., 2019) takes somatic mutation, copy number aberration and gene expression data as input and integrates them for drug response prediction. We conduct experiments on these two deep learning models and on the interpretable patient classification model using an integrated patient similarity network with different classifiers (netDx-RF, netDx-EN, netDx-AdaBoost, netDx-SVR, netDx-KNN) (Pai et al., 2019) to compare their results with our proposed model NDSP. The 35 drugs we selected include 30 targeted drugs and 5 non-targeted chemotherapy drugs, with the targeted drugs divided into 14 FDA-approved drugs in clinical use and 16 FDA-unapproved drugs. The results are evaluated using the sensitivity, specificity, precision, accuracy and F1-score of the model as indicators. Finally, we use the Metascape platform to perform enrichment analysis of the targets retained by our proposed method NDSP during feature selection and analyze the association and biological significance of these targets with the corresponding drug and disease. In the data preprocessing step, we collected and classified the multi-omics samples (cell lines) into sensitive and non-sensitive classes based on the binarized IC50 values for each specific drug. The unsupervised SPCA in our proposed model NDSP is first used for dimensionality reduction and feature selection. At this stage, the PCs obtained from the SPCA may not relate to the specific drug. Therefore, the supervised Random Forest (RF) model is combined with it to evaluate whether the genes in the selected PCs can better predict the sensitivity of the specific drug. The feature importance parameter t is calculated based on the classification results of the RF. By updating the feature importance t and repeating the loops of SPCA and RF, the genes in the selected PCs become strongly correlated with the sensitivity of the specific drug. Results of targeted therapy drugs The mean values of each metric for our proposed method NDSP and the seven baseline models in the 30 targeted drug trials are shown in Table 1. As can be seen from Table 1, the average sensitivity and specificity of NDSP reach 91% and 91%, respectively, and basically exceed the baseline models on every index. Although the specificity is a little lower than that of the netDx model using the RF classifier, the sensitivity is 23% higher than that of the netDx model. Overall, the best performer among the seven baseline models is still the MOLI model, but its average sensitivity, specificity, precision, accuracy and F1 scores only reach 0.76, 0.86, 0.82, 0.82, and 0.8, respectively. As shown in Figure 5, the accuracy of NDSP basically reaches 0.9.
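As a concrete illustration of the 7-layer network described at the start of this section, a minimal PyTorch-style sketch is given below. Channel widths, kernel sizes and the adaptive pooling layer are illustrative assumptions, since the text does not specify these hyper-parameters; the softmax is folded into the cross-entropy loss, as is conventional in PyTorch.

import torch
import torch.nn as nn

class NDSPNet(nn.Module):
    """Sketch of a 7-layer 1-D CNN: three Conv1d blocks (each followed by max
    pooling, batch norm after the last) plus three fully connected layers."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.BatchNorm1d(64),
            nn.AdaptiveAvgPool1d(8),   # fixes the flattened size regardless of input length
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),  # logits; softmax is applied inside CrossEntropyLoss
        )

    def forward(self, x):              # x: (batch, 1, 3n) rows of the fused similarity matrix
        return self.classifier(self.features(x))

model = NDSPNet()
x = torch.randn(4, 1, 150)             # e.g. n = 50 samples -> 3n = 150 columns
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 1, 0]))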
The netDx model is tested with five classifiers: EN, SVR, KNN, AdaBoost, and RF. We can see that the accuracy of netDx with the RF classifier is the best among them, but it is still some distance from our proposed model NDSP. NDSP has the highest overall accuracy and fewer outlier points, indicating stable performance. In general, the experimental results of NDSP are the best with regard to accuracy. Figure 6 shows the prediction precision of NDSP trained on the 30 targeted therapy drugs. It can be seen that the precision of our model on each targeted drug is above 0.82 and is mainly concentrated between 0.88 and 0.93. Results of non-targeted therapy drugs To verify whether our proposed model NDSP can work in precision oncology beyond targeted therapy, we conduct experiments on 5 non-specific therapeutic drugs. The mean values of each index for the eight models in the five experiments with non-targeted drugs are shown in Table 2. The comparison results in Table 2 are similar to those of the experiments on targeted drugs. NDSP achieves an average sensitivity and specificity of 0.9 and 0.92, respectively, on non-targeted therapy drugs, and basically exceeds the baseline models in all metrics. The specificity of the netDx model using the RF classifier is higher than that of the NDSP model, but its sensitivity is only 0.34, which is 66% lower than that of the NDSP model. In the non-targeted drug experiments, the seven baseline models perform much worse than in the targeted drug experiments, probably because of the low number of experiments, but the NDSP model still maintains good performance. Overall, the best performer among the seven baseline models is still the MOLI model, but its average sensitivity, specificity, precision, accuracy and F1 scores are only 0.69, 0.80, 0.75, 0.78, and 0.73, respectively, which are still some distance from NDSP. (In the tables, p-0 denotes the precision of classifying class 0; p-1 denotes the precision of classifying class 1; F1score-0 denotes the F1-score of classifying class 0; F1score-1 denotes the F1-score of classifying class 1; bold values mark the best results.) FIGURE 5 Accuracy of all 8 models on 30 targeted therapy drugs (the points outside the boxplots are outliers). FIGURE 6 Precision of NDSP on targeted therapy drugs. FIGURE 8 Precision of NDSP on non-targeted therapy drugs. The specificity of the netDx models using RF, EN, AdaBoost and SVR is generally good, but the sensitivity is poor in all cases. The Autoencoder model also has imbalanced sensitivity and specificity. Figure 7 shows that the NDSP model has the highest prediction accuracy, reaching above 0.9 with small variation. Among the seven baseline models, the netDx model using RF is the best, but its accuracy is only 0.8 to 0.9, which is not as good as the NDSP model. The other models have accuracies between 0.4 and 0.85 with large variability. Figure 8 demonstrates that the prediction precision of our model on all 5 non-targeted drugs is above 0.88. Overall, the results of NDSP are optimal for both molecularly targeted and non-specific drugs, which indicates that NDSP is generalizable and can be useful for precision therapy beyond targeted therapy. Enrichment analysis To further validate the biological interpretability of our proposed model NDSP, we perform a biological enrichment analysis using the results of the multi-omics gene selection of the new model for the drug Alectinib.
The first principal component is obtained from the data of each omics type by the SPCA module with the addition of a classifier. The drug Alectinib is mainly used for the treatment of non-small cell lung cancer and blocks the activity of ALK. FIGURE 9 Results of pathway enrichment analysis for the drug Alectinib: (A) pathway results for the first PC of the RNA-seq omics data; (B) pathway results for the first PC of the CNA omics data; (C) pathway results for the first PC of the methylation omics data. FIGURE 10 Enrichment analysis in DisGeNET. The results of the pathway enrichment analysis are shown in Figure 9. A concentrated distribution of the gene sites selected by our model indicates that the gene set is associated with a specific function or phenotype, and that the model is able to select pathways and gene sites that are relevant to lung cancer. For example, in the RNA-seq omics results, the ERAD pathway corresponding to GO:1904292 is highly associated with heritable lung disease regulatory mechanisms. Analysis in DisGeNET, an integrated platform of information on human disease-associated genes and variants, shows that the selected gene sites are associated with non-small cell lung cancer, as shown in Figure 10. In the CNA omics data, HIF-1 survival signaling, corresponding to WP3614 in WikiPathways, is associated with tumor development. Discussion We proposed a novel drug sensitivity prediction model (NDSP) that combines biological multi-omics data, SPCA with a classical machine learning classifier, patient similarity networks and deep learning. We use data from three omics: RNA sequencing data, copy number aberration data and DNA methylation data. The SPCA with feature importance method is used for feature selection. Then we use patient similarity networks to measure the similarity of the three omics feature matrices separately and obtain three matrices of n × n size, which is very efficient for integrating heterogeneous data and can generate interpretable models. This greatly reduces the size of the matrices, turning hundreds of thousands of dimensions of omics data into a few thousand dimensions of sample similarity matrices and thereby addressing the high dimensionality of the data. Moreover, it also makes the integration of multi-omics heterogeneous data easier. Finally, the three similarity networks are spliced horizontally and put into a deep neural network model for classification prediction. We have conducted experiments using both targeted and non-targeted drugs. The available results show that our proposed model NDSP outperforms classical machine learning and deep neural network models in terms of sensitivity, specificity, accuracy, precision and F1-score. More importantly, the drugs selected for the experiments include both targeted and non-specific therapeutic drugs, which implies that the model has a certain degree of generality and can be useful in precision therapy beyond traditional precision oncology and targeted therapy. The results of the enrichment analysis also show that the targets selected by NDSP are biologically interpretable and have some correlation with the corresponding drugs and diseases. This will guide physicians in selecting optimal treatment options while minimizing the negative effects associated with ineffective treatments, thereby fulfilling the promise of precision therapy. Data availability statement The datasets presented in this study can be found in online repositories.
The names of the repository/repositories and accession number(s) can be found in the article/supplementary material. Author contributions X-YL and X-YM conceived the presented idea, carried out the experiments, analyzed the results, and wrote the manuscript. X-YL conceived the project and revised the manuscript. All authors read and approved the final manuscript.
2023-04-13T13:05:32.312Z
2023-04-13T00:00:00.000
{ "year": 2023, "sha1": "cd2c2a12d964e34c1739213d5a3cc32f669f10b6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2023.1156372/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "215e1aadf1a14248a30e58da9c890af0b451c1bf", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
243832676
pes2o/s2orc
v3-fos-license
BBC-Oxford British Sign Language Dataset In this work, we introduce the BBC-Oxford British Sign Language (BOBSL) dataset, a large-scale video collection of British Sign Language (BSL). BOBSL is an extended and publicly released dataset based on the BSL-1K dataset introduced in previous work. We describe the motivation for the dataset, together with statistics and available annotations. We conduct experiments to provide baselines for the tasks of sign recognition, sign language alignment, and sign language translation. Finally, we describe several strengths and limitations of the data from the perspectives of machine learning and linguistics, note sources of bias present in the dataset, and discuss potential applications of BOBSL in the context of sign language technology. The dataset is available at https://www.robots.ox.ac.uk/~vgg/data/bobsl/. INTRODUCTION Sign languages are visual languages that have evolved in deaf communities. They possess complex grammatical structures and lexicons [2], akin to the complexity of spoken languages. In this paper, we introduce a large-scale dataset of British Sign Language (BSL), the sign language of the British deaf community. To date, a central challenge in conducting sign language technology research has been a lack of large-scale public datasets for training and evaluating computational models [3]. The goal of the BBC-Oxford British Sign Language (BOBSL) dataset is to provide a collection of BSL videos to support research on tasks such as sign recognition, sign language alignment and sign language translation. The rest of the paper is structured as follows: in Sec. 2 we provide an overview of the BOBSL dataset; in Sec. 3, we describe the collection and annotation (both automatic and manual) of the dataset, and also the evaluation partitions. Next, in Sec. 4 we give implementation details and descriptions of models for baselines on the tasks of recognition, alignment and translation. In Sec. 5 we present our evaluation protocols and baseline results for the dataset. In Sec. 6 we discuss the opportunities and limitations of the data from the perspectives of sign linguistics and downstream applications and note several sources of bias present in the data before concluding in Sec. 7. BOBSL DATASET OVERVIEW In this section, we first give an overview of BOBSL content and statistics (Sec. 2.1). Next, we compare BOBSL to existing sign language datasets (Sec. 2.2), outline data usage terms (Sec. 2.3) and describe its relationship to the BSL-1K dataset (Sec. 2.4). Dataset content and statistics The data consists of BSL-interpreted BBC broadcast footage, along with English subtitles corresponding to the audio content, as shown in Fig. 1. The data contains 1,962 episodes, which span a total of 426 differently named TV shows. We use the term episode to refer to a single video of contiguous broadcast content, whereas a show (such as "Countryfile") refers to a collection of episodes grouped thematically by the broadcaster, whose episodes typically share significant overlap in subject matter, presenters, actors or storylines. The shows can be partitioned into five genres using BBC metadata as shown in Fig. 2, with the majority of shows being factual, i.e. documentaries. These can be further divided into 22 topics, as shown in Fig. 3. Including horror, period and medical dramas, history, nature and science documentaries, sitcoms, children's shows, and programs covering cooking, beauty, business and travel, the BOBSL data covers a wide range of topics.
Statistics of the BOBSL data are presented in Tab. 1. The 1,962 episodes have a duration of approximately 1,467 hours (i.e. 45 minutes per episode on average, with the majority of episodes lasting approximately 30 or 60 minutes, as shown in Fig. 5). The videos have a resolution of 444 × 444 pixels and a frame rate of 25 fps. There are approximately 1.2M sentences extracted from English subtitles, covering a total vocabulary size of 78K English words. BOBSL contains a total of 39 signers (interpreters). We divide the data into train, validation and test splits based on signers, to enable signer-independent evaluation, i.e. there is no signer overlap between the three splits. The distribution of programs associated to each signer, together with the split information, is illustrated in Fig. 4. We note that a few signers appear very frequently. TABLE 1: Statistics summarising the data distributed across the splits of BOBSL. Num. Signers indicates the number of signer identities within a partition, Num. Raw Subtitles denotes the number of subtitles (which do not necessarily form complete sentences) associated with the original broadcasts, while Num. Sentences indicates the number of English sentences that were parsed from these subtitles using the process described in Sec. 3.4. Text Vocabulary indicates the vocabulary across the sentences after removing punctuation, special characters, digits etc. Out-of-vocab denotes the number of words that are not present in the training split, while Singletons denotes the number of words appearing only once in the given partition. Duration indicates the duration of the episodes. Comparison to existing datasets In Tab. 2, we present a number of existing datasets used for sign language research, mainly for the tasks of sign recognition, sign spotting, continuous sign language recognition, sign language translation and sign language production. Benchmarks have been proposed for American [6], [8], [9], [16], [17], [18], German [21], [22], Swiss-German [14], [27], Flemish [27], Chinese [4], [5], [19], [20], Finnish [15], Indian [13], [25], Greek [26], Turkish [11], [12], Korean [24] and British [1], [10], [28] sign languages. These datasets can be grouped into isolated signing (where the signer performs a single sign, usually at a slow speed for clarity, starting from and ending in a neutral pose) and co-articulated signing. Co-articulated signing, or "signs in context", describes signing that exhibits variation in sign form caused by immediately preceding or following signs, or signs articulated at the same time. If we are to build robust models which can understand sign language "in the wild", we need to recognise co-articulated signs. Most datasets in Tab. 2 fall into one or more of the following categories: (i) They have a limited number of signers: for example, Devisign [4], ASLLVD [6], ISL [25], GSL [26] have 8 or fewer signers. (ii) They have a limited vocabulary of signs: for example, Purdue RVL-SLLL [16], BOSTON104 [17], INCLUDE [13], AUTSL [12], SMILE [14] only have a few hundred signs. (iii) They have a large vocabulary of signs, but only of isolated signs: for example, MSASL [8] and WLASL [9] have vocabularies of 1K and 2K signs, respectively. (iv) They are recorded in lab settings.
(v) They are limited in total duration: for example, the popular PHOENIX14T [22] dataset contains only 11 hours of content. (vi) They represent natural co-articulated signs but cover a limited domain of discourse: for example, the videos in PHOENIX14T [22] and SWISSTXT-WEATHER [27] are only from weather broadcasts. In summary, the BOBSL dataset presents several advantages: it consists of co-articulated signs as opposed to isolated signs, representing more natural signing (note that BOBSL nevertheless remains distinct from conversational signing, due to its use of interpreted content). BOBSL provides the largest source of continuous signing (1,467 hours); it covers a large domain of discourse; and it is automatically annotated for a large vocabulary of more than 2,000 signs. We note that since the annotations provided on the training and validation sets are obtained through automatic methods, they may contain some noise. Research use and potential changes BSL translation services are currently supplied to the BBC by Red Bee Media Ltd. They have indicated that they and their staff are happy for their footage to be used for research purposes. However, if the position changes, the dataset will need to be revised accordingly. Researchers should be mindful of this, and should be aware that the 'Permission to Use' form they will need to sign obligates them to delete portions (or, indeed, the whole) of the dataset in the future, if so instructed. Relationship to the BSL-1K dataset In previous work [1], we introduced the BSL-1K dataset, a collection of BSL videos that were automatically annotated with sign instances via a keyword spotting method. This collection of automatic sign instances was further expanded through other methods for sign localisation [10], [29]. A short test sequence was manually annotated for temporal sign segmentation evaluation in [30], [31]. Manual alignments of signing sequences to corresponding subtitles have also been performed on BSL-1K in more recent work [32]. However, BSL-1K remained an internal dataset. BOBSL represents a public, extended dataset based on BSL-1K, using videos drawn from the same source distribution, with no episode overlap with BSL-1K but significant overlap in signers and shows, and preserving the same signer-independent train, validation and test split identities for signers that appear in both datasets. The BOBSL dataset is larger than BSL-1K (1,467 hours vs 1,060 hours). We have automatically annotated BOBSL with sign instance timings using the same techniques as for BSL-1K and also provide alignments of signing sequences to corresponding text. Through a data-sharing agreement with the BBC, BOBSL is available for non-commercial research usage. BOBSL DATASET CONSTRUCTION In this section, we describe the construction of the BOBSL dataset. We first describe the raw source data and the pre-processing pipeline employed to prepare the data for sign language research (Sec. 3.1). Next, we describe how the data was divided into train, validation and test splits (Sec. 3.2) and the automatic methods used to annotate this data with sign instance timings (Sec. 3.3). We detail the manual annotation processes we employ (Sec. 3.5) together with our approach to subtitle sentence extraction (Sec. 3.4). Finally, we describe the BOBSL partitions for sign recognition (Sec. 3.6), as well as for translation and alignment tasks (Sec. 3.7). Dataset genesis.
This dataset has been created in partnership with the British Broadcasting Corporation (BBC), the UK's largest public service broadcaster. The UK broadcast regulator has set a threshold for the amount of accessible content broadcasters must supply. As a result, the BBC produces subtitles for 100% of its TV output, audio description for more than 20% of its output and BSL translations for more than 5% of its output. Due to the size of its weekly broadcast output and its long-term retention of this metadata, it has a comparatively large datastore of useful data for partner universities to work with. The sort of data release represented by BOBSL is a core part of BBC R&D's remit as mandated by the UK Parliament. As a result, the BBC is keen to support research into accessibility services by supplying data to partner universities and administering non-commercial testing and training data to the wider academic community. Source data and pre-processing Source data. An initial collection of TV episodes was provided by the BBC. These were broadcast between 2007 and 2020 and vary from a few minutes to 120 minutes in duration (see Fig. 5 for the distribution of episode durations). Each episode is accompanied by a corresponding set of written English subtitles, derived from the audio track of the show. The programs span a wide range of topics (history, drama, science, etc.); a detailed summary of the content included is provided in Sec. 2.1. The majority of these shows are accompanied by a BSL interpreter, overlaid on the bottom right hand corner of the screen in a fixed location. Note that sign interpreters produce a translation of the speech that appears in the subtitles, as opposed to a transcription. This means that words in the subtitles may not correspond directly to individual signs produced by the interpreters, and vice versa. The videos have a height of 576 pixels, a display aspect ratio of 16:9 and a frame rate of 25 fps. Filtering and pre-processing. First, TV programs that were known to not contain a BSL interpreter in a fixed region of the screen were removed from the collection. A small number of videos that exhibited significant data corruptions were also removed. Video pre-processing. Each video was cropped to include only the bottom-right 444 × 444 pixel region containing the BSL interpreter (see Fig. 6). We employed the automatic face detection and tracking pipeline provided by the authors of [33] to detect and track faces, with the goal of blurring those appearing in the content behind the interpreter. For this it was first necessary to determine which face tracks belong to the interpreters and exclude them. To that end we extracted pose estimates from each frame using OpenPose [34] and employed a heuristic to determine background face tracks (tracks with duration shorter than 5 seconds or that do not exhibit overlap with the estimated keypoints of the interpreter). The pixels under all the background face tracks are blurred using a Gaussian filter. Using this pipeline we blur 224,957 face tracks over 170 hours of video. Some examples are shown in Fig. 7. We observe qualitatively that the pipeline performs well for clearly visible background faces. However, we note a limitation of our approach: the automatic face detector can make mistakes (typically for cases in which the background face is very small or heavily occluded) and thus there are likely to be a small number of background faces that are not blurred.
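A minimal sketch of the cropping and background-face blurring steps is given below; the face boxes themselves would come from the detection and tracking pipeline of [33], which is not reproduced here, and the box coordinates and blur kernel size are assumptions.

import cv2
import numpy as np

def crop_interpreter(frame, size=444):
    """Crop the bottom-right size x size region containing the BSL interpreter."""
    h, w = frame.shape[:2]
    return frame[h - size:h, w - size:w]

def blur_background_faces(frame, face_boxes, ksize=51):
    """Gaussian-blur the pixels under background face boxes (x1, y1, x2, y2).
    Which boxes count as 'background' is decided upstream (tracks shorter than
    5 seconds or not overlapping the interpreter's estimated keypoints)."""
    out = frame.copy()
    for (x1, y1, x2, y2) in face_boxes:
        roi = out[y1:y2, x1:x2]
        if roi.size:
            out[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return out

# toy usage on a dummy 576 x 1024 frame with one hypothetical background face box
frame = np.zeros((576, 1024, 3), dtype=np.uint8)
interpreter_crop = crop_interpreter(frame)                 # 444 x 444 interpreter region
anonymised = blur_background_faces(frame, [(100, 50, 180, 150)])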
Subtitle pre-processing. After manual inspection, we observed that approximately one quarter of the subtitle files exhibited discrepancies in time alignment between the audio track and the subtitle timestamps. To address these cases, we applied standard methods of forced alignment using an acoustic model. After pre-processing the videos and subtitles, the audio track of each video was removed. The final result of these filtering and pre-processing steps was a collection of 1,962 videos containing BSL interpreters with corresponding audio-aligned written English subtitles that form the public dataset release. Dataset splits To support the development of signer-independent systems (in which models are evaluated on signers not seen during training), we divide the dataset into train, validation and test splits according to the estimated identity of the BSL interpreters. To determine the interpreter identity associated with each video, we employ a semi-automatic process. We first detect the face of the interpreter in a 10-second clip extracted from the temporal midpoint of the video (since the interpreter does not change over the course of a single program, a short clip suffices to perform identification and reduces computational cost relative to using the full program). This is done with a RetinaFace face detector [35] that employs a MobileNet0.25 trunk architecture [36]. The model is trained for face detection on the WIDER face benchmark [37]. Next, face embeddings are computed from each detected face bounding box with an SE-50 [38] face verification network. The face detections are then linked into tracks by minimising a cost function based on spatial overlap between face detections and similarities between face embeddings. For each track, the embeddings from the face detections are aggregated via averaging and then L2-normalised to produce a single track descriptor. Next, agglomerative clustering is used to group the track descriptors into an initial set of identity clusters (we employ the implementation provided by [39], with a distance threshold of 0.32 over the cosine similarities between descriptors). Following this, the identity clusters are checked manually, with erroneous assignments corrected by re-assigning to the correct cluster. As a result of this process, 39 identities were identified. Finally, signers are assigned to separate splits, to produce the dataset statistics given in Tab. 1. The distribution of episodes associated to each signer, together with the split information, is illustrated in Fig. 4.
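The identity clustering step above could be sketched as follows. The paper relies on an existing implementation [39], so the average-linkage choice and the treatment of the 0.32 threshold as a cosine-distance cut are assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_signer_tracks(track_embeddings, threshold=0.32):
    """Group L2-normalised face-track descriptors into identity clusters by
    agglomerative clustering on cosine distance, cut at the given threshold."""
    X = track_embeddings / np.linalg.norm(track_embeddings, axis=1, keepdims=True)
    dists = pdist(X, metric="cosine")              # 1 - cosine similarity, condensed form
    Z = linkage(dists, method="average")           # linkage choice is an assumption
    return fcluster(Z, t=threshold, criterion="distance")  # cluster label per track

# toy usage: 6 track descriptors of dimension 128
embeddings = np.random.default_rng(0).standard_normal((6, 128))
labels = cluster_signer_tracks(embeddings)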
Mouthings have multiple roles: they can be used to specify the meaning of a sign in the case of polysemy and to disambiguate manual homonyms [40]. Mouthings appear frequently in BSL - accompanying over 2/3 of signs in one study [41]. From an annotation perspective, mouthings provide a cue for sign spotting, the task of localising a given sign in a signing sequence. In this work, we employ the method proposed in [1] to spot signs. This method works in several stages: First, given a target "keyword" for which we wish to spot signs, we find all occurrences of the keyword in the subtitles. Next, for each subtitle containing the keyword, we pad its temporal extent by several seconds to create a search window in which the sign has a high probability of occurring. Through preliminary experiments, we found that padding by 10 seconds on both sides worked well. Finally, we employ a keyword spotting model to find whether and when the mouthing occurs within the constructed search window. We show examples of automatically retrieved instances of four different signs on each row (magic, special, quality, wonderful) obtained through the pipeline of keyword spotting with mouthings (see Fig. 9). The model outputs a confidence score for each frame; we record all localisations above a 0.5 threshold as our automatic mouthing annotations (after a non-maximum suppression stage as in [1]). Fig. 14 provides statistics for the number of annotations on the training set. To derive the list of candidate keywords for spotting, we first apply text normalisation to the subtitles using the method of [42]. This normalisation converts dates and numbers to their written form, e.g. 13 becomes "thirteen". From BOBSL subtitle words, we obtain 79K and 72K unique words before and after text normalisation, respectively (we use the original subtitles, rather than sentences, for spotting - the vocabulary differs slightly due to the filtering involved in sentence extraction). We further filter the list of keywords to those that appear in the CMU phonetic dictionary [43] with at least four phonemes (the model is trained on words with at least 6 phonemes, but we found 4 to work reasonably well). This filtering results in a final list of 43K search keywords. The keyword spotting model used is an improved variant of the model of Stafylakis et al. [44] from [45] (described in their paper as the "P2G [44] baseline"). The model is trained on "talking heads" datasets (LRW [46] and LRS2 [47]) of BBC TV broadcasts. While the model has never been trained on signers, we observe that it generalises well to a large set of signer mouthings. As observed in [1], the peak in the posterior probability assigned to the presence of a keyword typically corresponds to (approximately) the end of the mouthing/sign. Qualitative examples of automatically retrieved signs through this method are shown in Fig. 9. (2) Sign spotting with dictionaries. Following the method proposed in [10], given a video of an isolated sign from a dictionary, we identify whether and where it has been signed in a continuous, co-articulated sign language video. To this end, we learn a joint embedding space where we can measure similarity between isolated dictionary videos and continuous signing. This method leverages the weakly-aligned subtitles by querying words in the subtitle within a ±4 sec padded neighbourhood around the subtitle timestamps (note that in practice we use sentences instead of subtitles, see Sec. 3.4). In particular, we query words and phrases from the BSLDict [10] vocabulary if they occur in the sentences.
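Returning to the mouthing-based spotting described above, a rough sketch of the search-window construction, 0.5 thresholding and non-maximum suppression; the `kws_model` callable, the subtitle format and the NMS window size are assumptions, not the authors' code.

```python
# Sketch: for each subtitle containing the keyword, pad its extent by 10 s on each side,
# score every frame with a keyword spotter, then keep peaks above 0.5 after greedy NMS.
import numpy as np

PAD_SEC, FPS, CONF_THRESH = 10.0, 25, 0.5

def spot_keyword(keyword, subtitles, video_frames, kws_model, nms_window=12):
    annotations = []
    for sub in subtitles:                      # sub: dict with 'text', 'start', 'end' (seconds)
        if keyword not in sub["text"].lower().split():
            continue
        s = max(0, int((sub["start"] - PAD_SEC) * FPS))
        e = min(len(video_frames), int((sub["end"] + PAD_SEC) * FPS))
        scores = np.asarray(kws_model(video_frames[s:e], keyword), dtype=float).copy()
        while scores.size and scores.max() > CONF_THRESH:
            peak = int(scores.argmax())
            annotations.append((keyword, s + peak, float(scores[peak])))
            lo, hi = max(0, peak - nms_window), min(scores.size, peak + nms_window)
            scores[lo:hi] = -np.inf            # suppress neighbouring frames of this peak
    return annotations
```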
(Fig. 10 caption: BOBSL automatic sign annotations through dictionaries. We illustrate the localisation procedure for comparing dictionary samples for a given keyword with continuous signing.) In order to determine whether a query from the dictionary occurs in the sentence, we check the sentence in its original and lemmatised forms and the query in its original and text-normalised forms. If a match is found, we query the dictionary video(s) corresponding to the word/phrase. In order to obtain the embedding space, we follow a slightly different procedure from [10] for simplicity. We only perform the first stage of [10], which is to train an I3D classification model jointly on continuous annotations and BSLDict samples. We do not further train the MLP network on top of the I3D features with a contrastive loss. Instead, we initialise the weights of the I3D with a stronger recognition model provided by [48], trained for 5K categories from the mouthing and dictionary annotations of BSL-1K [1]. Unlike [10], we do not re-initialise the batch normalisation layers. For joint finetuning, we use the mouthing (threshold=0.8) and dictionary (threshold=0.8) spottings from BSL-1K, as well as BSLDict videos filtered to the 1K vocabulary of [1]. We found the features from this model to be sufficiently strong to provide automatic sign annotations on BOBSL. We obtain a single embedding for the dictionary sample by averaging features computed with multiple frame rates as in [10]. We obtain a sequence of embeddings for the BOBSL search window by applying a sliding window with a stride of 4 frames. We compute the similarity between the continuous signing search window and each of the dictionary variants for a given word/phrase: we record the location where the similarity is maximised for all variants and choose the best match as the one with the highest similarity score. We record all localisations above a 0.7 threshold as our automatic dictionary annotations. Fig. 14 provides statistics for the number of annotations on the training set. We refer to Fig. 10 for an illustration of the similarity plots across variants. (3) Sign localisation with Transformer attention. In contrast to the two previous automatic annotation methods, the approach of [29] for localising signs differs considerably in that it is context-aware. We train a Transformer model [49] to predict, given an input stream of continuous signing, the sequence of corresponding written tokens. We then perform sign localisation by using the trained attention mechanism of the Transformer to align written English tokens to signs. More specifically, once the model is trained, new sign instances are localised for tokens that have been correctly predicted by determining the index at which the corresponding encoder-decoder attention is maximised. We observe that even low values for the maximum attention score provide good localisations; therefore, we do not apply any threshold for attention spottings. Fig. 14 provides statistics for the number of annotations on the training set. In practice, we train the Transformer on a subset of video-text pairs which contain at least one automatic sign annotation (from the two previously described methods) within the sentence timestamps. In such a way, we ensure there is an approximate alignment between the source signing video and target written token sequence.
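Stepping back to the dictionary-spotting stage above, a minimal sketch of the sliding-window similarity search (stride 4, best variant, 0.7 threshold); feature extraction is abstracted away and the unit-norm embedding assumption is ours.

```python
# Sketch: compare the embeddings of a continuous-signing search window against each
# dictionary variant of the query, keep the best-matching location if it clears 0.7.
import numpy as np

STRIDE, SIM_THRESH = 4, 0.7

def spot_from_dictionary(window_feats, variant_embs):
    """window_feats: (T, D) L2-normalised embeddings of the search window at stride 4;
    variant_embs: list of (D,) L2-normalised embeddings, one per dictionary variant."""
    best = None
    for emb in variant_embs:
        sims = window_feats @ emb              # cosine similarity, given unit-norm inputs
        idx = int(sims.argmax())
        if best is None or sims[idx] > best[1]:
            best = (idx, float(sims[idx]))
    if best is not None and best[1] > SIM_THRESH:
        frame_offset = best[0] * STRIDE        # map window index back to a frame offset
        return frame_offset, best[1]
    return None
```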
The encoder input video is represented by a 1024dimensional feature sequence, extracted from an I3D model provided by [48] which is trained on sign classification with BSLK-1K [1] for a 5K vocabulary of signs (obtained from mouthing and dictionary spottings) applied with a sliding window of stride 4. For building the target written sequences, we (1) lemmatise the words in every sentence assuming inflected versions of the same word map to the same sign, (2) filter to a vocabulary of 18K lemmas obtained by combining the automatic annotations from mouthing (threshold=0.7) and dictionary (threshold=0.8) spottings, and (3) remove stop words. Recent work has also demonstrated the effectiveness of the Transformer for sign spotting with dictionaries [50]-we defer an investigation of this approach to future work. Sentence extraction The subtitles associated with the BOBSL episodes are approximately aligned to the audio track of the corresponding content but do not necessarily fall into well-formed sentences. To support research into tasks such as sign language translation (which often operates at the sentence-level [23], [27]) we extract well-formed sentences from the subtitles. This is done semi-automatically by splitting subtitles on sentence boundary punctuation and employing a combination of heuristics and manual inspection to resolve ambiguous cases. To preserve an approximate time alignment between the sentences and the signing, when multiple sentences fall within a single subtitle, we employ a further simple heuristic: each sentence is assigned a duration in proportion to its written length (in characters) as a fraction of the original subtitle. Finally, we remove sentences that correspond to descriptions of background music lyrics (these are typically unsigned) and sentences that are known to fall outside the feasible signing period (e.g. those that occur after the show credits). The result of this sentence extraction process is a collection of "sentence-based" subtitles (in which each subtitle corresponds to a single sentence), summarised in Tab. 1. In comparison to the original subtitles (which are relatively uniform in duration) the distribution of sentence lengths exhibits broader variance (this effect is visualised in Fig. 11). Note that since the sentence extraction process makes use of punctuation in the subtitles, some long subtitles may be due to missing punctuation: a manual inspection of random samples determined that this occurs relatively rarely. Manual annotation Sign verification. Deaf annotators proficient in BSL used a variant of the VIA tool that was adapted for whole-sign verification [51] (see Fig. 12), similarly to the process used by [1]. To enable efficient collection, labels were collected for temporal proposals for signs in the test split by verifying/discarding automatic spottings that were assigned high confidence scores by the automatic sign spotting techniques (above 0.9 confidence for mouthing annotations, above 0.8 for the dictionary annotations). When viewing a temporal proposal, the video could be played at different speeds (and replayed if needed). For each proposed spotting location, the annotator is able to indicate: (i) whether the sign is correct, incorrect, or that they are unsure, (ii) whether fingerspelling (using the manual alphabet to spell English words) was used, (iii) further comments, including the meaning of the sign (if the proposed meaning was incorrect), and any other observations. 
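For the sentence extraction step above, a small sketch of the duration-proportional splitting heuristic; the subtitle/sentence dictionary format is an assumption.

```python
# Sketch: when one subtitle holds several sentences, each sentence inherits a share of
# the subtitle's duration proportional to its length in characters.
def split_subtitle(sentences, start, end):
    """sentences: sentence strings extracted from one subtitle spanning [start, end] seconds."""
    total_chars = sum(len(s) for s in sentences)
    out, t = [], start
    for s in sentences:
        dur = (end - start) * len(s) / max(total_chars, 1)
        out.append({"text": s, "start": t, "end": t + dur})
        t += dur
    return out

# Example: a 6-second subtitle holding one short and one long sentence
print(split_subtitle(["Hello.", "This is a much longer second sentence."], 0.0, 6.0))
```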
For quality control, a small random sample of the annotations was further verified by a deaf native signer of BSL. Of the mouthing spottings within the 2,281 vocabulary (this vocabulary is described in more detail in Sec. 3.6) with a confidence of at least 0.9 that were annotated, 63.6% were marked correct, yielding 9,263 verified signs spanning 1,653 classes. The latter figure includes predictions that were corrected by annotators, as well as a small number of verified low-confidence signs that were annotated during early development. (Fig. 13 caption: Sentence alignment tool. A screenshot of the VIA sentence alignment tool [51]. The annotator uses a "draggable" visualisation of the temporal extent of the sentences at the bottom of the screen to perform alignment, with the ability to pause and replay segments.) Of the dictionary spottings within the 2,281 vocabulary with a confidence of at least 0.8 that were annotated, 75.8% were marked correct, yielding 15,782 verified signs (spanning 765 classes) after including corrections. These verification statistics also exclude a small number of signs that were tagged by annotators as "inappropriate" in modern BSL signing. Sentence alignment. To support research into the tasks of sign language alignment and translation, we manually align the extracted sentences (which are initially coarsely aligned with the audio content) with the signing content for a subset of the episodes. The audio-aligned sentences differ from the signing-aligned subtitles in both start time and duration, as shown in Fig. 16. To perform the alignment, we used an adapted version of the VIA tool, shown in Fig. 13. The annotator is presented with a list of sentences for which they are able to adjust timings by clicking and dragging elements on a webpage (this methodology is similar to the alignment tool described in the concurrent approach of [27]). We make these sentence-level alignment annotations available. BOBSL partitions for sign recognition evaluations In order to evaluate the performance of sign recognition models, we provide (i) large automatic training and validation sets of sign instances as well as (ii) a large human-verified test set for benchmarking. Recognition vocabulary. The construction of an appropriate vocabulary set for sign recognition is a challenging linguistic task for several reasons. First, BSL grammar differs significantly from English grammar (for instance, while English typically adds an "s" suffix to indicate plurality, BSL has several ways of marking a noun plural [2], e.g. through repetition, quantifier signs and whole-sign modification). More broadly, there is a complex many-to-many mapping between English words and BSL signs, and there are many signs that correspond to sequences of English words. Additionally, in BSL, fingerspelling can be used to express English words that have no sign equivalents (for example, proper names). In the absence of standard writing systems for sign languages [52], a number of different gloss systems have been developed and used for corpus linguistics [28], [52]. A central challenge in adopting such approaches is scalability: providing fine-grained, consistent linguistic sign glosses requires a highly skilled team of annotators and vast labour investments (for example, the BSL Corpus [28] is an extensive ongoing research effort spanning more than a decade - even so, it has only been practical to label a relatively small fraction of the total signing data).
We adopt a simple approach that trades linguistic annotation fidelity for ease of automation, and consequently, scale. For BOBSL, we consider an expanded vocabulary beyond the 1,064-word vocabulary studied in [1]. Concretely, to select a vocabulary set for a sign language recognition benchmark, we first lemmatise each word in the subtitles (this can be viewed as an approximation to mapping English words to their corresponding glosses). Next, we filter the candidate words to include only those that appear in the training set amongst the mouthing annotations with a confidence of 0.8 on at least 5 occasions. We then remove a small number of words for which we have found (through human verification) that the mouthings consistently failed to correspond to signs. Specifically, we removed words for which: (i) we had at least 15 spottings, and (ii) annotators marked at least 95% of the spottings as false positives (i.e. the sign did not correspond to the lemmatised term). Filtered words included terms like therefore, just and if. Finally, we removed terms from the vocabulary that had no verified instances and did not occur in the vocabulary of BSLDict [53]. Unlike the conservative vocabulary of BSL-1K [1], we did not filter against SignBank 3 , since the lexicon of SignBank (2,016 words) is more restricted than BSLDict (9,283 words & phrases). The result of this filtering process was a set of 2,281 words (which includes a number of proper nouns) that we take to constitute the sign language recognition vocabulary. We expect this vocabulary to evolve and expand over time with better sign localisation methods; it may also become possible to merge and split some categories. Automatic training and validation sets of sign instances. The mouthing sign spottings likewise yield annotations on the training and validation sets. With a similarity score threshold of 0.7, the dictionary sign spottings yielded 5M training and 126K validation annotations. The attention sign spottings (which do not require a threshold) yielded 434K training and 9K validation annotations. We note that there exists a trade-off between the number of annotations and the level of noise. We therefore plot the distribution of each annotation type according to their associated scores in Fig. 14. We also show the portion of the data belonging to our vocabulary of 2,281 signs. Human-verified test set of sign instances. The human-verified sign annotations are obtained through the process described in Sec. 3.5. Statistics for the test set from verified spottings are shown in Tab. 3. SIGN-TEST has a total of 25,045 verified sign instances (9,263 from mouthings, 15,782 from dictionary spottings) which span 1,849 elements of the 2,281 vocabulary. The distribution of annotations exhibits a power law, visualised in Fig. 15. BOBSL partitions for sentence alignment and translation evaluations In order to develop methods for sign language sentence alignment and translation, we need aligned continuous signing segments and corresponding English sentences. (Fig. 15 caption: Distribution of verified signs for the recognition test set. As with real-world usage, the frequencies of annotated signs across the test set follow a power law distribution (note that the y-axis uses a log scale). Here, class labels are sorted by prevalence along the x-axis for ease of visualisation.)
We propose to make use of two levels of alignment: (i) audio-aligned video-sentence alignments that have been filtered using automatic spotting annotations to select sentences that are likely to be reasonably well aligned to the signing (these are available in large numbers); (ii) manual video-sentence alignments (these are available in smaller numbers). Spotting-filtered signing video-sentence alignments. These correspond to video segments for which an automatic sign instance annotation falls within the corresponding sentence timestamps (we restrict ourselves to annotations obtained from mouthings and dictionaries with confidence over 0.8 and use all annotations obtained through attention) and the word matching the sign occurs in the sentence. This indicates a probable approximate alignment between the signing video and the corresponding sentence. For the sentence timestamps, we use the audio-aligned timestamps shifted by +2.7 seconds - this is the average shift calculated between audio-aligned and signing-aligned sentences in our manual training set (SENT-TRAIN H ) described next. We define these splits as SENT-TRAIN SF , SENT-VAL SF , SENT-TEST SF . These spotting-filtered alignments enable large-scale training over multiple domains of discourse. Manual signing video-sentence alignments. These manual sentence-level alignments are obtained through the process described in Sec. 3.5, with statistics shown in Tab. 4. (Tab. 4 caption: Aligned sentence-level subtitles. Statistics summarising the BOBSL data for which manually aligned sentence-level subtitles (indicated with an H subscript) and automatically "spotting-filtered" sentence-level subtitles (indicated with an SF subscript) are available. See Sec. 3.5 for a description of the annotation process and Sec. 3.7 for details on how these splits were constructed. † Note that SENT-TEST consists of human-aligned sentences. * Out-of-vocabulary (O-O-V).) There is a total of 32K manually aligned sentences for a total duration of 46 hours. The training set episodes are chosen to maximise the number of signers. Given access only to the manual training set, the number of out-of-vocabulary (OOV) words is 1,127 words for the validation set and 8,030 words for the test set. The distribution of show topics for the different splits is shown in Fig. 17, with science & nature representing the largest proportion for all dataset splits (see Fig. 3). We define these splits as SENT-TRAIN H , SENT-VAL H , SENT-TEST. MODELS AND IMPLEMENTATION DETAILS In this section, we describe the models employed for sign recognition (Sec. 4.1), sign language sentence alignment (Sec. 4.2) and sign language translation (Sec. 4.3) baselines for the BOBSL dataset, as well as corresponding optimisation and implementation details. Sign recognition I3D architecture. We report sign recognition experiments using a 3D convolutional model, I3D [54], which has been observed to perform well for sign recognition [8], [9]. We report experiments for both RGB and optical flow input streams [55]. Flow is estimated with RAFT [56]. We initialise both models using Kinetics pretraining [54] for each modality. We input 16 consecutive video frames at 25 fps, corresponding to 0.64 seconds (it has been noted in prior work [15], [57], [58] that co-articulated (i.e. "signs in context") rather than 'isolated' signs typically exhibit a duration of up to 13 frames, though this can vary significantly).
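A minimal sketch of the spotting-filtered sentence selection described above (+2.7 s shift, confidence 0.8 for mouthing and dictionary spottings, all attention spottings); the data-structure layout and the word-matching step are simplifying assumptions.

```python
# Sketch: keep a sentence if an automatic spotting of sufficient confidence falls inside
# its shifted timestamps and the spotted word occurs in the sentence text.
SHIFT_SEC = 2.7

def spotting_filtered(sentences, spottings):
    """sentences: dicts with 'text', 'start', 'end' (audio-aligned, seconds);
    spottings: dicts with 'word', 'time', 'source' in {'mouthing','dict','attention'}, 'conf'."""
    kept = []
    for sent in sentences:
        lo, hi = sent["start"] + SHIFT_SEC, sent["end"] + SHIFT_SEC
        words = set(sent["text"].lower().split())
        for sp in spottings:
            confident = sp["source"] == "attention" or sp["conf"] > 0.8
            if confident and lo <= sp["time"] <= hi and sp["word"] in words:
                kept.append(sent)
                break
    return kept
```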
Video frames are processed at an initial spatial resolution of 256 × 256 pixels: this is then cropped to a 224 × 224 pixel square region and passed to the model (during evaluation, this crop is taken from the centre of the frame; during training, the frame is first resized isotropically along both spatial dimensions by a scaling factor uniformly sampled between 0.875 and 1 and then centre-cropped). Note, however, the flow is first estimated on the original 444×444 resolution (which we found produced qualitatively better results) before being down-sampled to 256 pixels. The input to the model is therefore of size C × 16 × 224 × 224, where C = 3 for RGB and C = 2 for flow. During inference, the model produces class posterior probabilities over a sliding window that spans the given temporal interval of interest and the scores are averaged to produce the final class predictions. Pose→Sign. We also experiment with a lightweight pose-based model, following the approach described in [1]. In particular, we extract pose estimates from each frame using OpenPose [34], yielding 70 facial, 15 upper-body and 21 per hand keypoints, (we defer exploration of more powerful sequence-level 3D poseestimation techniques such as [59] to future work). For the vast majority of frames, only one person is detected (the BSL interpreter). In the rare cases in which additional people are visible (due to the content appearing behind the interpreter), we select the pose for which keypoints have been estimated with the greatest confidence to ensure that we process at most one signer per frame. The keypoints (consisting of x, y coordinates and their associated confidences) from 16 consecutive frames are stacked to form a 3 × 16 × P pose image (where P denotes the total keypoint count per frame) that is ingested by a 2D ResNet-18 [60] to perform sign recognition. Implementation details. We train on the SIGN-TRAIN M,D sign annotations that are above 0.8 confidence, resulting in 426K training samples from the 2,281 vocabulary. For each sign annotation obtained from mouthing, the model randomly samples a sequence of 16 contiguous frames from a window covering 15 frames before the time associated with the annotation and 4 frames after, i.e., [−15, 4] around the mouthing peak. For dictionary annotations, we use a window [−3, 22] around the similarity peak. We set these intervals based on preliminary experiments. Optimisation details. For sign recognition experiments on BOBSL, all models are trained for 25 epochs using SGD with momentum (with a momentum value of 0.9), with a minibatch of size 4. An initial learning rate of 0.01 is decayed by a factor of 10 after 20 epochs. Scale and horizontal flip augmentations are applied on the input video during training for all input modalities. Colour augmentation is additionally during training on RGB input frames. Sign language sentence alignment SAT architecture. We use the Subtitle Aligner Transformer (SAT) model from [32] to temporally locate a text query corresponding to a sentence in a window of continuous signing. The encoder takes as input BERT token embeddings of the text query we wish to align. The decoder takes as input a sequence of video features from a continuous sign language video segment extracted from an I3D model trained with SIGN-TRAIN M,D on sign classification (described in Sec. 4.1) applied with a sliding window of stride 4. The decoder additionally takes as input a prior alignment: the shifted temporal boundaries of the audio-aligned text, i.e. 
+2.7 seconds where 2.7 is the average lag between the audio-aligned and annotated signing-aligned sentences in SENT-TRAIN H . Using these inputs, the model outputs a vector of values between 0 and 1 of length equal to the length of video features. The temporal boundaries of sentences across the entire signing video under nonoverlapping constraints is determined by maximising the sum of these output scores, using the dynamic time warping algorithm [61]. Implementation details. We follow the procedure from [32], but pre-train the model using SENT-TRAIN SF in addition to SIGN-TRAIN M,D . We firstly pretrain SAT on word-level boundaries from SIGN-TRAIN M,D with confidence scores above 0.8, where we predict a 1-second interval centred at the automatic mouthing or dictionary sign instance annotation in a randomly chosen 20second search window around the annotation. We do not input a prior alignment to the decoder. Secondly, we finetune this model using the sentence-level boundaries from SENT-TRAIN SF , where we use random shifts of these sentence-level boundaries of up to 3 seconds as a prior alignment. Thirdly, we further finetune the model on sentence-level boundaries from SENT-TRAIN H . We use 2.7-second shifted audio-aligned subtitles as a prior alignment, with additional random shifts of up to 2 seconds during training for data augmentation. When training on sentence-boundaries, we randomly select a search window of 20 seconds around the location of the prior alignment and filter to sentences longer than 0.5 seconds. We also randomly shuffle the words in 50% of the sentences and drop 15% of words during training as a data augmentation step. Optimisation details. We use the Adam optimiser with a batch size of 64. We train with a learning rate of 10 −5 at the wordpretraining stage, 0.5 × 10 −5 at finetuning with sentence-level boundaries from SENT-TRAIN SF and 10 −6 at finetuning with sentence-level boundaries from SENT-TRAIN H . At the word pretraining stage, the model is trained over 22 epochs. During the full-sentence finetuning on SENT-TRAIN SF and SENT-TRAIN H , the model is trained over 44 and 143 epochs respectively. Sign language translation Transformer architecture. We use a standard Transformer [49] encoder-decoder architecture that has been used in state-of-the-art work on sign language translation [62] and signing video to token sequence prediction [29]. Both the encoder and decoder consist of 2 attention layers, with 2 heads for each attention layer. The encoder input video consists of 1024-dimensional feature sequence extracted from an I3D model trained with SIGN-TRAIN M,D on sign classification for a vocabulary of 2,281 signs (described in Sec. 4.1) applied with a sliding window of stride 4. Implementation details. We train the Transformer architecture with SENT-TRAIN SF . During training, the ground truth written English sequences are constructed by filtering to a vocabulary of 9,163 words, obtained by selecting words which occur at least 50 times in the training split subtitles. We note that we do not perform any lemmatising or stemming and we do not remove stop words (as opposed to [29]). We subsequently filter to sentences with less than 30 words, giving us a total of 274K training samples (from 295K original samples in SENT-TRAIN SF ). Optimisation details. We use the AdamW optimiser with a batch size of 64. We train for 70 epochs, with an initial learning rate of 10 −10 reduced by a factor of 2 at 49th and 59th epochs. 
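A small sketch of the target-side preprocessing used for translation training as described above (words occurring at least 50 times, sentences shorter than 30 words); whether out-of-vocabulary words are dropped or mapped to a special token is not specified in the text, so dropping them here is an assumption.

```python
# Sketch: build the training vocabulary from word counts, then filter target sentences.
from collections import Counter

def build_vocab(train_sentences, min_count=50):
    counts = Counter(w for s in train_sentences for w in s.lower().split())
    return {w for w, c in counts.items() if c >= min_count}

def filter_targets(sentences, vocab, max_len=30):
    out = []
    for s in sentences:
        words = [w for w in s.lower().split() if w in vocab]
        if 0 < len(words) < max_len:          # keep non-empty sequences shorter than 30 words
            out.append(" ".join(words))
    return out
```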
EXPERIMENTS In this section, we provide baselines for the tasks of sign language recognition (Sec. 5.1), sentence alignment (Sec. 5.2) and translation (Sec. 5.3). Sign recognition Evaluation protocol. We evaluate on SIGN-TEST and report both top-1 and top-5 classification accuracy to better account for the extent of sign ambiguities that can be solved in context. We compute per-instance accuracy averaged over all test instances. We also measure per-class accuracy where we average over the sign categories present in the test set. The latter metric is helpful due to the unbalanced nature of the dataset. Baselines. We present results for three sign recognition baseline methods on SIGN-TEST: the simple Pose→Sign model, together with I3D ingesting either RGB or optical flow inputs. We report the results in Tab. 5. We observe that of the three methods, RGB-I3D performs best. Nevertheless, there is considerable room for improvement in performance, especially for the per-class accuracy, underlining the challenging nature of the benchmark. Sign language sentence alignment Evaluation protocol. We evaluate on SENT-TEST and measure frame accuracy and F1-score as in [32]. For the F1-score, hits and misses of sentence alignment of sign language video are counted under three temporal overlap thresholds (0.1, 0.25, 0.5) between the predicted and ground truth signing-aligned sentences. For SAT, we select a search window of length 20 seconds centred around the shifted sentence location S + audio . Baselines. We report the performance of baseline sign language alignment methods in Tab. 6 on SENT-TEST: (i) the original audio-aligned subtitles (S audio ), (ii) the shifted (by +2.7 seconds) audio-aligned subtitles (S + audio ) and (iii) SAT model [32]. We observe that SAT performs best. Results differ from those reported in [32], as we use sentences rather than subtitle texts. Moreover, we pretrain using word-level boundaries from SIGN-TRAIN M,D and finetune the model on sentence-level boundaries from SENT-TRAIN SF and SENT-TRAIN H , rather than training only on BSL-1K and BSL-1K aligned and evaluating on the subtitle version of SENT-TEST. Sign language translation Evaluation protocol. We evaluate on SENT-TEST (without any vocabulary filtering of the ground truth sentences as in training) and measure recall of the model's predictions for each word by computing whether a word is correctly predicted, averaging over the total number of words in all sequences. We also measure standard translation metrics such as BLEU-1, BLEU-4 and ROUGE. Baseline. We report the performance of our baseline sign language translation method on SENT-TEST in Tab. 7 and provide qualitative examples in Fig. 18. These results highlight the challenges of achieving large-vocabulary sign language translation from videos to spoken language. Given the significant room for improvement, we hope this baseline underscores the need for future sign language translation research in large-vocabulary scenarios. OPPORTUNITIES AND LIMITATIONS OF THE DATA In this section we discuss some of the opportunities and limitations of the data from several perspectives: sign linguistics (Sec. 6.1), applications (Sec. 6.2), annotator observations (Sec. 6.3) and dataset bias (Sec. 6.4). A sign linguistics perspective The availability of this dataset represents a positive advance for enabling studies from a linguistics perspective. 
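A minimal sketch of the two recognition metrics defined in the evaluation protocol above: per-instance top-k accuracy averages over test instances, while per-class accuracy averages over the classes present in the test set, which is more informative for an unbalanced label distribution.

```python
import numpy as np

def topk_hits(scores, labels, k):
    topk = np.argsort(-scores, axis=1)[:, :k]         # (N, k) highest-scoring classes
    return (topk == labels[:, None]).any(axis=1)      # (N,) boolean hit per instance

def per_instance_accuracy(scores, labels, k=1):
    return topk_hits(scores, labels, k).mean()

def per_class_accuracy(scores, labels, k=1):
    hits = topk_hits(scores, labels, k)
    classes = np.unique(labels)
    return np.mean([hits[labels == c].mean() for c in classes])
```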
One challenge with existing technologically-focused research on sign languages is that it has made use of small databases, with few signers, limited content and limited naturalness. The present dataset is large-scale, with a broad range of content, and produced by signers of recognised high levels of proficiency. Nevertheless, there are limitations that should be recognised. First among these is that although this is a relatively large dataset, it includes only 39 signers, all using the same formal linguistic register, and-because the signing is in the context of broadcast television-little of the well-documented regional lexical variation in BSL [63] is apparent. Secondly, all of the material is translated from English. There is research evidence of systematic differences between interpreted and non-interpreted language [64]. with evidence that differences in forms of language are reduced in interpreted texts. Finally, as an additional observation, we note that there is some evidence of differences between the output of hearing and deaf interpreters [65], which may manifest in the BOBSL data. Applications perspective An important consideration when undertaking research in this area is how useful/practical applications and outcomes can be produced for deaf communities. It should not be assumed that the views of hearing researchers and deaf community members are fully aligned. Consequently, to meet this objective, the involvement of deaf researchers and perspectives play a critical role in defining target applications and outcomes. Here we note two example applications that have been highlighted as being of particular interest to deaf communities: enabling indexing and efficient searchability of videos, and providing sign-reading functionality comparable to voice-control for interaction with various devices through applications like Siri and Alexa [3]. For the latter, note that communication with virtual assistants through purely text-based interfaces have significant practical limitations [66], and even in cases when voices of DHH (Deaf and Hard of Hearing) individuals are identified as highly understandable by professional speech pathologists and native hearing individuals, modern automatic speech recognition systems struggle [67]. Prior work has shown that DHH ASL signers preferred commands that were ASL-based over generic gestures for virtual assistant interaction [68]. By providing large-scale training data for computer vision models, there is also an opportunity to improve automatic sign recognition to support a signing interface to virtual assistants in BSL, as well as to improve further applications such as search interfaces for sign language dictionaries, for which retrieval quality correlates strongly with user satisfaction [69]. Finally, we note that while the development of improved sign language technology has potential for positive impact, it is valuable to be aware of historical context: previous developments in sign language technology have struggled to deliver practical value [3], [70]. Sign language processing remains highly challenging, and there remain significant research challenges to achieving robust machine understanding of signing content [71]. Observations from the annotation process During the process of constructing the dataset, several observations arose from the annotation process that provide useful additional context for working with BOBSL. 
First, it was highlighted that it is frequently the case that not all words present in the subtitles are captured by the signing of the BSL interpreter. Instances when this occurs are tagged and provided as part of the manually aligned sentence annotations to support further analysis. Second, it was noted that a small number of signs are used that would no longer be considered appropriate in modern BSL. These signs have been identified in the manually verified spottings of the test set, and are excluded from evaluation. However, we note that there are likely to be other occurrences of such signs in the rest of the data. We highlight this property to researchers working with the dataset, with particular relevance for research that uses the data to train sign language production models. Data bias While there are several promising research opportunities associated with BOBSL, it is important to also recognise the limitations of the dataset. Here we note factors that may have implications for the generalisation of models trained on this data. First, the data was gathered from TV broadcast footage: consequently, the content of the signing reflects the content of TV shows, rather than spontaneous, conversational signing. A second consequence is that the distribution of interpreters follows that of the original broadcasts, in which not all demographics are equally represented. A third consequence of using broadcast interpretations is that the interpreters may choose not to convey information from the audio stream that they consider to be redundant to the visual stream of the footage. Additional potential sources of bias stem from our use of automatic annotation: (1) First, the distribution of signs that were annotated by spotting mouthings skew towards signs that are more commonly associated with mouthing patterns, as well as towards interpreters who sign with more pronounced spoken components. (2) Second, by constructing benchmark test sets for sign classification through human verification of automatic sign proposals, the distribution of test set signs will exhibit higher similarity to the training set distribution than would be expected if the test set was annotated without automatic proposals. There is a trade-off here: our semi-automatic "propose and verify" pipeline has the benefit of significantly enhanced annotator efficiency (enabling the creation of much larger and more comprehensive test sets than would otherwise be possible). However, as a consequence of the bias introduced by the propose and verify pipeline, researchers and practitioners should note the gap that remains between evaluation performance on the BOBSL test sets and expected performance on real world signing. Noting these important caveats, we nevertheless hope that BOBSL forms a useful, large-scale benchmark to spur progress within the research community. CONCLUSION We introduced BOBSL, a large-scale dataset of British Sign Language. We hope that this dataset will provide a useful resource for researchers in the computer vision, natural language processing and sign linguistics communities. in preparing identity embeddings, and Abhishek Dutta for his tireless support of the VIA annotation tool development that was so essential for this project. We would also like to thank Ashish Thandavan, David Miguel Susano Pinto and Ivan Johnson for their help in supporting the dataset release, and Necati Cihan Camgöz for helpful suggestions on dataset distribution. 
This work was supported by EPSRC grant ExTol, and a Royal Society Research Professorship.
2021-11-08T02:16:22.129Z
2021-11-05T00:00:00.000
{ "year": 2021, "sha1": "3826b47d3e8a7ef99d8c757c778e7d11633df84e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3826b47d3e8a7ef99d8c757c778e7d11633df84e", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
8446912
pes2o/s2orc
v3-fos-license
SL(2;R) Duality of the Noncommutative DBI Lagrangian We study the action of the $SL(2;R)$ group on the noncommutative DBI Lagrangian. The symmetry conditions of this theory under the above group will be obtained. These conditions determine the extra U(1) gauge field. By introducing some consistent relations we observe that the noncommutative (or ordinary) DBI Lagrangian and its $SL(2;R)$ dual theory are dual of each other. Therefore, we find some $SL(2;R)$ invariant equations. In this case the noncommutativity parameter, its $T$-dual and its $SL(2;R)$ dual versions are expressed in terms of each other. Furthermore, we show that on the effective variables, $T$-duality and $SL(2;R)$ duality do not commute. We also study the effects of the $SL(2;R)$ group on the noncommutative Chern-Simons action. Introduction SL(2; R) duality generalizes strong-weak coupling duality. There is an SL(2; R) symmetry manifest in the low energy action, which is broken down to SL(2; Z) in string theory. Also there is considerable evidence in favor of this duality being an exact symmetry of the full string theory [1,2,3]. In fact, the SL(2; R) group and its subgroup SL(2; Z) act as symmetry groups of many theories [4,5,6]. Among these theories, the noncommutative theories and the Dirac-Born-Infeld (DBI) theory are more important, for example see the Refs. [6,7]. Consider the SL(2; R) symmetry of the type IIB superstring theory [1,2,3]. In the type IIB theory the R-R zero-form χ and the dilaton φ of the NS-NS sector define a complex variable λ = χ + ie −φ . Under the SL(2; R) duality this variable and also the NS-NS and R-R two-forms B µν and C µν transform as in the following In addition, the Einstein metric g (E) µν = e −φ/2 g µν remains invariant. Therefore, the string coupling constant g s = e φ and the string metric g µν transform as follows g s →g s = η 2 g s , g µν →g µν = ηg µν , η ≡ |cλ + d| . ( For slowly varying fields, the effective Lagrangian of the open string theory is the DBI Lagrangian. For a review of this theory see Ref. [7] and references therein. The equivalence of the noncommutative and ordinary DBI theories has been proven [8]. We shall concentrate on both of these theories. In section 2, we shall present an SL(2; R) invariant argument for the ordinary and noncommutative DBI Lagrangians. Therefore, for special C µν a Dp-brane with ordinary worldvolume, but modified tension will be obtained. In addition, we obtain the auxiliary U (1) gauge field strengthF µν [9] in terms of the other variables. This field with the U(1) field strength F µν form an SL(2; R) doublet. In section 3, by introducing a consistent relation between B µν andB µν , a useful rule will be obtained. That is, the DBI theory and its SL(2; R) dual theory are duals of each other. In other words, twice dualizing of the DBI theory leaves it invariant. This reflection also holds for the noncommutative DBI theory. In section In section 5, the noncommutativity parameter is related to its T -dual and its SL(2; R) dual versions. We shall see that on the effective variables, T -duality and SL(2; R) duality do not commute. In addition, the invariance of the quantity Gs gs , under the T -duality and SL(2; R) duality will be shown. In section 6, we study the Chern-Simons (CS) action. For its commutative theory, for example, see Ref. [10] and for its noncommutative version, e.g. see Ref. [11,12]. The effects of the SL(2; R) group on the noncommutative CS action will be studied. 
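For reference, the SL(2;R) action on the axio-dilaton that the garbled equation (1) above refers to is the standard Möbius transformation; the form below is a sketch in the usual conventions and is consistent with the quoted transformations of g_s and g_μν with η = |cλ + d|.

```latex
\lambda \;\longrightarrow\; \lambda' = \frac{a\lambda + b}{c\lambda + d},
\qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2;\mathbb{R}), \quad ad - bc = 1 ,
```

so that Im λ' = Im λ / |cλ + d|² reproduces g_s → η² g_s once the Einstein metric g^(E)_μν = e^{-φ/2} g_μν is held fixed, while the two-forms B_μν and C_μν rotate into each other as a doublet under the same group element.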
We observe that under twice dualization this action remains invariant. 2 Noncommutative DBI Lagrangian and its SL(2; R) duality Now we study the action of the SL(2; R) group on the noncommutative DBI Lagrangian. We consider an arbitrary Dp-brane. Consider the noncommutative DBI Lagrangian [8] where the index zero shows the cases with zero extra modulus, i.e. Φ = 0. From the definitions of the open string variables G (0)µν , G (0) s and the noncommutativity parameter θ µν 0 , in terms of the closed string variables g µν , B µν and g s (whit µ, ν = 0, 1, ..., p), one can find their SL(2; R) transformations. We also require transformation of F µν . According to the following relation [8] transformation of the noncommutative field strength F µν can be obtained from the transformations of θ µν and the ordinary field strength F µν . It has been discussed by Townsend [9] that for D-string there are two U(1) gauge fields F µν andF µν , which form an SL(2; R) doublet, related to the doublet Ref. [13]. We assume that the field strengthF µν can be applied to any Dp-brane. Therefore, F µν andF µν can be interpreted as DBI fields, but not both simultaneously. Thus, the ordinary gauge field strengths F µν andF µν transform in the same way as the fields B µν and C µν , Imposing the SL(2; R) invariance on the ordinary (noncommutative) DBI theory, givesF µν in terms of F µν (F µν and θ µν ). Commutative result Consider the case C µν = d c B µν . This means that the fieldB µν is zero. In other words, the transformed theory is not noncommutative. Therefore, the SL(2; R) transformation of the (7) is proportional to the DBI Lagrangian, i.e., This equation can be interpreted as follows. The LagrangianL describes the same Dp-brane which is described by L DBI , but with the modified tension For the D3-brane, theory is symmetric, i.e.L = L DBI , as expected. For the strong coupling of strings g s → ∞, the modified tensionT p goes to zero. For the weak coupling constant g s → 0, this tension for D-particle goes to zero, for D-string is finite and for Dp-brane with p ≥ 2 approaches to infinity. We also can writē Noncommutative result with Φ = 0 Now we find the conditions for the invariance of the noncommutative theory with zero modulus Φ. Consider the following relations between the scalars and 2-forms which are equivalent to η = 1 andB µν = B µν , respectively. These assumptions lead to the In addition, the field strength should be selfdual or anti-selfdual, i.e., F = ± F . Therefore, the noncommutative theory (3) becomes According to the equation (5), the condition on the field strength F givesF and conse-quentlyF in terms of F and θ 0 , This is a way to determine the auxiliary field strengthF µν . For the selfdual case (i.e. the upper signs) the field strengthF is proportional to F . For the anti-selfdual case (i.e. the lower signs) we have This means that, −θ −1 0 is harmonic mean between F andF . The equation (13) for the commutative case gives an anti-selfdual F , i.e.F = −F . Noncommutative result including Φ The noncommutative DBI Lagrangian with arbitrary noncommutativity parameter has the dual form where the effective parametersG,Φ andG s have been given by the equations For the equations (11) and selfdual θ, the dual Lagrangian (14) is equal to the noncommutative Lagrangian L (i.e., equation (14) without tildes) if F is selfdual or To show invariance under this condition, again use the identity det M = det M T . 
According to the equation (17) and dual form of the equation (5), the field strengthF is where the matrix ω is As expected, the equation (18) for Φ = 0 reduces to the equation (12) with plus signs. Duality of the dual theories Define the matrixΛ asΛ Therefore, we can write Also let the parameterη beη This gives That is, in some equations if we change the dual quantities with the initial quantities the resulted equations also hold. With this rule, the equations (21) and (23) directly can be obtained from the equations (1) and (2). For generalization of the above rule let the 2-form C µν be proportional to B µν as in the This leads to the relationB or equivalentlyC µν = 1−aη d−η C µν . These equations also hold under the exchange of the dual quantities with the initial quantities. In other words, we have (d−η)B µν =cC µν , B µν =ηB µν and C µν = 1−ãη d−ηC µν . According to the equations (2), (4) and (25), for the zero modulus Φ, the transformations of G (0) , θ 0 and G (0) s are as in the following On the other hand, these equations also obey from the above rule. SinceΛ ∈ SL(2; R), we conclude that the initial theory also is SL(2; R) transformed of the dual theory. Therefore, the mentioned rule can be written as Initial theory Λ −→ SL(2; R) dual theory, SL(2; R) dual theoryΛ −→ Initial theory. In other words, twice dualization leaves the theory (and related equations) invariant. Note that "initial theory" refers to the type IIB theory or DBI theory. In the next sections, we shall see that the rule (27) will be repeated. For example, it also holds for the noncommutative DBI theory, ordinary and noncommutative Chern-Simons actions. The statement (27) for the ordinary DBI theory is obvious, i.e., Relations between the effective variables The noncommutative DBI Lagrangian and the SL(2; R) duality of it can be described more generally, such that the noncommutativity parameters θ andθ become arbitrary [8]. Therefore, the extra moduli Φ andΦ are not zero (for example, the dual theory was given by the equation (14)). The equations (26) guide us to introduce the following relations between the effective metrics and the extra moduliG According to the equations (15) and (29) we obtaiñ This implies that, if the effective theory is noncommutative (ordinary) the dual theory of it also is noncommutative (ordinary). Note that if we introduce the equation (30) then we obtain the equations (29). The equations (16) and (30) give the following relation between the effective string couplingsG s and G s , The equations (29) the equations (29) and (30) where Q 1 , Q 2 ∈ {G, Φ, θ, g, B}. That is, the quantities Q µν (1) Q (2)ρσ andQ µν (1) Q (2)ρσ are SL(2; R) invariant. On the other hand, since we haveQ (i) = Q (i) for i = 1, 2, the equations (33) are consistent with the rule (27). According to the equations (29) and (31), for the following values of the dual field F , the dual Lagrangian (14) takes the form After substituding (34) in (14), for the lower signs one should perform transpose on the matrices in (14). Again use the identity det M = det M T . This Lagrangian describes the same noncommutative Dp-brane, which also is given by L, but with the modified tension, i.e.,¯ T p = η p−3 2 T p . For the D3-brane the theory is invariant. For the commutative case, the equation (35) reduces to the equation (8), as expected. The equation (34) with lower signs, is generalization of the equation (17). 
The field strengthF , extracted from the equation (34), is where the matrix Ω is Since the equation (34) The actions of SL(2; R) duality on these equations give Application of T -duality on the first and second equations of (26) and then comparison of the results with the equations (40), lead to the relations where η ′ is T -duality of η. These equations imply that, on the open string metric and noncommutativity parameter, unless η = η ′ , T -duality and SL(2; R) duality do not commute with each other. We shall show that for the nonzero modulus Φ, these equations also hold. In the presence of the extra modulus Φ, we have the following relation [14] G Therefore, there is the following relation between the noncommutativity parameter θ µν and its T -duality θ ′ µν , According to this equation and equation (30) we obtain That is, the T -duality and SL(2; R) duality versions of the noncommutativity parameter are related to each other. Action of the SL(2; R) duality on G ′ , Φ ′ and θ ′ of the equations (42) and (43) and also action of T -duality on G, Φ and θ of the equations (29) and (30), and then comparison of the results, give where Q ∈ {G, Φ}. Let us denote the dualities of Q as Q ′ ≡ T Q andQ ≡ SQ. Thus, the equations (45) take the forms Similarly, for the effective string coupling G s there is Therefore, on the variables G, Φ, θ and G s T -duality and SL(2; R) duality do not commute. In other words, the commutator of these dualities, is proportional to the effects of T -duality. The T -duality of the effective string coupling is . This implies Gs gs is a T -duality invariant quantity [14]. From this and the equation (31) we conclude that That is, the ratio Gs gs also is invariant under the SL(2; R) duality. Therefore, on the quantity where C (n) denotes the n-form R-R potential. The exponential should be expanded so that the total forms have the rank of the worldvolume of brane. In fact, this action is for a single BPS Dp-brane. The noncommutative Chern-Simons action for constant fields can be written as in the following [11] also see Ref. [12]. This action holds for general modulus Φ. It describes the R-R couplings to a noncommutative Dp-brane. Now we study the effects of the SL(2; R) group on this action. We can apply F from (34). For simplicity, choose the upper signs for F . In addition, the equations (25) and (30) can be used forB andθ. Adding all these together, we obtain Therefore, we should determine the dual fields {C (n) }. Since our attention is on the type IIB theory,C (n) is an even form. The dual fieldsC (0) ≡χ andC (2) ≡C have been given by the transformations (1). The field C (4) corresponds to the D3-brane. It was shown in [3,4] that the invariance of the equations of motion, extracted from the total action S DBI + S CS , under the SL(2; R) group, gives the transformations (1) and For the formsC (6) ,C (8) andC (10) one may use the Hodge duals of the formsC (4) ,C (2) and C (0) , which are available. However, we have the following results at least for n ≤ 4. Conclusions We studied the action of the SL(2; R) group on the noncommutative DBI theory with zero and nonzero extra modulus Φ. The invariance of the theory determines the corresponding noncommutative field strength F µν . As a consequence, the auxiliary field strengthF µν has been obtained. For a special value of the R-R 2-form, the SL(2; R) group on the noncommutative DBI Lagrangian produces a theory which describes an ordinary brane with the modified tension. 
For the D3-brane the resulted ordinary theory is DBI theory, as expected. We observed that the extracted equations of the ordinary DBI and noncommutative DBI theories under the exchange of the variables with their dual variables are invariant. In other words, twice dualizing of these theories and the corresponding variables and equations, does not change them. This implies that these theories and their SL(2; R) transformations, are dual of each other. By introducing some relations (which are consistent with the rule (27)) between the effective variables and their duals, we obtained some other equations that are SL(2; R) invariant. Therefore, another solution for the auxiliary gauge field was found. In this case, SL(2; R) duality of the noncommutative DBI theory is proportional to the noncommutative DBI theory. For the D3-brane the theory is selfdual. We showed that the noncommutativity parameter, its T -dual and its SL(2; R) dual have relations with each other. We found that on the open string metric, noncommutativity parameter, the extra modulus Φ and the effective string coupling, T -duality and SL(2; R) duality do not commute. We also observed that the ratio of the effective string coupling to the string coupling under the above dualities is invariant. Finally, we studied the effects of the SL(2; R) group on the noncommutative Chern-Simons action. Under two successive dualizations, similar the DBI theory, this action remains invariant. This also occurs for the ordinary Chern-Simons action.
2014-10-01T00:00:00.000Z
2002-07-06T00:00:00.000
{ "year": 2002, "sha1": "e2cb6c7915b2801463b7287bb07e96db9d6a95b2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0207055", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "32ebb9d5aadbd6809faa5bfc92c2494c8eacf420", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
119533380
pes2o/s2orc
v3-fos-license
Adiabatic radio frequency potentials for the coherent manipulation of matter waves Adiabatic dressed state potentials are created when magnetic sub-states of trapped atoms are coupled by a radio frequency field. We discuss their theoretical foundations and point out fundamental advantages over potentials purely based on static fields. The enhanced flexibility enables one to implement numerous novel configurations, including double wells, Mach-Zehnder and Sagnac interferometers which even allows for internal state-dependent atom manipulation. These can be realized using simple and highly integrated wire geometries on atom chips. I. INTRODUCTION Magnetic fields are powerful tools to control and manipulate the motion of neutral atoms [1,2]. These fields can either be created by (macroscopic) coils [3], free standing wires [4,5,6] or -as a result of the growing effort for miniaturization and integration -by surface-mounted micro fabricated structures, so-called atom chips [7]. Compared to macroscopic setups, atom chips provide high magnetic field gradients [8] and therefore enable the realization of tightly confining traps. The flexibility of designing complex current and charge patterns on the chip allows for considerable freedom to engineer 'potential landscapes' for neutral atoms. This has resulted in numerous designs of atom-optical elements such as traps, guides, beams splitters and interferometers [9,10,11,12,13] with possible applications ranging from quantum information processing [14,15] to high precision measurements [16]. Even though many of these designs have been demonstrated experimentally [7,17,18], there have been enormous difficulties to realize a coherent beam splitter using microscopically tailored static or slowly varying fields [19]. Most of these difficulties stem from the fact that Maxwell's equations constrain the freedom to design static magnetic potentials. One consequence is that the number of potential minima created is less or equal to the number of wires used [20]. Whereas regular strongly confining potential minima are created from quadrupole fields, the merging and splitting relies on higher order multipoles, usually the hexapole component, and thus results in a significantly weaker confinement. Consequently any dynamic splitting of a potential passes through a weakly confining region and creates an additional unwanted minimum, a loss channel. This splitting two in two makes the central splitting region very unstable and therefore truly adiabatic manipulations are hard to perform [21]. These deficiencies can be overcome by using not only static fields but combining them with oscillating radio frequency (RF) or micro-wave near fields. The adiabatic dressed state potentials created in this way do not show the unwanted loss channels, keep the confinement tight during the splitting process and consequently allow for a smooth transition from a single to two channels. Well controlled coherent splitting and simultaneous tight confinement of the atomic motion can be achieved even far away from the chip surface [22]. In addition adiabatic potentials permit the creation of non-trivial topologies like, for example, twodimensional traps [23,24], closed path interferometers and ring geometries. Also smooth transformations between different potential forms can be achieved. In this paper we first discuss the theoretical foundations of the underlying coupling creating the adiabatic potentials and present their advantages. 
These are then applied to create basic atom optical elements such as a double well, a Mach-Zehnder interferometer and a ring trap. We also outline the implementation of a state-dependent splitting scheme for atomic clouds. II. THEORETICAL DESCRIPTION OF DRESSED RF POTENTIALS We develop the theory by starting with the initial approach by Zobay and Garraway [23] and extending it to fully account for the vector properties of the magnetic fields involved. Only accounting for the latter leads to a complete description of the underlying couplings and the increased versatility of the resulting potentials. We consider an alkali atom in a hyper-fine level designated by the quantum number $F$. Assuming that $F$ remains a good quantum number even in the presence of a magnetic field, the atomic dynamics is governed by the Hamiltonian (1), where $g_F$ is the g-factor of the hyper-fine level and $\mathbf{F}$ the angular momentum operator. We assume $\mathbf{B}(\mathbf{r},t)$ to consist of a static part $\mathbf{B}_S(\mathbf{r})$ and an oscillatory (RF) part. As a first step we use the unitary transformation $U_S$ to transform the Hamiltonian into a frame where the interaction of the atom with $\mathbf{B}_S(\mathbf{r})$ is diagonal. Here we have exploited that rotating the operator $\mathbf{F}$ by means of $U_S$ is equivalent to rotating the magnetic field vector $\mathbf{B}_S(\mathbf{r})$ by applying the appropriate rotation matrix $R_S$. The operator $F_z$ can be represented as a diagonal matrix with the entries $-F \le m_F \le F$, where $m_F$ denotes the magnetic hyper-fine sub-levels. We proceed by applying another unitary operation, $U_R = \exp\!\left[-i\,\tfrac{g_F}{|g_F|}\,F_z\,\omega t\right]$, which effectuates a transformation into a frame that rotates with the angular velocity $\omega$ around the local direction of the static field, $\mathbf{e}_S = \mathbf{B}_S(\mathbf{r})/|\mathbf{B}_S(\mathbf{r})|$. The application of $U_R$ leads to the emergence of additional terms that oscillate with the frequency $2\omega$. In the so-called rotating wave approximation - which we employ in the following - these oscillating terms are neglected. The resulting time-independent Hamiltonian is given by equation (4). The term proportional to $\omega F_z$ arises from the transformation of the time derivative in the time-dependent Schrödinger equation. The field $\bar{\mathbf{B}}$ is obtained with the help of the matrix $R_\delta$, which performs a rotation around the axis $\mathbf{e}_S$ by the angle $\delta$. We want to emphasize that the sign of the rotation angle $\delta$ depends on the sign of the g-factor. Therefore atoms in different hyperfine states will see different RF potentials even if they have the same magnetic moment $\mu = m_F \times g_F$. The adiabatic potentials are the eigenvalues of the Hamiltonian (4); they are given by equation (7). In the case of zero phase shift ($\gamma = 0$), i.e. a linearly polarized RF field, the last term under the radical can be rewritten in a more convenient form, from which it is immediately apparent that only the RF field components perpendicular to the static field contribute. III. REALIZATION OF ATOM OPTICAL ELEMENTS A. Linear RF polarization - A double well As a first example we consider the creation of a double-well potential starting from a Ioffe-Pritchard trap [7,31], which is one of the most commonly used trapping field configurations. Its magnetic field is characterized by the gradient $G$ of the quadrupole field and the homogeneous Ioffe field strength $B_I$. We superimpose a homogeneous oscillatory RF field perpendicular to the Ioffe field. Without loss of generality we take $\mathbf{B}^A_{RF}(\mathbf{r}) = B_{RF}\,\mathbf{e}_x$ and $\mathbf{B}^B_{RF}(\mathbf{r}) = 0$. The unitary transformation which diagonalizes the static part of the Hamiltonian (1) is parameterized by the angle $\beta$, with $\cos\beta = B_I/|\mathbf{B}_S(\mathbf{r})|$, $\sin\beta = -G\rho/|\mathbf{B}_S(\mathbf{r})|$ and $|\mathbf{B}_S(\mathbf{r})| = \sqrt{G^2\rho^2 + B_I^2}$.
After the transformation into the rotated frame the adiabatic potential evaluates according to equation (7). Its minima are located at the azimuthal angles $\phi_1 = 0$ and $\phi_2 = \pi$ and at a radial position $\rho_0$; from these one obtains the potential bottom, the critical field strength $B_C$ of equation (14) and the detuning $\Delta$. The last term of equation (13) involves the mass $m$ of the atom considered. There are several advantages of an RF interferometer over a static two-wire configuration [11,25,26]: 1. The capability of performing a smooth transition from a true single well to a double well, by varying any of the parameters $\Delta$, $B_{RF}$ and $B_I$. In contrast, in the static case one encounters a transition from two vertically to two horizontally split minima if the strength of a homogeneous bias field is modulated [11]. In the vicinity of the splitting region this leads to unwanted tunneling processes into the second vertical (loss) channel just before the intended splitting sets in [25]. This poses a severe obstacle for establishing an adiabatic process. In addition, the RF realization of the double well conserves the parabolic confinement perpendicular to the splitting direction even in the vicinity of the splitting region, where the confinement in the static configuration relies in both directions solely on a quartic potential. 2. In the static realization the distance between the potential minima scales as $\rho_0 \propto \sqrt{b}$, where $b$ is the strength of a homogeneous magnetic field which eventually gives rise to the splitting of the hexapole into two quadrupole minima [21]. However, in order to have a stable splitting distance one has to precisely control the field strength $b$, i.e. keep its fluctuations $\Delta b$ small. This is extremely hard in the vicinity of $b = 0$ since $\Delta\rho_0/\Delta b \propto b^{-1/2}$. Unlike this, the splitting distance in the RF case obeys $\rho_0 \propto B_{RF}$ (for zero detuning). Thus we find $\rho_0$ to be much less sensitive to fluctuations in $B_{RF}$, particularly if $B_{RF}$ is small. 3. The RF adiabatic potential keeps much tighter confining wells even far away from the field-generating structures, i.e. the chip surface. This can be illustrated by considering an atom chip with structure size $d$. For the sake of simplicity the quadrupole for the RF setup shall be created by a side-guide configuration [7] at a distance $d$ above the chip surface. The static implementation of the double well consists of two wires separated by $2d$ [21]. Provided that the wire current $I$ and $B_I$ are equal for both setups, and assuming for simplicity $\Delta = 0$, one can compare the trap frequencies and the height of the barrier between the wells. The essence of the resulting expressions is their scaling with respect to the parameter $d$, which refers not only to the structure size but also to the distance of the potential wells from the chip surface. Compared to the static two-wire case, a similar RF trap allows for realizing the same confinement with larger structures and thus farther away from the chip surface. The latter is of particular importance as hereby coherence-destroying surface interactions [27,28] are strongly inhibited. The stronger increase of the potential barrier in the RF case is advantageous as it permits a true spatial separation of trapped atom clouds even for small splitting distances. The potential of the RF technique to coherently control the motion of atoms has recently enabled the demonstration of coherent splitting of a Bose-Einstein condensate (BEC) on an atom chip [22].
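To make the scalings discussed above concrete, the following sketch evaluates the adiabatic potential numerically for a Ioffe-Pritchard field dressed by a linearly polarized RF field. It uses the generic dressed-state form $V = m_F g_F \mu_B \sqrt{\Delta^2 + (B_\perp/2)^2}$ rather than the paper's own equations (which are not reproduced in this excerpt); the static-field values loosely follow the typical parameters quoted later in the text, while the RF amplitude and resonance condition are illustrative choices.

```python
import numpy as np

# Illustrative sketch (not the paper's own equations): RF-dressed double well in a
# Ioffe-Pritchard trap with a linearly polarized RF field along e_x.
# Assumed dressed-state form: V = mF*gF*MU_B*sqrt(delta^2 + (B_perp/2)^2), where
# delta = |B_S| - B_res and B_perp is the RF component perpendicular to B_S.

MU_B = 1.399624e6      # Bohr magneton / h, in Hz per Gauss
g_F, m_F = 0.5, 2      # example hyperfine level (e.g. 87Rb, F = 2)

G = 0.2                # quadrupole gradient, Gauss/um (typical value quoted in the text)
B_I = 1.0              # Ioffe field, Gauss (typical value quoted in the text)
B_RF = 0.5             # RF amplitude, Gauss (arbitrary illustrative choice)
B_res = 1.0            # field at which the RF is resonant: hbar*omega = |g_F|*MU_B*B_res

x = np.linspace(-3.0, 3.0, 601)            # transverse coordinate along the RF axis, um
B_S = np.sqrt(G**2 * x**2 + B_I**2)        # static field modulus of the Ioffe-Pritchard trap

delta = B_S - B_res                        # detuning expressed in field units (Gauss)
B_perp = B_RF * B_I / B_S                  # RF component perpendicular to the local static field

V = m_F * g_F * MU_B * np.sqrt(delta**2 + (B_perp / 2.0)**2)   # potential in Hz (E/h)

i_min = np.argmin(V)
print(f"well position  ~ {abs(x[i_min]):.2f} um")
print(f"barrier height ~ {(V[len(x) // 2] - V[i_min]) / 1e3:.1f} kHz")
```

Rescaling $B_{RF}$ in this sketch and re-locating the minima reproduces, at least qualitatively, the near-linear dependence $\rho_0 \propto B_{RF}$ noted in point 2 above.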
In figure 1a we show how a highly integrated realization of an RF double well could look. The quadrupole field is generated by a four-wire structure carrying counter-propagating DC currents. In between these wires there is a broad wire carrying an AC current. Sufficiently close to this wire, the resultant RF field can be considered homogeneous. The Ioffe field, pointing into the plane of view, is generated by additional wires which are not shown here [7]. The potential bottom of the RF double well increases proportionally to $(B_{RF} - B_C)^2$. This provides a convenient mechanism to achieve confinement in the longitudinal direction. A $z$-dependence of the RF amplitude, i.e. $B_{RF} = B_{RF}(z)$, can be achieved by shaping the RF wire [30]. For example, a wire geometry creating a symmetric increase of the current density around $z = 0$ will lead to a symmetric increase of the RF amplitude (see figure 1c). Hence, depending on the actual value of $B_{RF}$, three-dimensional confinement can be achieved; a residual variation of the potential bottom can be compensated by applying either a spatially varying Ioffe field or an additional external potential. The latter can be realized, for instance, by placing a charged wire underneath the chip [29]. The corresponding electric potential reads $U_{el}(\mathbf{r}) = -\frac{\alpha}{2}|\mathbf{E}(\mathbf{r})|^2$ [7]. B. Arbitrary RF polarization - A ring interferometer As a second example we construct a more complex trapping geometry by employing two phase-shifted RF fields. We consider two orthogonal RF fields of the form $\mathbf{B}^A_{RF}(\mathbf{r}) = \frac{B_{RF}}{\sqrt{2}}\,\mathbf{e}_x$ and $\mathbf{B}^B_{RF}(\mathbf{r}) = \frac{B_{RF}}{\sqrt{2}}\,\mathbf{e}_y$, which are superimposed on the static field $\mathbf{B}_S(\mathbf{r})$. According to equation (7) the corresponding adiabatic potential evaluates to the expression given in equation (19). For $\cos\delta > 0$ we find the minima and maxima of the potential at $\phi_{\min} = \frac{3}{4}\pi,\ \frac{7}{4}\pi$ and $\phi_{\max} = \frac{1}{4}\pi,\ \frac{5}{4}\pi$, respectively. If $\cos\delta < 0$ the positions of the minima and maxima simply exchange. Assuming $\rho \ll B_I/G$, the radial position of these extrema can be determined. Hence, for $\cos\delta > 0$ and $B_{RF} < \frac{2}{1+\cos\delta+\sin\delta}\,B_C$, or for $\cos\delta < 0$ and $B_{RF} < \frac{2}{1-\cos\delta+\sin\delta}\,B_C$, only a single minimum can be achieved. For $\delta = \frac{3}{2}\pi$ only a single minimum is found in any case. In figure 2a we show how such a setup can be realized in a highly integrated way. The static quadrupole field is generated by a three-wire setup. The two outer wires also serve as RF sources and are positioned such that two orthogonally polarized, homogeneous fields are created in the vicinity of the quadrupole. The versatility of the potential (19) lies in the fact that by simply varying the phase shift $\delta$, i.e. changing the polarization of the RF field, one can realize either a single well, a double well or a ring configuration. Even a rotating double well is achievable by appropriately tuning the phase and the RF amplitude. The double-well configuration with the strongest confinement is achieved for $\delta = 0$, i.e. vanishing relative phase shift of the RF fields. Increasing the phase shift from $\delta = 0$ to $\delta = \frac{\pi}{2}$, i.e. going from linear to circular polarization, results in a smooth transition to a ring-shaped potential of adjustable radius. This transition is shown in figure 2b. The potentials shown are calculated for the typical set of experimental parameters $B_I = 1$ Gauss, $G = 0.2$ Gauss/$\mu$m, $B_{RF} = 1.3$ Gauss and $\omega = 2\pi \times 1.26$ MHz. In order to generate confinement also in the longitudinal ($z$-)direction we impose a modulation of the RF amplitude of the form $B_{RF}(z) = B_{RF} + G^{(2)}_{RF}\,z^2$. In figure 2c the $m_F g_F E = k_B \times 1.1\,\mu$K isosurface for $G^{(2)}_{RF} = 0.05$ Gauss/m$^2$ is depicted.
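A small sketch of how the criteria just quoted translate into trap topology as the relative phase $\delta$ is varied. The value of $B_C$ is a placeholder (the text only defines it for the linearly polarized case), while $B_{RF} = 1.3$ Gauss follows the illustrative parameters above.

```python
import numpy as np

# Rough classification of the trapping topology produced by two orthogonal RF fields
# (B_RF/sqrt(2) along e_x and e_y) with relative phase delta, using only the
# single-minimum criteria quoted in the text. B_C is a placeholder value.

def classify(delta, B_RF, B_C):
    c, s = np.cos(delta), np.sin(delta)
    denom = 1.0 + abs(c) + s          # both branches of the quoted criterion collapse to this
    if denom <= 1e-12:                # delta = 3*pi/2: only a single minimum, for any B_RF
        return "single well"
    if B_RF < 2.0 * B_C / denom:
        return "single well"
    if np.isclose(c, 0.0):            # circular polarization (delta = pi/2)
        return "ring-shaped minimum"
    phis = "3/4 pi, 7/4 pi" if c > 0 else "1/4 pi, 5/4 pi"
    return f"two minima at phi = {phis} (approaching a ring as delta -> pi/2)"

B_C, B_RF = 1.0, 1.3                  # Gauss (B_C is an assumed placeholder)
for delta in [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 2]:
    print(f"delta = {delta:.2f} rad: {classify(delta, B_RF, B_C)}")
```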
The ring-shaped potential is thus capable of confining BECs, as the typical energy scale associated with such matter waves is in the nK regime. The setup allows one to examine the collective ground state of ultra-cold atoms trapped on a ring [32]. Also building a ring interferometer (Sagnac interferometer) for matter waves is possible. Coherence-preserving loading of the latter could be done by preparing an ultracold atomic ensemble in the static wire trap. Switching on the RF fields thereafter, and establishing the appropriate phase shift $\delta$, leads to a well controlled transition to the ring-shaped potential. Such traps are particularly suited for building gyroscopes or rotation sensors. Gupta et al. [33] have recently succeeded in loading a ring-shaped waveguide with a BEC. Their setup consists of millimeter-sized coils forming a magnetic quadrupole ring with diameters ranging from 1.2 to 3 mm. However, generating BECs which are phase coherent over the entire ring is extremely difficult in such a macroscopic trap. In order to avoid the necessity of cooling to extremely low temperatures it is beneficial to use small rings with diameters of a few micrometers. Also internal state-dependent manipulation of atoms can be achieved by using the potential (19). Let us consider, for instance, two hyperfine states $|1\rangle$ and $|2\rangle$ with the same magnetic moment $\mu = m_F g_F \mu_B$. Consequently, in a static field atoms in either of these states are subjected to the same trapping potential. Suppose now the RF field is switched on adiabatically such that $m'_F = m_F$. In the case where $g_F^{|1\rangle} = -g_F^{|2\rangle}$ we have $\delta^{|1\rangle} = -\delta^{|2\rangle}$. Thus, if one picks $\delta^{|1\rangle} = \frac{\pi}{2}$, atoms in state $|1\rangle$ see a ring whereas atoms in state $|2\rangle$ are confined to a single centered potential minimum, as seen in figure 2b. IV. CONCLUSION In conclusion, dressed RF adiabatic potentials are versatile tools for building atom optical elements and offer a number of significant advantages over their static implementations. RF-based traps provide tight confinement even at large surface distances and allow for a smooth transition from a single to a double well. Moreover, an RF double well is more robust against experimental fluctuations than its static counterpart, which is certainly advantageous for performing tunneling experiments. This technique paves the way to the realization of complex coherence-preserving potentials on a micro scale by using simple and highly integrated setups. This is of particular importance for such demanding applications as quantum information processing and high precision measurements based on matter wave interference. After submission of this manuscript several other works appeared which discuss applications of RF potentials [34,35,36]. We acknowledge financial support from the European Union, contract numbers IST-
2019-04-14T03:14:36.739Z
2005-10-10T00:00:00.000
{ "year": 2005, "sha1": "72166cc4834bcff933a35931a894f7a8ef3bb05d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/physics/0510076", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fa32d44a992f6433a12b0e8dfb55e395f6cbdde9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
232118618
pes2o/s2orc
v3-fos-license
Cymbopogon citratus 80% Methanolic Leaf Extract Inhibits Acetylcholinesterase in a Mouse Model Oxygen is a unique element indispensable for life. Free radicals are formed as a consequence of ATP (adenosine triphosphate) production by the mitochondria during oxidative phosphorylation, as cells use oxygen to generate energy. Most of the by-products formed as a result of these biochemical changes are reactive oxygen species (ROS) and reactive nitrogen species (RNS). A delicate balance between their two antagonistic effects is a vital aspect of life: at low or moderate concentrations, ROS and RNS exert beneficial effects on cellular responses and immune function. Therefore, the aim of this study was to evaluate the cholinesterase inhibitory effect of an 80% methanolic Cymbopogon citratus leaf extract. The leaves were extracted with 80% methanol, a toxicity study was carried out in mice, and cholinesterase inhibitory activities were evaluated in the same mice using Ellman's method. The results revealed high cholinesterase inhibitory activity of the crude extract, with highly significant differences (P < 0.001) between the group treated with the crude extract only, the group treated with the crude extract and exposed to arsenic, the group exposed to arsenic only, and the control group. It can be concluded that the low toxicity and high cholinesterase inhibitory effect of the crude extract are responsible for its therapeutic effects. Toxicity screening of this crude extract in another mammal, such as the rat, to reaffirm its toxicity profile is recommended, as are antioxidant screening and isolation of the bioactive compounds present in this plant part. INTRODUCTION Arsenic is an environmental contaminant, and its presence in water leads to environmental pollution [1]. Long-term exposure to arsenic can predispose to serious health problems, as it accounts for an increased risk of various disorders such as cardiovascular abnormalities, diabetes mellitus, neurotoxicity, and nephrotoxicity [2]. Similarly, arsenic intoxication is reported to be among the serious causes of hepatic dysfunction and hepatocellular toxicity [3]. In addition, carcinogenesis, particularly skin, bladder, lung, hepatic and brain tumors, has been shown to result from long-term arsenic exposure [4]. Several studies have also documented the ability of arsenic to stimulate the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase present in the plasma membrane of vascular endothelial cells and vascular smooth muscle cells (VSMC) [5]. Overstimulation of NADPH oxidase increases the generation of reactive oxygen species (ROS) such as superoxide and hydrogen peroxide, leading to cytotoxicity [6]. To address these problems, leaves of the medicinal plant Cymbopogon citratus were obtained, extracted and evaluated for their effect in preventing diseases caused by the accumulation of free radicals. Cymbopogon citratus, Stapf (lemon grass) is a widely used herb in tropical countries, especially in Southeast Africa [7]. The essential oil of the plant is used in aromatherapy. The compounds identified in Cymbopogon citratus are mainly terpenes, alcohols, ketones, aldehydes and esters. Some of the reported phytoconstituents are essential oils that contain citral α, citral β, nerol, geraniol, citronellal, terpinolene, geranyl acetate, myrcene and terpinol methylheptenone [8].
The plant also contains other reported phytoconstituents such as flavonoids and phenolic compounds, which include luteolin, isoorientin 2'-O-rhamnoside, quercetin, kaempferol and apigenin [9]. Studies indicate that Cymbopogon citratus possesses various pharmacological activities such as antiamoebic, antibacterial, antidiarrheal, antifilarial, antifungal and anti-inflammatory properties [10]. Various other effects, such as antimalarial, antimutagenic, antimycobacterial, antioxidant, hypoglycemic and neurobehavioral activities, have also been studied. These results are very encouraging and indicate that this herb should be studied more extensively to confirm them and to reveal other potential therapeutic effects. Therefore, the aim of this research was to evaluate the free radical protective effects of an 80% methanolic Cymbopogon citratus leaf extract by studying its effect on the acetylcholinesterase enzyme in mouse brain. To achieve this, mice, which are mammals physiologically close to human beings, were used in this research. Mice have been used as model organisms to study human biology due to the genetic and physiological similarities between the two species [11]. Moreover, mice and humans have evolved in and become adapted to different environments and so, despite their phylogenetic relatedness, they have become very different organisms [12]. Mice often respond to experimental interventions in ways that differ strikingly from humans [13]. Ethical Approval: This study was scrutinized and approved by the University Committee on Medical and Scientific Research Ethics, in accordance with the National Health Research Ethics Committee of Nigeria (NHREC) directive. Plant collection and identification: The plant leaves were obtained from the Department of Veterinary Pharmacology and Toxicology garden at the city campus of Usmanu Danfodiyo University Sokoto (UDUS). Botanists from the Biology department of UDUS authenticated the plant material and allocated a voucher number. Plant extraction: The plant material was cleaned and cut into small pieces with an anvil pruner (UK). The leaves were air-dried for two weeks at room temperature (26 ± 1 °C) in the laboratory and crushed into smaller pieces. Two hundred (200) g of leaf sample was soaked for 3 days in 1000 ml of 80% methanol in flat-bottomed flasks (Sigma-Aldrich, USA). The crude mixture was shaken every day at 26 °C to ensure adequate mixing and extraction. The process was repeated three times to extract most of the bioactive compounds present in the leaves. The extract was then filtered through Whatman filter paper (1.5, Sigma-Aldrich, USA) and concentrated to a semi-solid form at 42 °C with a rotary evaporator (IKA RV 10, USA). The resulting crude extract was weighed, transferred into sample bottles and stored at 4 °C until required. The percentage yield was calculated as the weight of the extract divided by the total weight of the ground powder, expressed as a percentage: Yield (%) = [wt of extract (g)/wt of plant material (g)] × 100. Plant sample dilution and dose preparation: A stock solution was prepared by dissolving 100 mg of the crude extract in 1 ml of 100% DMSO (100 mg/ml). DMSO was used to solubilize the crude extract in aqueous solvent. Sub-stocks in mg/ml were prepared by diluting the stock solution with distilled water to the concentrations of interest using two-fold serial dilution in sample bottles (Sigma-Aldrich, USA). DMSO (vehicle) was maintained at 0.1% in all extract concentrations.
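As a minimal illustration of the yield and dose-preparation arithmetic just described: the 200 g of powder and the 13.42% yield reported in the Results below are the only figures taken from the text, and the implied extract mass and dilution series simply spell that arithmetic out.

```python
# Sketch of the percentage-yield formula and the two-fold serial dilution scheme.

def percent_yield(extract_g, plant_material_g):
    """Yield (%) = weight of extract / weight of ground plant material x 100."""
    return extract_g / plant_material_g * 100.0

extract_g = 0.1342 * 200.0                    # ~26.8 g implied by the reported 13.42% yield
print(f"yield = {percent_yield(extract_g, 200.0):.2f} %")

def two_fold_series(stock_mg_per_ml, steps):
    """Concentrations obtained by repeated 1:2 dilution of the stock in distilled water."""
    return [stock_mg_per_ml / 2 ** i for i in range(steps)]

print(two_fold_series(100.0, 6))              # 100, 50, 25, 12.5, 6.25, 3.125 mg/ml
```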
Toxicity study: Healthy adult male and female mice, aged 1-2 months and weighing 80-100 g, were purchased from the animal house of the Faculty of Pharmaceutical Sciences, Usmanu Danfodiyo University Sokoto, and handled in accordance with the OECD guideline for the testing of chemicals (OECD, 2013). The animals were acclimatized for 2 weeks in the Biochemistry laboratory of the Faculty of Veterinary Medicine (26 ± 2 °C; 12:12 hour dark/light cycle) and fed commercial feed ad libitum. Acute toxicity test of arsenic in mice: Group I - five mice were exposed to arsenic for 60 minutes for 2 days using a nebulizer. Group II - five mice were exposed to arsenic for 40 minutes for 2 days using a nebulizer. Group III - five mice were exposed to arsenic for 20 minutes for 2 days using a nebulizer. Group IV - five mice were exposed to arsenic for 1 minute for 2 days using a nebulizer. Group V - five normal control mice were not exposed to arsenic. Chronic toxicity test of arsenic in mice: Group I - ten mice were exposed to arsenic for 40 minutes for 7 days using a nebulizer. Group II - ten mice were exposed to arsenic for 30 minutes for 7 days using a nebulizer. Group III - ten mice were exposed to arsenic for 20 minutes for 7 days using a nebulizer. Group IV - ten mice were exposed to arsenic for 10 minutes for 7 days using a nebulizer. Group V - ten normal control mice were not exposed to arsenic. Acute toxicity test of the crude extract in mice: Group I - five mice were given 2000 mg/kg of crude extract orally for 2 days. Group II - five mice were given 1000 mg/kg of crude leaf extract orally for 2 days. Group III - five mice were given 500 mg/kg of crude leaf extract orally for 2 days. Group IV - five normal control mice were given commercial feed and distilled water for 2 days. Chronic toxicity test of the crude extract in mice: Group I - five mice were given 1000 mg/kg of crude extract orally for 14 days. Group II - five mice were given 500 mg/kg of crude extract orally for 14 days. Group III - five mice were given 250 mg/kg of crude extract orally for 14 days. Group IV - five mice were given 125 mg/kg of crude extract orally for 14 days. Group V - five mice were given 62.5 mg/kg of crude extract orally for 14 days. Group VI - five normal control mice were given commercial feed and distilled water for 2 days. At the end of the experiment, the animals were sacrificed with the aid of chloroform. Each mouse was placed in dorsal recumbency, an incision was made with a scalpel blade, the skull was dissected, and the brain was gently removed and washed with distilled water. Total protein estimation: The protein content of homogenized mouse brain was determined using the Bradford method, with bovine serum albumin (BSA) (7.81-1000 µg/mL) as the standard. The materials used were Tris-HCl buffer (pH 7.4, prepared with 1% Triton X-100 and 0.1% PMSF), BSA, Bradford reagent, 96-well microtiter plates, a micropipette and tips, a dissecting needle, and a plate reader with a 590 or 595 nm filter (Berthold Technologies GmbH & Co. KG). A standard curve was plotted using BSA. Stock solutions of 1000 µg BSA/200 µl Tris-HCl (10 mg/200 ml) were prepared in 1 ml aliquots and frozen until needed, then thawed and diluted with Tris-HCl to different concentrations from 0 to 1000 µg/mL. Sterile phosphate-buffered saline (PBS), 200 µl minus the volume of sample, was added to each well. After running the assay, the standard curve was used to determine sample concentrations from the OD values.
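A minimal sketch of the standard-curve step just described, assuming a simple linear fit of OD against BSA concentration. Only the standard range (7.81-1000 µg/mL) follows the text; the OD readings and the unknown-sample value are invented placeholders.

```python
import numpy as np

# Bradford assay: fit a BSA standard curve and use it to convert OD readings
# into protein concentrations. OD values below are hypothetical.

bsa_ug_per_ml = np.array([7.81, 15.63, 31.25, 62.5, 125, 250, 500, 1000])
od_595 = np.array([0.05, 0.09, 0.16, 0.28, 0.50, 0.85, 1.40, 2.10])   # placeholder readings

slope, intercept = np.polyfit(bsa_ug_per_ml, od_595, 1)   # linear fit: OD = slope*C + intercept

def protein_conc(od):
    """Interpolate an unknown sample's concentration (ug/mL) from the standard curve."""
    return (od - intercept) / slope

print(f"sample OD 0.62 -> {protein_conc(0.62):.0f} ug/mL protein")
```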
Determination of cholinesterase activity of the plant extract: Screening of the cholinesterase activity of the crude extract on mouse brain was carried out. Animals were sacrificed with the aid of chloroform. Each mouse was placed in dorsal recumbency, an incision was made with a scalpel blade, the skull was dissected, and the brain was gently removed, washed with 50 mM Tris-HCl buffer, weighed and homogenized in a sample bottle with a tissue homogenizer (Polytron PT-6100, USA). Tris-HCl buffer prepared with 1% Triton X-100 and 0.1% phenylmethylsulfonyl fluoride (PMSF) (Sigma-Aldrich) was used as the homogenization solvent. Samples were centrifuged at 12,000 × g for 20 minutes (GRACE high-speed refrigerated centrifuge, India). The supernatant was transferred to a separate tube and used as the enzyme source. Based on Ellman's method, anti-cholinesterase (ChE) activity was measured using a modified 96-well microplate assay. Enzymatic hydrolysis of the substrate acetylthiocholine produces thiocholine, which reacts with Ellman's reagent (DTNB) to form 2-nitrobenzoate-5-mercaptothiocholine, measured at 405 nm. 50 mM Tris-HCl, pH 7.4, was used as the buffer throughout the experiment. The acetylcholinesterase (AChE) used in the assay came from the homogenized brains of the exposed mice.
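The Ellman readout just described can be converted into a specific activity along the following lines. The TNB extinction coefficient (~1.36 × 10⁴ M⁻¹ cm⁻¹), the path length and the sample values are assumptions made for illustration; the text does not state the values actually used.

```python
# Sketch: convert an Ellman-assay absorbance slope into cholinesterase specific activity.

EPSILON_TNB = 1.36e4      # M^-1 cm^-1, commonly cited value (assumed, not from the text)
PATH_CM = 1.0             # effective path length of the microplate well (assumed)

def che_specific_activity(delta_a405_per_min, assay_volume_ml, sample_volume_ml,
                          protein_mg_per_ml):
    """Specific activity in umol thiocholine hydrolysed per minute per mg protein."""
    rate_M_per_min = delta_a405_per_min / (EPSILON_TNB * PATH_CM)       # mol/L/min in the well
    rate_umol_per_min = rate_M_per_min * assay_volume_ml / 1000 * 1e6   # umol/min in the well
    protein_mg = protein_mg_per_ml * sample_volume_ml
    return rate_umol_per_min / protein_mg

# Example with placeholder numbers: 0.045 A/min, 200 ul assay, 20 ul sample, 1.5 mg/ml protein.
print(f"{che_specific_activity(0.045, 0.2, 0.02, 1.5):.3f} umol/min/mg")
```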
RESULTS Crude extract yield: The percentage yield of Cymbopogon citratus leaf extract after extraction, evaporation and concentration with a rotary evaporator at 42 °C was 13.42%. Acute toxicity study of arsenic: The acute toxicity test of arsenic in mice showed high mortality in the groups exposed to arsenic for 10 minutes. Up to 80% of the mice survived at 24 hours post-exposure and only 13.3% survived at 48 hours post-exposure in the groups exposed to arsenic for 20 minutes. Survival analysis of the groups exposed for 40 minutes showed that only 20% of the mice survived at day 1 post-exposure, and all mice had died by 48 hours post-exposure. All mice exposed to arsenic for 60 minutes died within 24 hours of exposure. Comparison of the survival curves using the log-rank (Mantel-Cox) test and the Gehan-Breslow-Wilcoxon test at 1 degree of freedom (df) showed a highly significant difference (p < 0.001) between the groups maintained on 0.1% DMSO and the groups exposed to the different arsenic exposures (Figure 1). Chronic toxicity study of arsenic: The chronic toxicity test of arsenic in mice showed high mortality with high arsenic exposure. Up to 66.6% of the mice survived to day 6 of the experiment and all had died by day 7 post-exposure in the groups exposed for 10 minutes. In the groups exposed for 20 minutes, 75% of the mice survived to day 4, 25% survived to day 5, and all had died by day 6 post-exposure. Survival analysis of the groups exposed to 0.75 mM arsenic showed that 85.7% survived to day 2, 71.4% to day 3 and 14.3% to day 4, with all mice dead by day 5 post-exposure. All mice exposed to 1.75 mM arsenic died within 24 hours of exposure. Comparison of the survival curves using the log-rank (Mantel-Cox) test at df 1 and the Gehan-Breslow-Wilcoxon test at df 4 showed a highly significant difference (p < 0.001) between the groups maintained on 0.1% DMSO and the groups exposed to the different arsenic exposures (Figure 2). Acute toxicity study of the crude extract: The acute toxicity test of the crude extract in mice showed that all mice not given the crude extract survived to day 2 of the experiment. Up to 100% of the mice survived at 84 hours post-exposure in the group given 500 mg/kg. Up to 100% of the mice survived the first 24 hours and only 20% survived at 48 hours post-exposure in the groups given 1000 mg/kg of the crude extract. Survival analysis also showed that all the mice given 2000 mg/kg died within 24 hours of the experiment. Comparison of the survival curves using the log-rank (Mantel-Cox) test at df 5 showed a significant difference (p < 0.01) between the group maintained on 0.1% DMSO and those given the different concentrations of the crude extract. The Gehan-Breslow-Wilcoxon test comparing the survival curves at df 1 showed a highly significant difference (p < 0.0001) between the groups maintained on 0.1% DMSO and the groups given the different concentrations of the extract (Figure 3). Chronic toxicity study of the crude extract: The chronic toxicity test of the crude extract in mice showed that up to 100% of the mice given the 62.5 mg/kg dose survived the full 14 days of the experiment. In the group given 125 mg/kg, 80% survived to day 12 and 60% survived to day 14 of the experiment. In the groups given 250 mg/kg of crude extract, 80% survived to day 10, 40% to day 11 and 20% to day 12, and the remainder died at day 13 of the experiment. The results also showed that, in the group given 500 mg/kg of the crude extract, up to 80% survived to day 7, 60% survived to day 8 and the remaining mice died by day 10 of the experiment. Survival analysis of the group given 1000 mg/kg crude extract revealed that 80% of the mice survived to day 4, 40% survived to day 5 and all remaining mice died by day 7 of the experiment. Comparison of the survival curves using the log-rank (Mantel-Cox) test at df 7 and the Gehan-Breslow-Wilcoxon test at df 1 showed a highly significant difference (p < 0.0001) between the groups maintained on 0.1% DMSO and the groups given the different concentrations of the crude extract (Figure 4). Total protein content: The total protein results showed the highest protein content in the control groups, followed by the groups treated with 62.5 mg/kg crude extract only and then the groups treated with 62.5 mg/kg crude extract and exposed to arsenic for 20 minutes, compared with the groups exposed to arsenic for 20 minutes only. There was a significant difference (p < 0.001) between the control groups and the groups treated with crude extract only compared with the groups exposed to arsenic only. There was no significant difference between mice exposed to arsenic only and those treated with the crude extract and then exposed to arsenic (Figure 5). Acetylcholinesterase inhibitory assay: Results of the acetylcholinesterase inhibition assay showed high activity in the control groups. Increased activity was also observed in the groups treated with crude extract only, as well as in the groups treated with crude extract and exposed to arsenic, compared with the groups exposed to arsenic only. There was a significant difference (p < 0.001) between the mice treated with arsenic and the control group, those treated with the extract only, and those treated with the extract and exposed to arsenic (Figure 6). DISCUSSION Commonly used laboratory solvents for the extraction of bioactive plant components are methanol, ethanol, acetone and ethyl acetate [14].
Previous studies reported a higher yield of phenolic compounds with acetone than with the other solvents in an experimental extraction of fruits and vegetables [15]. However, a high phenolic yield with methanol leaf extracts compared with acetone, hot water and chloroform leaf extracts has also been documented [16]. Variation in polarity may be among the major factors that affect the affinity for, and extraction of, bioactive compounds from plants [17]. Hence, considering the high capacity of polar solvents to extract large quantities of bioactive compounds, and their adoption by herbalists, 80% methanol was chosen as the extraction solvent. The percentage yield obtained after extraction of 200 g was 13.42%. This result is not in agreement with the finding reported by Lay et al. (2014), which showed a yield of 331 g (6.55% w/w) following extraction of Lophira lanceolata leaf with 80% methanol. The results of the acute toxicity study of arsenic in mice show that only the groups not exposed to arsenic (control) and the groups exposed to the low arsenic dose (10-minute exposure) survived. Deaths were recorded in the groups exposed to arsenic for 40 and 60 minutes before the last day of the experiment. There was a significant difference (p < 0.001) between the groups exposed to arsenic for the different times and the control group. The chronic toxicity study also revealed a survival rate above 50% in the groups exposed for 10 and 20 minutes compared with the other groups. There was a significant difference (p < 0.001) between the groups exposed to arsenic for the different times and the control groups. There is insufficient literature on the experimental effects of arsenic in mice, but several researchers have reported its toxic effects in humans and rats. Arsenic is recognized as a human carcinogen, especially in areas where the metal occurs naturally in groundwater or is introduced through mine waste sites and agricultural runoff [18]. Many studies have also reported arsenic's effect on embryological development and on altered signaling pathways [19]. Studies have suggested that arsenic causes teratogenic defects, and it has been shown to cross the mammalian placenta, affecting developing embryos whose mothers undergo exposure [20]. The results of the chronic toxicity study of the crude extract show that all mice given 250 mg/kg died before day 12 of the experiment, whereas mice given 62.5 and 125 mg/kg survived with little mortality. The calculated LC50 was 41.29 ± 0.9 mg/kg, and there was a significant difference (p < 0.001) between the groups given the different concentrations of crude extract and the control group. No literature was available on the toxic effects of Cymbopogon citratus in mice, but Altaf et al. (2013) reported no toxic effect in an acute experimental study with the same plant [21]. No histological changes were observed in the liver, and no periportal necrosis of hepatocytes or infiltration by lymphocytes and macrophages was seen in either control or treated groups. No difference was observed in the glomeruli or any other segment of the kidney tubules when compared with the respective normal rats [22]. The toxic effects of crude extracts are associated with the nature and concentration of the secondary metabolites present [23]. The effects of the crude extract (50 mg/mL) and arsenic (0.15 M) on total brain protein content and on acetylcholinesterase, butyrylcholinesterase and propionylcholinesterase inhibition in mice were measured with a fluorescence microplate reader.
There were highly significant differences (p < 0.001) between the groups exposed to arsenic, the groups treated with crude extract, and the groups treated with crude extract for 24 hours and then exposed to arsenic. No published work was found on the inhibitory effect of plant extracts on acetylcholinesterase in mice. Cholinesterase (choline esterase) is known to metabolize choline-based esters such as acetylcholine, which functions as a neurotransmitter at the neuromuscular junction, into choline and acetic acid [24]. This allows contraction and relaxation of muscle and impulse transmission across nerves and the innervated muscle. An imbalance of the cholinesterase enzyme affects the normal physiological activities of nerves and muscles [25]. A high concentration of this enzyme causes increased muscle contraction and overstimulation of nerves [26]. CONCLUSION AND RECOMMENDATION It can be concluded that the low toxicity and high cholinesterase inhibitory effect of the crude extract are responsible for its therapeutic effects. Toxicity screening of this crude extract in another mammal, such as the rat, to reaffirm its toxicity profile is recommended. Antioxidant screening, as well as isolation of the bioactive compounds present in this plant part, is also strongly recommended.
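As a rough illustration of how a lethal-dose estimate relates to the chronic-survival data quoted in the discussion, the sketch below interpolates the dose at 50% mortality on a log scale. The mortality fractions are read from the survival description above; a probit or logistic fit over the full data, presumably what produced the reported 41.29 ± 0.9 mg/kg, would be the rigorous route, so the interpolated value here is not expected to match.

```python
import numpy as np

# Rough LC50 estimate by linear interpolation on a log-dose scale between the
# doses bracketing 50% mortality. Doses follow the chronic study design; the
# mortality fractions are taken from the end-of-study survival figures above.

doses = np.array([62.5, 125.0, 250.0])      # mg/kg (higher doses all reached 100% mortality)
mortality = np.array([0.0, 0.4, 1.0])       # fraction dead at end of study

log_lc50 = np.interp(0.5, mortality, np.log10(doses))
print(f"interpolated LC50 ~ {10 ** log_lc50:.0f} mg/kg")
```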
2021-03-05T04:00:29.476Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d4a93629348c790b482bef8580e3507224c8a533", "oa_license": null, "oa_url": "https://doi.org/10.36348/sijb.2021.v04i01.002", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d4a93629348c790b482bef8580e3507224c8a533", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
24224692
pes2o/s2orc
v3-fos-license
Identification of Phosphorylation Sites in the Repetitive Carboxyl-terminal Domain of the Mouse RNA Polymerase I1 Largest Subunit* The largest subunit of eukaryotic RNA polymerase I1 contains a carboxyl-terminal domain (CTD) which is comprised of repetitive heptapeptides with a consen- sus sequence Tyr’-Ser2-Pro3-Thr4-Ser6-Pros-Ser7. We demonstrate here that the mouse CTD expressed in and purified from Escherichia coli can be phosphorylated in vitro by a p34cdc2~CDC28-~ontaining CTD kinase from mouse ascites tumor cells. The product of this reaction, a phosphorylated form of the CTD, contains phospho- serine and phosphothreonine, but not phosphotyrosine. The same phosphoamino acid content is observed in the in vivo phosphorylated CTD from a mouse cell line. Synthetic peptides with naturally occurring non-con-sensus heptapeptide sequences can also be phosphoryl- ated by CTD kinase in vitro. Phosphoamino acid analysis of these non-consensus heptapeptides together with direct sequencing of a phosphorylated heptapeptide reveals that serines (or threonines) at positions two and five are the sites phosphorylated by mouse CTD kinase. Thus, the -Ser(Thr)-Pro- motif common to p34cdc2/CDC28-c~ntaining protein kinases is the recogni- tion site for mouse CTD kinase. polymerase I1 function. The CTD, rich in serine and threonine, is known to be highly phosphorylated in vivo (Cadena and Dahmus, 1987) and this modification is thought to be important in CTD function. I n vitro transcription experiments have shown that RNA polymerase IIA becomes phosphorylated after formation of the preinitiation complex (Kim and Dahmus, 1989) but before the initiation of transcription (Laybourn and Dahmus, 1990). Thus, the shift from form IIA to IIo may be involved in the transition between initiation and elongation. We have reported the isolation of a CTD kinase from mouse ascites cells that contains two subunits (58 and 34 kDa), the smaller of which is a mouse p34cd'2/CDC28 homolo$ and the larger (58 kDa) is as yet uncharacterized (Cisek and Corden, 1989). More recently, we have identified a second mouse CTD kinase that contains, in addition to the p34cdc2/CDC28 subunit, a 62-kDa subunit that is recognized by anti-cyclin B antibodies (Cisek and Corden, 1990). This second enzyme elutes first from the DEAE column that fractionates the two activities, and we have therefore designated it CTD kinase El. The previously identified enzyme elutes second from DEAE and has been designated CTD kinase E2. Preliminary experiments have indicated that these two mouse CTD kinases have identical substrate specificities; in particular, their activities on CTD substrates are indistinguishable. Because the E2 enzyme is available in a more pure form and is more stable we have concentrated our studies on this enzyme. We show here that mouse CTD kinase E2 is able to phosphorylate a complete mouse CTD substrate. The end product of this phosphorylation reaction has an electrophoretic mobility in SDS gels similar to the CTD excised from the in vivo phosphorylated IIo subunit. Phosphoserine and phosphothreonine are detected in the in vitro phosphorylated CTD in the same ratio as seen in the in vivo phosphorylated CTD. Finally, we show that the target site of the mouse CTD kinase E2 is serine or threonine followed by a proline residue. This observation is consistent with the known specificity of p34'dc2/cDc28 kinases (Moreno and Nurse, 1990). 
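As a small illustration of the -Ser(Thr)-Pro- motif singled out above, the snippet below scans the consensus heptapeptide and reports which serine/threonine positions are followed by a proline; it reproduces the conclusion that positions 2 and 5, but not 4 or 7, are candidate sites.

```python
# Enumerate -Ser(Thr)-Pro- candidate sites in the consensus heptapeptide
# Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7 (one-letter code YSPTSPS).

consensus = "YSPTSPS"
for i, aa in enumerate(consensus, start=1):
    if aa in ("S", "T"):
        followed_by_pro = i < len(consensus) and consensus[i] == "P"
        label = "candidate site (-S/T-P-)" if followed_by_pro else "not followed by Pro"
        print(f"position {i} ({aa}): {label}")
```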
In the accompanying paper we show that CTD phosphorylation produces a conformational change that greatly extends the CTD. EXPERIMENTAL PROCEDURES Polyacrylnmide Gel Electrophoresis and Immunoblots-Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was carried out according to the method of Laemmli (1970) with a modification by Dreyfuss et al. (1984). Electrophoretic transfer of proteins on SDS gels to nitrocellulose filter (Schleicher & Schuell) was accomplished by the procedure of Towbin et al. (1979) as modified by Gerace et al. (1982). We followed the immunoblot procedure by Snow et d . (1987). The blotted nitrocellulose filter was initially incubated for 30 min at room temperature in 0.5% (v/v) Triton X-100, 2% bovine serum albumin (Sigma) in phosphate-buffered saline (Tritonbovine serum albumin/PBS). The blots were sequentially incubated for 1 h each in Tritonbovine serum albumin/PBS solution with 5 pg/ml of 8WG16, a monoclonal antibody (IgG) directed against the heptapeptide repeats in CTD (Thompson et d., 1989),2 pg/ml goat anti-mouse IgG (Cappel, Malvern, PA), and finally about lo6 cpm/ml '*'I-Protein A (Du Pont-New England Nuclear). Between each of the incubations, blots were washed four times for 10 min with 0.5% Triton X-I00 in PBS. The immunoblots were briefly air-dried and exposed to x-ray film. Construction of CTDa Expression Plasmid-Cloning procedures were performed according to Maniatis et d . (1982). The H~QII-HwIII DNA fragment, which encompasses exon 28 of the largest subunit gene of mouse RNA polymerase I1 (Ahearn et QL, 1987), was filled in to blunt the HpaII-recessed end. This fragment was then subcloned into the filled in EcoRI site of pGEM-2 vector (Stratagene, La Jolla, CA) by blunt-end ligation to generate pGEM-CTDa. The EcoRI DNA fragment coding for the CTD was then cut out with EcoRI and subcloned into the EcoRI site of pRX-1 (Rimm and Pollard, 1988) to generate pRX-CTDa (Fig. la). An mRNA whose expression is under the control of the trpE promoter was produced after transforming pRX-CTDa into Escherichia coli strain CAG-456 (lac am trp am pho am supCt* rpsL(SmR) mal am ntpR165, Baker et QL, 1984) and inducing the cells with indoacrylic acid (Sigma). Translation of this mRNA results in 30 amino acid trp E peptide that terminates at a stop codon in the 5' end of the CTD fragment. Translation apparently reinitiates at a methionine residue immediately preceding the heptapeptide repeats, generating a protein containing the entire 52 heptapeptide repeats of the CTD (Fig. Ib). Expression and Purification of CTDa-Growth and induction of CAG-456 cells containing pRX-CTDa plasmid were according to the procedure of Rimm and Pollard (1988) with the following modifications. Briefly, freshly transformed CAG-456 cells (slow in growth) were grown to stationary phase overnight at 30 "C in 0.5 liter of LB broth with 30 pg/ml tryptophan and 100 pg/ml carbenicillin (Sigma). The cells were then pelleted and resuspended into 2.5 liters of M9-CA minimal medium (Maniatis et aL, 1982) with 100 pg/ml carbenicillin. After 3 h at 30 "C, the cells were induced with indoacrylic acid at a final concentration of 10 pg/ml and continuously grown for another 5 h. At the end of growth and induction described above, the cells were harvested. The rest of the purification steps, except those indicated, were carried out at 4 "C. 
The pellets were transferred to 50 ml of lysis buffer consisting of 10 mM Tris-HC1, pH 8.0, 5 mM EDTA, 8% (w/v) sucrose, 0.5% (v/v) Triton X-100 with freshly added 0.2 mg/ml of lysozyme, 0.2 mM of phenylmethylsulfonyl fluoride, and 2 pg/ml each of chymostatin, leupeptin, pepstatin, and trypsin inhibitor (all from Sigma). The cells were lysed by incubating on ice for 15 min followed by sonication for 3 min (Sonifier 450, VWR Scientific, Philadelphia, PA). The whole cell extract was centrifuged at 22,530 X g for 25 min (SS34 rotor) to remove insoluble material. Into the supernatant, 12.5 ml of saturated ammonium sulfate ((NH&S04) solution was added dropwise to a final concentration of 20%. Protein fractionation was achieved by collecting the pellet of the (NH4),S04 mixture at 22,350 X g for 25 min. The CTD-containing precipitate was resuspended into 1.6 ml of 0.2% SDS, 50 mM HCI, and transferred to a 4-ml glass vial (VWR Scientific) for cyanogen bromide (CNBr) cleavage (Cadena and Dahmus, 1987;Jay, 1984). Solid CNBr (0.2 g, Pierce Chemical Co.) was added to the vial, and the sample was slowly vortexed at room temperature for 16 h. The CNBr cleavage reaction was terminated by the addition of 0.2 ml of 2-mercaptoethanol (Sigma). The CNBr-treated mixture was spun at 5,000 X g for 10 min (Microspin 24S, Sorvall Instruments) and the CNBr supernatant was immediately chromatographed on two Superose 12 HR 10/30 columns in series at a rate of 0.3 ml/min with 100 mM ammonium bicarbonate (NH4HCOa, pH 7.7) as elution buffer. Aliquots of each Superose 12 fraction and of the previous purification steps were assayed by immunoblot (see above) to determine the presence of the CTD (Fig. 2, Q and b). Positive fractions were pooled and further purified by reverse-phase chromatography on a CI8 column (4.6 X 220 mm, 10 pm, Pierce Chemical Co.). A 30-min multiphasic gradient using 0.06% trifluoroacetic acid as buffer A and 0.06% trifluoroacetic acid, 80% acetonitrile as buffer B was conducted at a rate of 0.6 ml/min and an oven temperature of 40 "C while monitoring absorbance at 274 nm. The gradient is started with 99.8% buffer A, 0.2% buffer B followed by: a step increase to 50% buffer A, 50% buffer B at 5 rnin, a linear increase of 3% buffer B/min from 5 to 15 min, 20% buffer B/min from 15 to 16 min, 100% buffer B from 16 to 20 min, and 99.8% buffer A, 0.2% buffer B at 20 min. The concentration of the CTD was measured by its optical density at 274 nm (A27J (assuming 52 X 1.4 x lo3 cm" M-' as the extinction coefficient for the CTDa, 52 times the molar extinction coefficient of tyrosine). Amino Acid Composition-60 pg of CTD from the HPLC peak was lyophilized and completely hydrolyzed in 100 pl of 6 N HCI (constant boiling, Pierce Chemical Co.) with 0.1% (v/v) phenol at 110 "C. The composition analysis of 10 pl of the hydrolysate was performed on a PICO.TAG amino acid analysis system (Millipore Waters, Taunton, MA). In Vitro Phosphorylation of CTDQ-A typical 10-pl radioactive labeling reaction consisted of 60 mM KCl, 50 mM Tris-HC1, pH 7.8, 10 mM MgC12, 0.1 mM dithiothreitol, 1 mM [Y-~*P]ATP (about 50 mCi/mmol, Du Pont-New England Nuclear), 50 ng of CTD kinase E2 complex (purified by Dr. L. Cisek), and 2 pmol of purified CTD. The reaction mixture was incubated at 30 "C for 4 h and stopped by 10 pl of 2 X SDS sample buffer containing 20 mM EDTA. Samples were boiled for 3 min and loaded onto SDS gels. 
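The CTD concentration estimate from the 274 nm absorbance described above is a straightforward Beer-Lambert calculation. A minimal sketch, using the extinction coefficient quoted in the text (52 × 1.4 × 10³ M⁻¹ cm⁻¹), an assumed 1 cm path length, an arbitrary example reading, and the calculated mass of ~39.9 kDa mentioned later in the text:

```python
# Beer-Lambert estimate of CTDa concentration from A274.

EPSILON_CTDA = 52 * 1.4e3   # M^-1 cm^-1, 52 tyrosines x molar extinction of tyrosine (from the text)
PATH_CM = 1.0               # cuvette path length (assumed)
MASS_KDA = 39.9             # calculated molecular mass of CTDa, kDa

def ctd_concentration_uM(a274):
    return a274 / (EPSILON_CTDA * PATH_CM) * 1e6   # convert mol/L to umol/L

a274 = 0.15                                         # example absorbance reading
conc_uM = ctd_concentration_uM(a274)
print(f"A274 = {a274} -> {conc_uM:.1f} uM CTDa (~{conc_uM * MASS_KDA:.0f} ug/mL)")
```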
After SDS-PAGE separation, gels were dried on two sheets of 3MM paper (Whatman, Maidstone, United Kingdom) and exposed to Kodak x-ray film. The intensities of bands on the autoradiogram were quantitated using LKB Ultrascan XL laser densitometer. Determination of Phosphoamino Acid Content-Synthetic heptapeptide substrates were prepared on an Applied Biosystems model 430A peptide synthesizer using t-Boc chemistry. Peptide concentration was determined by as described above for the CTD. Radioactive labeling of synthetic peptides was essentially the same as described above for CTD phosphorylation, except that the specific activity of [Y-~*P]ATP was 10 Ci/mmol with a final concentration of 4 mM and a peptide concentration of 1 mM. To determine the phosphoamino acid content, either one-or two-dimensional paper electrophoresis was carried out according to Bylund and Huang (1976). An aliquot of 32P-labeled peptide or CTDo was lyophilized, resuspended in 20 pl 6 N HCI in an Eppendorf tube and hydrolyzed at 110 "C for 3 h. The hydrolysate was dried under vacuum over NaOH and dissolved in 5 p1 of pH 1.9 buffer (1:1089 of formic acid/ acetic acid/water) containing 1 pg each of phosphoserine, phosphothreonine, and phosphotyrosine (all from Sigma). 2 p l of sample was applied to a 20 X 20-cm thin layer cellulose plate (Kodak) and electrophoresed according to Hunter and Sefton (1979). Some samples were electrophoresed on a second dimension in pH 3.5 buffer (1:10189 of pyridine/acetic acid/water). The air-dried plates were sprayed with 0.2% (w/v) ninhydrin in ethanol and exposed to x-ray film. Phosphopeptide Sequencing-Synthetic peptide T (see Fig. 5a for sequence) was 32P-labeled as described above. After labeling, 0.5 pg of sequencing grade trypsin (Boehringer Mannheim) was added for a 4-h 37 "C digestion. The trypsin-cleaved [3'P]phosphopeptide T was separated from free [y3*P]ATP over a 5 X 1000-mm Bio-Gel P2 column with 1 M HAC as running buffer. The phosphopeptide T fractions were dried and resuspended in deionized distilled water three times. Equal amounts of [''Plphosphopeptide (about 200 cpm/ pmol, 400 pmol) were loaded onto each of six precut sequencing filters. Sequencing was performed according to Wang et al. (1988) on an Applied Biosystems model 475A sequenator employing a proline program at cycles 3 and 6. Phenylthiohydantoin amino acids were detected with a model 120A phenylthiohydantoin analyzer. After each cycle, the sequenator was interrupted, a piece of filter was taken out, a piece of blank filter was replaced, and the sequencing was resumed. Free phosphates generated during sequencing and the phosphopeptides retained on each piece of filter were extracted by immersing the filter in 200 pl of deionized distilled water and gently vortexing for 3 h. Approximately equal counts of extract for each cycle were loaded onto a thin layer cellulose plate and electrophoresed at pH 1.9 as described above with [J2P]orthophosphoric acid (Du Pont-New England Nuclear) as a standard marker. After autoradiography, the free phosphate and phosphopeptide areas on the plate were cut out using the autoradiogram as a template. Samples were counted in a scintillation counter to quantitate the percentage of free phosphate released at each cycle. Purification of in Vivo "'P-Labeled CTDo-Mouse thymidine kinase-deficient cells (Ltk-) were grown as monolayer cultures at 37 "C in minimal essential media (GIBCO) supplemented with 10% (v/v) fetal bovine serum. 
Ten 160-cm' flasks (Falcon Plastics, Oxnard, CA) of 80% confluent Ltk-cells were rinsed with PBS, trypsinized, and a. pooled into two flasks with 20 ml of phosphate-free minimal essential medium each. Cells were starved in phosphate-free media for 20 min before [32P]orthophosphoric acid was added to a final radioactivity of 100 mCi/liter. After labeling for 2 h with occasional rocking, cells were collected and washed twice with PBS. This 3ZP-labeled Ltk-cell pellet was mixed with another PBS-washed cell pellet from ten 160-cm2 flasks of unlabeled Spodoptera frugiperda (Sf9) culture cell 2.5 days post-infection by a polyhedrin-CTD recombinant baculovirus expressing the phosphorylated form of CTD.3 The mixed cell pellets were resuspended in 50 ml of lysis buffer consisting of 200 mM LiC1, 20 mM Tris-HC1, pH 8.0, 1 mM EDTA, 0.5% (v/v) Nonidet P-40, with freshly added 0.2 mM dithiothreitol, 0.1 mM phenylmethylsulfonyl fluoride, and 2 pg/ml each of chymostatin, leupeptin, pepstatin, and trypsin inhibitor. This cell lysis suspension was rotated at 4 "C for 20 min and then cell debris was removed by centrifuging at 22,530 X g for 25 min. The subsequent purification steps, (NH4),S04 precipitation, CNBr cleavage, and fast protein liquid chromatography Superose 12 fractionation were the same as described above for purification of unphosphorylated CTD from E. coli, except that the final concentration of (NH4)k304 precipitation was 40% instead of 20%. The '"P-CTD, fraction after Superose 12 gel filtration was lyophilized and subjected to phosphoamino acid analysis as described above. Expression and Purification of CTDa-The pRX-CTDa/ CAG-456 system described in Fig. 1 was used to express the mouse CTD. This expression system produces between 100 pg and 1 mg of CTDa protein/liter of culture. Fig. 2a shows an immunoblot of the bacterially expressed protein in the initial stages of purification. The CTD is soluble in the cell lysis buffer (lane 2 ) . After precipitation in 20% (NH4)S04, the pellet is resuspended in 1.6 ml of CNBr cleavage solution. There are no other methionine residues in the mouse CTD '' J. Zhang and J. Corden, unpublished results. besides the reinitiation methionine just prior to the repetitive structure (see Fig. lb). This property has been exploited in the purification of the repetitive domain by using CNBr cleavage, which degrades any other methionine-containing proteins in the extract to smaller peptide fragments. Low recovery of the CTD from the CNBr reaction due to poor solubility is compensated by the substantial purification achieved when the supernatant of the CNBr cleavage reaction is fractionated by Superose 12 gel filtration. The CTDa with a calculated molecular mass of 39,883 is well separated from most of the CNBr-degradation products that are in the included volume during gel filtration (Fig. 2b). In the final purification step, the CTDa is purified to homogeneity by HPLC reverse-phase chromatography (Fig. 2c). The purity of the CTD is based on several lines of evidence. First, CTDa is eluted as a single peak on the C18 column with an absorbance maximum of 274 nm, as expected for a protein that contains tyrosine as the only aromatic amino acid (Fig. 2c). Second, the material in the Cls peak has an observed amino acid composition quite close to the calculated value for the mouse CTD (Table I) with a standard deviation, u2 value4 of 0.61 for all of the 15 amino acids present in CTD. 
Finally, conventional silver staining (Morrissey, 1981) reveals no bands when several micrograms of pure CTD are electrophoresed on a 12.5% SDS gel (data not shown). Failure of the CTD to stain is likely due to its unusual amino acid composition. The failure to detect contaminant bands indicates that the CTD is highly pure. Purified mouse CTDa has an aberrant mobility on SDS-PAGE, probably due to its high content of proline. Furthermore, CTDa migrates at different positions relative to standard Mr markers on SDS gels of different percentages. The apparent Mr of 65 kDa on a 10% SDS gel for mouse CTDa is in close agreement with the mobility of the HeLa cell and calf thymus CTDa excised from RNA polymerase IIa by CNBr cleavage (Cadena and Dahmus, 1987). Phosphorylation of CTDa by CTD Kinase E2 Causes a Shift to CTDo - Incubation of CTD kinase E2 with CTDa in the presence of [γ-32P]ATP results in a phosphorylated form of the CTD that migrates at a position of 95 kDa on a 10% SDS gel (Fig. 3). The mobility shift from unphosphorylated CTD (CTDa) to phosphorylated CTD (CTDo) is similar to the mobility difference between the IIa and IIo forms of RNA polymerase II. Cadena and Dahmus (1987) have reported that CNBr cleavage of in vitro 32P-labeled calf thymus RNA polymerase IIo or in vivo 32P-labeled HeLa cell RNA polymerase IIo releases a phosphopeptide ranging from 75 to 90 kDa. The single band at 95 kDa we observe here is probably the completely phosphorylated form of the CTD. When mouse Ltk- cells are 32P-labeled in vivo, SDS-PAGE of 32P-CTD purified with carrier Sf9 cells infected by a polyhedrin-CTD recombinant baculovirus (see under "Experimental Procedures") results in a single 95-kDa band on a 10% gel (data not shown). Taken together, these results indicate that CTD kinase E2 produces the same shift in CTD mobility seen in vivo. This point is addressed in more detail in the accompanying paper. Phosphoamino Acid Content of in Vitro and in Vivo Labeled CTD - As a first step in assessing the specificity of in vitro phosphorylation by mouse CTD kinase E2, we determined the phosphoamino acid content of the CTD labeled in vivo or in vitro. Two-dimensional paper electrophoresis was performed to determine whether tyrosine, serine, or threonine is phosphorylated in the CTD after in vivo labeling of mouse tissue culture cells or in vitro 32P-labeling with CTD kinase E2.
Neither in vivo nor in vitro phosphorylated CTD has any detectable amount of phosphotyrosine; the autoradiograph of the in vivo labeled sample has been intentionally overexposed to demonstrate the lack of phosphotyrosine. In both the in vitro and in vivo cases, serine is the major phosphorylation site, while threonine is a minor one. Dahmus (1981) reported a similar pattern of phosphoamino acid composition for in vivo labeled HeLa RNA polymerase IIo. Phosphorylation of Non-consensus Heptapeptides at Positions 2 and 5-It has been shown that CTD kinase E2 phosphorylates only serines in a 28-amino acid synthetic peptide with consensus repeats, (Ser2-Pro3-Thr4-Ser5-Pro6-Ser7-Tyr1)4 (Cisek and Corden, 1989). Threonine in this consensus peptide is not phosphorylated. Therefore, phosphothreonines in the phosphorylated CTD are unlikely to be derived from threonine at position 4. The mouse CTD contains several non-consensus heptapeptide repeats in which threonine replaces serine at position 2 or 5. To determine which threonines (positions 2, 5, or 7) can act as phosphate acceptors, a series of 28-amino acid peptides was synthesized, all of which correspond to non-consensus variants occurring naturally in the mouse CTD, especially at its carboxyl terminus (Fig. 5a). These synthetic substrates were phosphorylated with CTD kinase E2 in the presence of [γ-32P]ATP. [Fig. 4 legend: 32P-labeled CTDo phosphorylated in vivo or in vitro (see "Experimental Procedures") was partially hydrolyzed according to Bylund and Huang (1975), and the phosphoamino acids were separated by two-dimensional electrophoresis according to Hunter and Sefton (1979).] [Fig. 5 legend: In vitro phosphorylation of threonines at positions 2 and 5. A series of synthetic heptapeptides was 32P-labeled by CTD kinase E2, and the [32P]phosphopeptides were partially hydrolyzed in 6 N HCl at 110 °C for paper electrophoresis at pH 1.9. Panel a lists the sequences of synthetic peptides N, R, K, V, and T, with their corresponding repeat numbers in the CTD (see Fig. 5 in Corden et al., 1985); the non-consensus amino acids in the peptides are underlined. Panel b shows autoradiograms of the phosphoamino acid contents of the [32P]phosphopeptides after hydrolysis and electrophoretic separation, with the positions of phosphoserine and phosphothreonine indicated at right; lanes N, R, and V were exposed 4 h and lanes K and T 16 h.] The phosphopeptides were then subjected to acid hydrolysis, and phosphoserine and phosphothreonine were separated by one-dimensional paper electrophoresis at pH 1.9. As shown in Fig. 5b, there is no detectable phosphothreonine in phosphopeptide T even after prolonged exposure, indicating that the non-consensus threonine at position 7 is not phosphorylated. All of the other phosphopeptides shown in Fig. 5 contain phosphothreonine. Phosphothreonine in peptides K and V can only derive from the threonine at position 5, since the possibility of phosphorylation of threonine 7 has been ruled out. For the same reason, phosphothreonines in phosphopeptides N and R are derived from threonine at position 2. The weak phosphorylation of threonine in phosphopeptide K is probably due to its location in the last repeat of the peptide, making it a less favorable phosphorylation site. Taken together, these results are consistent with phosphorylation at both the second and fifth positions in the heptapeptide repeat.
-Ser(Thr)-Pro- Is the Recognition Site for CTD Kinase E2-The above phosphoamino acid composition analysis demonstrates that threonines at positions 2 and 5, but not 7, can be phosphorylated, strongly suggesting that serines 2 and 5, but not 7, are the phosphorylation sites for CTD kinase E2. To confirm this conjecture directly, [32P]phosphopeptide T was sequenced using a method developed by Wang et al. (1988). Because the first and last repeats in peptide T tend to be poorly phosphorylated (see, for example, phosphopeptide K in Fig. 5b), the 32P-labeled peptide T was first cut with trypsin to generate two phosphopeptides with exactly the same sequence. Trypsin cleavage exposes the third repeat in peptide T directly at the amino terminus for sequencing. At each of the six cycles of sequencing, one piece of precut sample filter was taken out of the sequenator. The phosphate and phosphopeptides on the filter were extracted and separated by electrophoresis. Fig. 6 shows the percentage of released phosphate at each sequencing cycle. Prior to sequencing and after the first cycle there is only a background of nonspecific phosphate release. Edman degradation of the serine at position 2 causes a release of 9.4% of the total phosphate, compared with a 1.6% background, indicating that this serine is phosphorylated. The minor increase in phosphate released at cycles three and four (to 11.2 and 11.7%, respectively) is mainly due to carry-over, since neither phosphoserine nor proline is cleaved completely. The lack of major phosphate release at cycle four is consistent with the earlier result that threonine at position 4 is not phosphorylated. The next significant increase in phosphate release, from 11.7% at cycle four to 20.4% at cycle five, comes when the serine at position 5 is degraded, indicating that serine 5 is another phosphorylation site for CTD kinase E2. These results directly demonstrate that positions 2 and 5 are the phosphorylation sites for CTD kinase E2. [Fig. 6 legend (x axis, Amino Acid: Y S P T S P): Sequence of the phosphorylated heptapeptide. 32P-labeled peptide T was cleaved with trypsin, separated from free [γ-32P]ATP by Bio-Gel P2 chromatography, and loaded onto six pieces of precut filter for sequencing following the method of Wang et al. (1988). At each of the six sequencing cycles a piece of filter was taken out, phosphopeptides and phosphate were extracted, separated by electrophoresis at pH 1.9, and quantitated by scintillation counting; shown is the percentage of counts released as orthophosphate at each cycle of sequencing.] -Ser-Pro-Lys(Arg)- Is a Better Substrate for Phosphorylation Than -Ser-Pro-Ser--The identification of serines at positions 2 and 5 as the recognition sites of CTD kinase E2 is consistent with the known preference of p34cdc2/CDC28 kinases for this sequence (Moreno and Nurse, 1990). To see if residues other than the dipeptide -Ser-Pro- are important in the recognition of the heptapeptide by CTD kinase E2, we compared the different peptides described in Fig. 5 for their abilities to compete with the full-length CTDa in mixed-substrate reactions. It is clear from the results shown in Fig. 7 that the basic peptides (R and K) compete more effectively than the consensus (peptide S). Peptide N does not seem to compete in this reaction. These results indicate that, among the heptapeptides present in the CTD, mouse CTD kinase E2 prefers those that contain non-consensus basic residues.
Such a preference is consistent with the observation that several of the known p34cdc2/CDC28 recognition sites contain basic residues (Moreno and Nurse, 1990). DISCUSSION We have previously reported the identification and purification of two protein kinases that phosphorylate CTD peptides in vitro (Cisek and Corden, 1989, 1990). In this study we show that a mouse CTD kinase (E2) phosphorylates a complete CTD substrate, resulting in a shift in electrophoretic mobility of the CTD characteristic of the in vivo shift from IIa to IIo. This result not only supports the argument that CTD kinase E2 participates in generating IIo in vivo, but also demonstrates that the generation of form IIo, which has been used by others (Guilfoyle, 1989; Lee and Greenleaf, 1989; Payne et al., 1989) to identify and purify CTD kinases, is also a property of mouse CTD kinase E2. In the accompanying paper we show that the phosphorylation of the mouse CTD by CTD kinase E2 is associated with a profound conformational change. One criterion for judging the in vivo relevance of a protein kinase purified by an in vitro phosphorylation assay is to show that the same substrate residues are phosphorylated in vivo and in vitro. As a first step in this analysis we have shown here that the ratio of phosphoserine to phosphothreonine in the in vivo and in vitro labeled CTD is similar (Fig. 4). We have identified the sites of phosphorylation in vitro both by examination of the phosphoamino acids in labeled non-consensus peptides and through direct sequencing of a labeled phosphopeptide. These results demonstrate that mouse CTD kinase E2 phosphorylates positions 2 and 5 in the consensus sequence Tyr1-Ser2-Pro3-Thr4-Ser5-Pro6-Ser7. [Fig. 7 legend (lanes S, R, K, N; x axis: Peptide): See Fig. 5a for the sequences of peptides K, R, and N; peptide S has the sequence (Ser2-Pro3-Thr4-Ser5-Pro6-Ser7-Tyr1)4. 32P-CTD labeled under these conditions was run on a 10% SDS gel, and the autoradiogram of the CTD region of the gel is shown with the relative intensities for each band below (determined by densitometry).] While we have not sequenced the in vivo labeled CTD, several pieces of evidence suggest that positions 2 and 5 are phosphorylated in vivo. The presence of threonine at position 2 or 5 in some of the 52 repeats in the mouse CTD (Corden et al., 1985) leads to the prediction that the ratio of phosphoserine to phosphothreonine should be ~10:1 (there are 94 serines and 9 threonines at positions 2 and 5). This approximate ratio, observed both in the in vivo and in vitro labeled CTDo, is consistent with the idea that positions 2 and 5 are phosphorylated in vivo. A second argument that mouse CTD kinase E2 phosphorylates the CTD in vivo comes from comparing the recognition sites of other p34cdc2/CDC28-containing kinases: a serine (or threonine) residue followed by a proline residue. -Ser(Thr)-Pro- sites in histone H1, lamins, and several other substrates have been clearly demonstrated to be phosphorylated by p34cdc2/CDC28 kinases (Moreno and Nurse, 1990). The similarity of the CTD recognition site, and the identification of p34cdc2/CDC28 as a component of CTD kinase E2 in mouse, strongly support the idea that CTD kinase E2 phosphorylates the CTD in vivo. The mouse CTD kinase E2 used in these studies consists of p34cdc2/CDC28 complexed with a 58-kDa subunit. Recently, we (Cisek and Corden, 1990) have described a second CTD kinase, E1, activity from mouse cells. This enzyme contains p34cdc2/CDC28 complexed with cyclin B to form a complex similar to M-phase-promoting factor (Nurse, 1990). Thus, in mouse cells there are at least two activities that may phosphorylate the CTD.
Lee and Greenleaf have described a yeast CTD kinase, designated CTK1, that contains subunits of 58, 38, and 32 kDa. The 58-kDa subunit contains homologies to the p34cdc2/CDC28 family of kinase catalytic subunits, while the 32- and 38-kDa subunits are not recognized by anti-CDC28 antibodies. This result indicates that, at least in yeast, another class of CTD kinases exists. This apparent multiplicity of CTD kinases may explain why mutations in CDC28 (Kolodziej et al., 1990) or CTK1 do not result in decreased RNA polymerase II phosphorylation in vivo. One interesting property of p34cdc2/CDC28 kinases is their variation in activity during the cell cycle (Nurse, 1990). Resulting changes in the phosphorylation state of p34cdc2/CDC28 substrates are thought to be important for the transitions between G2 and M-phase and between G1 and S-phase. How phosphorylation of RNA polymerase II might affect transcription, and how such transcriptional regulation might contribute to cell-cycle transitions, is currently under study.
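The Discussion turns the repeat composition into a quantitative prediction: with 94 serines and 9 threonines at positions 2 and 5 of the 52 heptad repeats, complete phosphorylation of both positions would give a phosphoserine:phosphothreonine ratio of roughly 10:1. The short Python sketch below makes that bookkeeping explicit; it is illustrative only, since the actual 52 mouse CTD repeat sequences are not listed above, so the `repeats` list holds invented examples.

```python
# A minimal sketch of the counting behind the predicted ~10:1
# phosphoserine:phosphothreonine ratio.  The real 52 mouse CTD repeats are
# not reproduced here; with that list the counts should come out to
# 94 Ser and 9 Thr at positions 2 and 5, i.e. a ratio of ~10:1.
def ser_thr_ratio(repeats):
    """Count Ser and Thr at heptad positions 2 and 5 (1-based) and return
    (n_ser, n_thr, ratio)."""
    n_ser = n_thr = 0
    for rep in repeats:
        for pos in (2, 5):
            residue = rep[pos - 1]
            if residue == "S":
                n_ser += 1
            elif residue == "T":
                n_thr += 1
    ratio = n_ser / n_thr if n_thr else float("inf")
    return n_ser, n_thr, ratio

repeats = ["YSPTSPS",   # consensus repeat
           "YTPTSPS",   # Thr at position 2 (invented example)
           "YSPTTPS"]   # Thr at position 5 (invented example)
print(ser_thr_ratio(repeats))   # (4, 2, 2.0) for these toy repeats
```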
2018-04-03T01:57:04.548Z
1991-02-05T00:00:00.000
{ "year": 1991, "sha1": "d2416e16c109919c96db1bb58d8efc454cdb400e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(18)52242-0", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "7d9f87c58a6b14a88de3d602210a467426f8ad4f", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
14155024
pes2o/s2orc
v3-fos-license
Hopf Structure and Green Ansatz of Deformed Parastatistics Algebras Deformed parabose and parafermi algebras are revised and endowed with Hopf structure in a natural way. The noncocommutative coproduct allows for construction of parastatistics Fock-like representations, built out of the simplest deformed bose and fermi representations. The construction gives rise to quadratic algebras of deformed anomalous commutation relations which define the generalized Green ansatz. Introduction Wigner was the first to remark that the canonical quantization was not the most general quantization scheme consistent with the Heisenberg equations of motions [1]. Parastatistics was introduced by Green [2] as a general quantization method of quantum field theory different from the cannonical Bose and Fermi quantization. This generalized statistics is based on two types of algebras with trilinear exchange relations, namely the parafermi and parabose algebras. The representations of the parafermi and parabose algebras are labelled by a non-negative integer p -the order of parastatistics. The simplest non-trivial representations arise for p = 1 and coincide with the usual Bose(Fermi) Fock representations. The states in a Bose(Fermi) Fock space are totally symmetric(antisymmetric), i.e., they transform according to the one dimensional representions of the symmetric group. Fock-like representations of parastatistics of order p ≥ 2 correspond to higher-dimensional representations of the symmetric group in the Hilbert space of multicomponent fields. At the core of the interest in generalized statistics is (twodimensional) statistical mechanics of phenomena such as fractional Hall effect, high-T c superconductivity. The experiments on quantum Hall effect confirm the existence of fractionally charged excitations [3]. Models with fractional statistics and infinite statistics have been explored, termed as anyon statistics [4] and quon statistics [5]. The attempts to develop nonstandard quantum statistics evolved naturally to the study of deformed parastatistics algebras. The guiding principle in these developments is the isomorphism between the parabose algebra pB(n), parafermi algebra pF(n) (with n degrees of freedom) and the universal enveloping algebra of the orthosymplectic algebra osp(1|2n), resp. orthogonal algebra so(2n + 1). The quantum counteparts pB q (n) and pF q (n) were defined to be isomorphic as algebras to the quantized universal enveloping algebras (QUEA) U q (osp(1|2n)) [6], resp. U q (so(2n + 1)) [7]. In the present work we write a complete basis of defining relations of the algebras pB q (n) and pF q (n)(see Theorem 1) extending what has been done in [6,7,8]. The novelty with respect to the known definition of deformed parastatistics is the system of homogeneous relations (9,10). They allow to continue the isomorphism of the algebras as Hopf algebra morphism (see Theorem 2) which endows the deformed parastatistics algebra at hand with natural Hopf structure. With the defined Hopf structure the parastatistics algebras pB q (n) and pF q (n) become isomorphic as Hopf algebras to the QUEA U q (osp(1|2n)) and U q (so(2n + 1)), respectively. The Green ansatz is intimately related to the coproduct on the parastatistics algebras; it was realized that every parastatistics algebra representation of arbitrary order p arises through the iterated coproduct [9](see also [10]). 
We make use of the noncocommutative coproduct on the Hopf parastatistics algebras pB q (n) and pF q (n) to construct a quadratic algebra which is a deformation of the Green ansatz for the classical algebras pB(n) and pF(n). The paper is organized as follows. In section 2 we define the relations of the quantized parastatistics. Section 3 is devoted to the analysis of the Hopf algebra structure of the proposed quantized parastatistics algebras. In Section 4 we show that the q-deformed bosonic (fermionic) oscillator algebra arises as the simplest nontrivial representation of the deformed parastatistics. Further in Section 5 the Green ansatz is generalized for the deformed parastatistics algebras pB q (n) and pF q (n). Throughout the text by an associative algebra we mean an associative algebra with unit 1 over the complex numbers C. Deformed Parastatistics Algebras We first recall the definitions of the parastatistics algebras introduced by Green [2] as a generalization of the Bose-Fermi alternative. The proofs of the algebra isomorphisms pB q ≃ U q (osp(1|2n)) [6] and pF q (n) ≃ U q (so(2n + 1)) [7] has shown the equivalence of the paraoscillator definition of the U q (osp(1|2n)) and U q (so(2n + 1)) with their definition in terms of Chevalley generators. In this way a minimal set of relations (a counterpart of the Chevalley-Serre relations) is obtained providing an algebraic (but not linear) basis of the defining ideal of the QUEA at hand. We are interested in a complete description of the defining ideal for the parastatistics algebras (i.e., the counterpart of the Cartan-Weyl definition of the QUEA). This is not only a question of pure academic interest, our motivation came from the study of the Hopf algebraic structure on the parastatistics algebras which to the best of our knowledge was studied only for some particular cases (see [17] for pB q (2)). The complete basis of relations is generated from the known algebraic one and allows for endowing the pF q (n) and pB q (n) with a Hopf algebra structure. We now sketch the procedure of deriving the complete U q -linear basis for the parastatistics algebras. The Lie superalgebra osp(1|2n), denoted as B(0|n) in the Kac table [18] has the same Cartan matrix as the simple B n algebra so(2n+1). The Chevalley-Serre relations of QUEAs U q (so(2n + 1)) and U q (osp(1|2n)) with generators q ±Hi ≡ q ±Hα i and E ±i ≡ E ±αi , corresponding to the simple roots α i , read xy − qyx is the q-commutator, α n is the only odd simple root of osp(1|2n) and a ij = (α i , α j ) is the symmetrized Cartan matrix (same for both cases) given by The essential point in the proof of the isomorphism is the change of basis for the QUEA by choosing the orthogonal system of roots ε i as an alternative of the simple roots. The ladder operators E +εi and E −εi related to the roots ε i are the parastatistics creation and annihilation operators a + i and a − j [6,8] and the change With the help of the inverse change α i = ε i − ε i+1 , i < n, and α n = ε n the corresponding change of basis on the Cartan subalgebra reads By construction q hi q hj = q hj q hi . 
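The symmetrized Cartan matrix a_ij = (α_i, α_j) quoted above follows directly from the stated change of basis (α_i = ε_i − ε_{i+1} for i < n, α_n = ε_n) once the Euclidean inner product is assumed on the orthogonal ε basis. The NumPy sketch below reconstructs it numerically as an independent check; the function names are illustrative, and the normalization (long roots of squared length 2, short root of squared length 1) is the standard B_n convention.

```python
import numpy as np

def simple_roots_B(n):
    """Simple roots of so(2n+1) / osp(1|2n) in the orthogonal basis
    eps_1..eps_n: alpha_i = eps_i - eps_{i+1} for i < n, alpha_n = eps_n."""
    alphas = np.zeros((n, n))
    for i in range(n - 1):
        alphas[i, i] = 1.0
        alphas[i, i + 1] = -1.0
    alphas[n - 1, n - 1] = 1.0
    return alphas

def symmetrized_cartan(alphas):
    """a_ij = (alpha_i, alpha_j) with the Euclidean inner product."""
    return alphas @ alphas.T

if __name__ == "__main__":
    print(symmetrized_cartan(simple_roots_B(4)))
```

For any n this yields 2 on the diagonal except a_nn = 1, with −1 on the first off-diagonals and 0 elsewhere, the matrix shared by so(2n+1) and osp(1|2n).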
The inverse change of basis allows to express the Chevalley ladder generators as The complete basis (over C(q)) of relations defining the deformed parastatistics algebras, i.e., the analog of (1) is given by the following THEOREM 1 The deformed parafermionic pF q (n) (parabosonic pB q (n)) algebra is the associative (super)algebra generated by the creation and annihilation operators a ± i and Cartan generators q ±hi for i = 1, . . . , n subject to the relations together with the analogues of the Serre relations where all the generators a ± i are taken to be even, The deformed parastatistics algebras admit an anti-involution * induced by the anti-involution on the Chevalley basis ( 2) ǫ is the Levi-Civita symbol with ǫ ij = 1 for i < j. The tensor θ i,j;k = −θ j,i;k is vanishing except for i < k < j and i > k > j when it takes values +1 and −1, respectively. To prove the theorem we make use of the R-matrix FRT-formalism for QUEA U q (g) of a simple (super-)Lie algebra g (see [15], [16]), introducing the L-functionals for U q (g) in the form of upper (lower)-triangular matrices L (+) (L (−) ) where L ij , 1 ≤ i, j ≤ n + 1 of the (2n + 1) × (2n + 1) matrix L (+) for the QUEA U q (so(2n + 1)) and U q (osp(1|2n)) is very simple when expressed in terms of the generators a ± where ω = q ji . The relations (6), (7), (8) involving the entries of the minors of L (±) ij (13) for 1 ≤ i, j ≤ n + 1 follow directly from the RLL-relations (12) with the corresponding R-matrix upon restricting the indices from 1 to n + 1. The restriction is possible due to the ice condition [19]. We label the LHS (up to scalars in C(q)) of the homogeneous relations (9,10) by The QUEA U q (gl n ) has a natural inclusion in U q (so(2n + 1)) and U q (osp(1|2n)) being generated by the Chevalley generators E ±i , 1 ≤ i ≤ n − 1 and q ±hi . (associated with the A n−1 subdiagram in the B n Dynkin diagram). The inclusions U q (gl n ) ֒→ pF q (n) and U q (gl n ) ֒→ pB q (n) define an adjoint U q (gl n )-action on pF q (n) and pB q (n) (for i ≤ n − 1) Let L denote the space of states Λ andΛ where by states we mean the cubic polynomials determined from (14) up to multiplication with scalars C(q). The homogeneous relations (9,10) are U q (gl n )-covariant with respect to the adjoint action. More precisely one has the following and thus Λ n−1,n n has to be set to zero in pF q (n) and pB q (n). Hence the whole representation L built through the U q (gl n )-adjoint action on the lowest weight Λ n−1,n n is trivial which proves the homogeneous relations (9,10) for a + i . The ones for a − i follow by conjugation. Proof: The Hopf structure on the elements of L (+) and L (−) compatible with the Drinfeld structure (15) (defined on the Chevalley basis) is given by the coproduct ∆L ± , the counit ǫ(L (±) ) and the antipode S(L (±) ) [15] ∆L (i) For the diagonal elements L (+) ii = q hi the coproduct formula in (20) yields The coproduct of the elements L (+) i n+1 when 1 ≤ i ≤ n has the form where we have used the triangularity of L (+) and L which completes the proof of (16) in view of (13). Then ∆a − i = (∆a + i ) * . (ii) It follows from the definition of the counit in (20). i n+1 ) = cS(a + i ) this is a linear triangular system for S(a + i ) which after normalisation takes the form and the solution of this system yields eq.(18). The antipodes S(a − i ) (19) are obtained through the conjugation, S(a − i ) = (S(a + i )) * . 
This theorem is interesting in its own because it defines the Hopf structure on another basis of generators for QUEA of the algebra so(2n + 1) and the superalgebra osp(1|2n). The oscillator representations The unitary representations π p of the parastatistics algebras pB(n) and pF(n) (eq. 1) with unique vacuum state are indexed by a non-negative integer p [20] (see also [21] and references therein). The representation π p is the lowest weight representation with a unique vacuum state |0 annihilated by all a − i and labelled by the order of parastatistics p where the vacuum representation, i.e., the trivial one corresponds to the counit ǫ of the Hopf parastatistics algebra. In the representation π p (26) of the nondeformed parastatistics algebras (1) where the upper (lower) sign is for parafermions (parabosons). In the representation π p of the deformed parastatistics algebras the quantum analogue of the relation (27) holds which implies the deformed analogue of the π p defining condition (26) The constant ∓[p]/ [2] plays the role of energy of the vacuum as the constant ∓p/2 in (27) for the nondeformed algebras. The algebra of the q-deformed fermionic (bosonic) oscillators F q (n) (B q (n)) arises as a representation π of order p = 1 of the pF q (n)(pB q (n)) We have adopted the notaion π(x) = x and use N i = h i ∓ 1 2 . The analysis [22] of the positivity of the norm for the pB q (n) and pF q (n) representations in the simplest case p = 1 shows that such unitary representations (realized as finite dimensional factor representions) exist only for q being a root of unity. Remark. Unlike the case of pB q (n) and pF q (n), the deformed relations of bosonic and fermionic oscillator algebras (29) do not define Hopf ideals. Green Ansatz The Green ansantz was introduced by Green in the same paper [2] in which he defined parastatistics. We briefly recall it and then bring it in a form convenient for deformation. Let us consider a system with n degrees of freedom quantized in accordance with the parafermi or parabose statistics of order p, i.e., a system of n paraoscilators which is a particular representation π p (of order p) of the parastatistics algebra with trilinear exchange relations (1). The Green ansatz states that the parafermi (parabose) oscillators a + i and a − i can be represented as sums of p fermi (bose) oscillators satisfying quadratic commutation relations of the same type (i.e., fermi for parafermi and bose for parabose) for equal indices (r) and of the opposite type for the different indices The upper (lower) signs stay for the parafermi (parabose) case. The coproduct endows the tensor product of A-modules of the Hopf algebra A with the structure of an A-module. Thus one can use the coproduct for constructing a representation out of simple ones. The simplest representations of the parastatistics algebras are the oscillator representations π (with p = 1). Higher representations π p of parastatistics of order p ≥ 2 arise through the iterated coproduct [9]. Let us denote the (p-fold) iteration of the coproduct by (4 and π denotes the projection from the (deformed) parafermi and parabose algebra onto the (deformed) fermionic F (F q ) and bosonic B (B q ) Fock representation, respectively. 
PROPOSITION 1 The Green ansatz is equivalent to the commutativity of the following diagrams Proof: Using the coproduct of the Theorem 2 for q = 1 and projecting on the Fock representation we can choose the components of the Green ansatz to be the summands in the expressions The check that the Green components a ±(r) i satisfy the bilinear commutation relations (31) and (32) is direct, however one has to keep in mind that the tensor product is Z 2graded in the parabose case and non-graded in the parafermi case, which explains why 4) The definition of ∆ (p) extended with the counit ǫ is consistent with π 0 = ǫ the anomalous commutation relations (32) appear. We emphasize that the grading of the tensor product turns out to be the opposite to the (independent) grading of the bose or fermi algebra which appears on each site (r). The diagrams (34) are commutative if and only if which is exactly the statement of the Green ansatz (30). We are now in a position to extend the Green ansatz to the deformed parafermi pF q (n) and parabose pB q (n) algebras. The simplest representation of pF q (n) and pB q (n) of parastatistics order p = 1, are the deformed fermionic F q and bosonic B q Fock representations, respectively and let π be the projection on these Fock spaces. DEFINITION 2 The system of quadratic exchange relations stemming from the commutativity of the diagrams is the deformed Green ansatz of parastatistics of order p. Here ∆ (p) stays for the pfold non-cocommutative coproduct (33) on the Hopf algebras pF q (n) and pB q (n) (see Theorem 2). Let us show the consistency of the condition (28) with the deformed Green ansatz. The vacuum state |0 (p) of the representation π p is to be identified with the tensor power of the oscillator (p = 1) vacuum, |0 (p) = |0 ⊗p . Evaluating the iterated graded commutator (6) on the vacuum state |0 ⊗p in the oscillator representations π ⊗p we get the defining condition (28) of the deformed π p since π(q hi ) = q Ni∓ 1 2 , which proves the consistency. The Green components a ±(r) i in a pF q (n) or pB q (n) representation π p of parastatistics of order p will be chosen to be Note that the conjugation * acts as reflection on the Green indices (r) More explicitly the Green components look like a +(r) i = k1,...,kr where the upper (lower) triangularity of the matrices L (+) (L (−) ) infers that only the terms subject to the inequalities i ≤ k 1 ≤ . . . ≤ k r ≤ n are non-zero ( respectively n ≥ k 1 ≥ . . . ≥ k p−r ≥ j ). Unlike the non-deformed case each Green component a ±(r) i in the deformed Green ansatz is a sum of many terms resulting from the mapping π ⊗p • ∆ (p) . To present the results in a more concise form we introduce the operators One readily sees that (Q We now summarize the deformed quadratic algebra anomalous commutation rules. For different Green indices the Green components (40) quommute ( [x, y] ±q = xy ± qyx) as follows (we suppose r > s) The states Λ i,n j andΛ i,j n for all admissible i, j (14) arise through the adjoint action of the raising U q (gl n ) generators as seen from the diagram in which the decorated arrows denote the adjoint actions ad Ei 2 is the highest weight of L. One can check that the adjoint U q (gl n )-action does not bring out of L which completes the proof. The U q (gl n )-module L is a smooth deformation of a Schur module associated with the Young diagram λ = (2, 1) [23]. The states Λ i,k j andΛ i,j k in L are labelled with semistandard Young tableaux. Hence the dimension is dim L = (n+1)n(n−1) 3 .
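The closing dimension count, dim L = (n+1)n(n−1)/3 for the Schur module of shape λ = (2,1), can be checked by brute-force enumeration of the semistandard Young tableaux that label the states. The snippet below does this under the standard SSYT conventions (rows weakly increasing, columns strictly increasing); it is an independent sanity check, not code from the paper.

```python
from itertools import product

def ssyt_21_count(n):
    """Count semistandard Young tableaux of shape (2,1) with entries in 1..n:
    top row (a, b) weakly increasing, column (a over c) strictly increasing."""
    return sum(1 for a, b, c in product(range(1, n + 1), repeat=3)
               if a <= b and a < c)

for n in range(2, 9):
    assert ssyt_21_count(n) == (n + 1) * n * (n - 1) // 3
print("dimension formula (n+1)n(n-1)/3 verified for n = 2..8")
```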
2014-10-01T00:00:00.000Z
2004-12-06T00:00:00.000
{ "year": 2004, "sha1": "42b10f0c234b907bdcf62fcc620e2fc9bc831c6e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math-ph/0412016", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e88559fae8a99a019af374ae6f13a302f90c279b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
247769048
pes2o/s2orc
v3-fos-license
Sirtuin 2 Alleviates Chronic Neuropathic Pain by Suppressing Ferroptosis in Rats Neuropathic pain (NP) is chronic and associated with poor effects of general analgesia. It affects patients’ health and quality of life. The apoptotic process of lipid peroxidation caused by iron overload is called ferroptosis, which may be associated with nervous system disease. A recent study has found that sirtuin 2 (SIRT2) achieves a neuroprotective effect by suppressing ferroptosis. Herein, we aimed to examine whether SIRT2 regulated spared nerve injury (SNI)-induced NP by suppressing ferroptosis in rats. A rat model of NP was induced in adult male Sprague-Dawley rats weighing 200–250 g. Mechanical allodynia was observed from the first day after SNI and continued for 14 days. Compared with age-matched control rats, the expression of SIRT2 and ferroportin 1 (FPN1) decreased in the L4-6 spinal cord of the SNI-induced NP rats. In addition, we observed that the levels of both iron and anti-acyl-coenzyme A synthetase long-chain family member 4 (ACSL4) were significantly increased in the spinal cord after SNI, while the expression of glutathione peroxidase 4 (GPX4) was decreased. Furthermore, an intrathecal injection of SIRT2 overexpressed recombinant adenovirus, which upregulated the expression of SIRT2, attenuated mechanical allodynia, enhanced the level of FPN1, inhibited intracellular iron accumulation, and reduced oxidant stress levels, thereby reversing the changes to ACSL4 and GPX4 expression in the SNI rats. This evidence suggests that SIRT2-targeted therapeutics may help relieve the symptoms of chronic NP. INTRODUCTION Neuropathic pain (NP) refers to the pain caused by neuropathy and damage related to hyperalgesia and paresthesia observed in any part of the peripheral or central nervous systems. Trigeminal neuralgia, postherpetic neuralgia, and diabetic peripheral neuropathy are common chronic pain types that are presented in the clinic. NP is a chronic condition affecting patients' general health and quality of life; in addition, it is associated with a high economic burden (Carrasco et al., 2018). Despite recent research advances, the pathophysiological mechanism of NP remains unclear, and available treatments are still not satisfactory. Several preclinical and clinical studies have shown that NP mechanisms are related to ferroptosis (Guo et al., 2021;Jia et al., 2021;Wang et al., 2021) Ferroptosis is different from cell apoptosis in that it highly depends on iron. The cytomembrane is not damaged during apoptosis; however, with the increase in the levels of reactive oxygen species (ROS) during ferroptosis, the integrity of the cell membrane is destroyed (Qiu et al., 2020). There are two types of antioxidant systems that eliminate ROS in the body; these systems are divided into enzyme-and non-enzyme-related types; the enzyme-related types include superoxide dismutase (SOD), catalase, and glutathione peroxidase (GPx) systems; the non-enzyme-related types mainly include the reduction of glutathione (GSH) and vitamin C/E systems. High levels of GSH, GPx, and SOD can downregulate ROS production. In this context, the deficiency of GSH and functional loss of glutathione peroxidase 4 (GPX4, the only member of GPx family that has an antagonistic effect on membrane lipid peroxidation) causes ferroptosis (Brigelius-Flohé and Maiorino, 2013;Yang et al., 2014). Anti-acyl-coenzyme A synthetase long-chain family member 4 (ACSL4) is considered to play a crucial role in ferroptosis (Ding et al., 2021). 
In the absence of ACSL4, lipid peroxidation substrate levels decrease in vivo, which prevents ferroptosis (Yuan et al., 2016a;Doll et al., 2017). Ferroportin 1 (FPN1) is a type of integral membrane protein that exports iron from cells to plasma (Abboud and Haile, 2000;Donovan et al., 2000;McKie et al., 2000). The importance of FPN1 in cellular iron homeostasis and some other diseases such as cancer has attracted more and more attention. Consequently, if the levels of FPN1 are reduced, iron cannot be transported out of the cell, leading to intracellular iron accumulation, which facilitates ferroptosis. Sirtuin is a highly conserved deacetylase. The sirtuin family has seven members, SIRT1-SIRT7 (Kong et al., 2017). They are involved in regulating the oxidative stress response in many diseases (Singh et al., 2018). Sirtuin 2 (SIRT2, the only member of sirtuin family that is found in the cytoplasm) is involved in regulating neurological diseases. Previous studies have found that SIRT2 has a neuroprotective effect in rats with chronic constriction injury (CCI) (Zhang and Chi, 2018). Moreover, SIRT2 regulates chronic NP through the nuclear factor erythroid 2-related factor 2 (NRF2) in rats . A recent study has shown that inhibiting ferroptosis attenuated NP in rats (Guo et al., 2021). Meanwhile, another recent study has demonstrated that SIRT2 alleviated nerve injury by inhibiting p53-mediated ferroptosis in a mouse model of traumatic brain injury (TBI) (Gao et al., 2021). Overall, this evidence suggests that SIRT2 may alleviate NP by inhibiting ferroptosis; consequently, understanding the physiological mechanism of SIRT2 may help inform clinical treatment of NP. Herein, we aimed to examine whether SIRT2 regulated spared nerve injury (SNI)-induced NP by suppressing ferroptosis in rats. Animals About 70 adult male Sprague-Dawley rats weighing 200-250 g were used in the animal model. These rats were purchased from Beijing Charles River Experimental Animal Technology Co., Ltd. The rats were raised in separate cages at approximately 22°C and 50-60% humidity with free access to enough water and food. The animal experiment protocol was reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of China Medical University. Establishment of the Neuropathic Pain Model The rat model of NP was induced by conducting unilateral SNI, as previously described (Decosterd and Woolf, 2000;Guo et al., 2019). In short, rats were anesthetized with 5% isoflurane by a small animal anesthesia machine and maintained with 2-3% isoflurane. The rats were placed in a left decubitus position; approximately 1 cm below the right iliac bone, the three branches of the sciatic nerve were exposed: the tibial nerve, common peroneal nerve, and sural nerve. The thicker nerves, common peroneal nerve, and tibial nerve were ligated and removed. During this process, care was taken to preserve the integrity of the sural nerve. The muscle and skin layers were sutured with 3.0 silk, and the surgical incision was disinfected. In the sham operation, the sciatic nerve and its three branches were exposed but not ligated. Rats subjected to the sham operation were used as the control groups. Administration of Recombinant Adenoviruses Recombinant adenoviruses overexpressing SIRT2 (Ad-SIRT2) were purchased from Shanghai GeneChem Co., Ltd. In brief, rats were anesthetized with 5% isoflurane inhalation and maintained with 2-3% isoflurane. 
The rats were placed prone on the operating table, about 20 µl of 1 × 10 8 pfu of Ad-SIRT2 or Ad-control was injected vertically into the interspace of L5-6 the spinous process with a microinjection syringe. Correct intrathecal injection was identified by the tail-flicking movement when the needle was inserted into the subarachnoid space. For biological testing, the L4-6 lumbar enlargement was removed and tested. Protocol I To detect the changes in pain behavior and the expression of SIRT2 and FPN1 in rat spinal cord after SNI, the rats were divided into the sham group (n = 5) and SNI group (n = 30). SNI was performed in the SNI group, whereas the sham group had their nerves exposed but did not undergo further intervention. Behavioral tests were performed on days 0, 1, 3, 7, 10, and 14 post-SNI. Five rats in the SNI group were killed after PWT was measured at each time point, and rats in the sham group were killed at the last time point. Protein was removed from the L4-6 lumbar enlargement of the spinal cord for biological tests on day 14 post-SNI. In addition, rats in the sham group and SNI group (n = 5 each) were used for immunofluorescence study to detect the localization and changes in SIRT2 and FPN1 in the spinal cord. Protocol II In order to further explore the role of SIRT in regulating neuropathic pain and ferroptosis in rats, the rats were divided into four groups (n = 5 each). SNI was performed in the SNI group, whereas the sham group had their nerves exposed but did not undergo further intervention. One day before the SNI, Ad-SIRT2 or Ad-control was injected intrathecally with a microinjection syringe in the Ad-SIRT2 group or Ad-control group respectively. Behavioral tests were performed on days 0, 1, 3, and 7 post-SNI. Tissue samples were harvested on day 7 post-SNI. Behavioral Assessment Mechanical allodynia was measured by detecting the paw withdrawal threshold (PWT) stimulated by von Frey filaments, as previously described (Chaplan et al., 1994;Guo et al., 2019). Pain thresholds were measured blindly ( Figure 1). Rats were housed separately in a cage with a wire mesh floor for at least 15 min before mechanical allodynia was measured. We stimulated the mid-plantar skin of the right hind paw of the rats with a medium-strength von Frey fiber filament (0.16, 0.2, 0.4, 1.0, 1.4, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0 g). Acute retraction of the rat hind paw is considered to be a positive reaction. If the rat does not have this positive reaction, the filament was increased by one level; if a positive reaction occurs, a smaller level of filament was applied. The lowest level required to cause a positive reaction was recorded as PWT (g). The interval between detections was at least 30 s to eliminate the effects of the previous stimulus. Iron Content Assay Tissue iron assay kits (A039-2-1, Jiancheng Bioengineering Institute, Nanjing, China) were applied to measure the iron content in the spinal cord according to the manufacturer's instructions. The tissues were weighed accurately, and precooled physiological saline was added at a weight (g) to volume (ml) ratio of 1:10. The centrifuge was used at 2,500 rpm for 10 min. The samples were collected according to the manufacturer's instructions, and the absorbent optical density (OD) value of each sample was examined at a wavelength of 520 nm. 
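The von Frey testing rule described under Behavioral Assessment above (start with a medium-strength filament, step up one level after no response, step down after a withdrawal, and record the lowest force that evoked a response) is essentially a staircase procedure. The sketch below is a loose, hypothetical encoding of that rule for illustration only: the starting filament index, the stopping condition, and the `responds` callable are all assumptions standing in for the actual behavioural observation.

```python
# Hypothetical sketch of the von Frey staircase rule; not the authors' code.
FILAMENTS = [0.16, 0.2, 0.4, 1.0, 1.4, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0]  # grams

def paw_withdrawal_threshold(responds, start_index=5):
    """Step up one filament after a negative response, step down after a
    positive one, and return the lowest force (g) that evoked a withdrawal.
    `responds(force)` stands in for the observed behavioural response."""
    i = start_index          # FILAMENTS[5] = 2.0 g, a medium-strength filament
    positives = []
    while 0 <= i < len(FILAMENTS):
        if responds(FILAMENTS[i]):
            positives.append(FILAMENTS[i])
            i -= 1           # try a weaker filament
        else:
            if positives:    # threshold already bracketed: stop (assumption)
                break
            i += 1           # try a stronger filament
    return min(positives) if positives else float("nan")

# Example with a simulated animal that withdraws at forces of 4 g or more:
print(paw_withdrawal_threshold(lambda force: force >= 4.0))  # -> 4.0
```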
Lipid Peroxidation Assays Lipid peroxidation kits were used to measure the levels of malondialdehyde (MDA, A003-1, Jiancheng Bioengineering Institute, Nanjing, China) and SOD (A001-1, Jiancheng Bioengineering Institute, Nanjing, China). The tissues were weighed accurately, treated with a high-speed grinder, and centrifuged for 10 min at 2,500 rpm, and the supernatant was then taken for examination according to the manufacturer's instructions. We detected the absorbent OD value of MDA at a wavelength of 532 nm and that of SOD at a wavelength of 550 nm. Statistical Analysis All data are expressed as the mean ± standard error (mean ± SE), and GraphPad Prism 8 software was used for statistical analysis. Differences between groups were analyzed by one-way or twoway analysis of variance (ANOVA), followed by Tukey's multiple comparison tests. Differences were considered statistically significant at p-values of <0.05. RESULTS The PWT was Significantly Decreased and SIRT2 and FPN1 Levels were Downregulated in SNI Rats The rates of mechanical allodynia were similar across the groups at baseline (Figure 2A). The PWT values of the SNI group began to decrease from day 1 post-SNI; this decrease in values lasted for up to 2 weeks, relative to the values observed in the sham control group and SNI group at baseline. The lowest PWT value was measured on day 7 in the SNI group. The Western blot analysis showed that the expression of SIRT2 in the SNI group significantly decreased on days 10 and 14 compared with that in the sham group ( Figure 2B). In addition, the expression of FPN1 in the SNI group decreased gradually from day 1 to day 14 compared with that in the sham group. A significant reduction in FPN1 expression in the SNI group was observed on day 7, 10, and 14 compared with the values observed in the sham group. This decrease was most significant on day 7 ( Figure 2C). Immunofluorescence was applied to detect the cellular localization of SIRT2 and FPN1. The expression of SIRT2 was significantly decreased in the microglia in the spinal dorsal horn of the SNI rats ( Figure 3A). In addition, the expression of FPN1 decreased in the microglia and neurons of the spinal cord horn in the SNI rats ( Figures 3B,C). SIRT2 Adenovirus Attenuates Mechanical Allodynia, Upregulates FPN1 Expression, and Reduces Iron Accumulation To better understand the mechanism of SIRT2 in NP regulation, the expression of SIRT2 in the spinal cord was upregulated by intrathecal injection of Ad-SIRT2, which significantly alleviated mechanical allodynia. The intrathecal injection of Ad-control did not alleviate mechanical allodynia in the SNI rats ( Figure 4A). The Western blot analysis confirmed that compared with sham operation rats or SNI rats without intrathecal injection, the intrathecal injection of Ad-SIRT2 significantly enhanced the expression level of Ad-SIRT2 in the spinal cord of SNI rats ( Figure 4B). These findings suggest that the overexpression of SIRT2 may relieve NP induced by SNI. Next, we investigated the impact of SIRT2 on FPN1, a transmembrane protein that controls the efflux of iron. The overexpression of SIRT2 markedly increased the expression of FPN1 protein in the spinal cord of the SNI rats ( Figure 4C). These results suggest that SIRT2 may alleviate NP by upregulating the expression of FPN1. Moreover, compared to those in the sham or Ad-control rats, iron concentration in the spinal cord was significantly elevated in the SNI rats. 
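For the group comparisons described under Statistical Analysis above (one- or two-way ANOVA followed by Tukey's multiple comparison test), a minimal one-way version can be run with standard Python statistics libraries. The sketch below uses invented paw-withdrawal values purely to show the call pattern; it is not the authors' analysis, which was performed in GraphPad Prism 8.

```python
# Illustrative one-way ANOVA plus Tukey HSD; all data values are invented.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

pwt = {                      # hypothetical paw-withdrawal thresholds (g)
    "sham":     [14.0, 15.0, 13.5, 14.5, 15.0],
    "SNI":      [2.0, 1.4, 2.0, 1.0, 1.4],
    "Ad-SIRT2": [6.0, 8.0, 6.0, 4.0, 6.0],
}

f_stat, p_value = f_oneway(*pwt.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(pwt.values()))
labels = np.repeat(list(pwt.keys()), [len(v) for v in pwt.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```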
In contrast, the iron content in the Ad-SIRT2 group was markedly reduced ( Figure 4D). These findings suggest that FPN1 may participate in the regulation of intracellular iron content. The overexpression of SIRT2 could reverse the reduction of FPN1 caused by SNI and then allow the efflux of iron accumulated in the cell induced by SNI, thereby reducing the intracellular iron content and alleviating NP. SIRT2 Adenovirus Decreased Lipid Peroxidation Induced by SNI in the Spinal Cord The expression of NRF2 and concentration of MDA in the SNI group were lower and higher than those in the sham group, respectively ( Figures 5A,B). Meanwhile, the concentration of SOD matched that of NRF2 in the SNI rats ( Figure 5C). The overexpression of SIRT2 increased the expression of NRF2 and SOD ( Figures 5A,C) and reduced the concentration of MDA in the spinal cord in the Ad-SIRT2 group ( Figure 5B). These findings suggest that lipid peroxidation induced by SNI can be inhibited by SIRT2 overexpression. SIRT2 Adenovirus Upregulated the Level of GPX4 and Decreased the Level of ACSL4 Western blotting results showed that the expression levels of GPX4 decreased in the SNI group ( Figure 6A), and those of ACSL4 in the spinal cord increased ( Figure 6B). Meanwhile, the intrathecal injection of Ad-SIRT2 upregulated GPX4 expression ( Figure 6A) and downregulated ACSL4 expression in the SNI rats ( Figure 6B). There was no significant change in the expression of GPX4 and ACSL4 in the SNI group and Ad-control group. DISCUSSION The main findings from this study are presented as follows: 1) The expression of SIRT2 and FPN1 decreased in SNI rats. 2) Both SIRT2 and FPN1 expression decreased in the spinal dorsal horn microglia of SNI rats. In addition, the expression of FPN1 decreased in the spinal dorsal horn neurons of SNI rats. 3) An intrathecal injection of Ad-SIRT2 upregulated the expression of SIRT2 and FPN1 in the spinal cord and reduced iron accumulation, alleviating mechanical allodynia in SNI rats. 4) The overexpression of SIRT2 inhibited lipid peroxidation and reversed the changes to ACSL4 and GPX4 levels in SNI rats, thereby suppressing ferroptosis. The number of studies on ferroptosis has increased in recent years. Although the relationship between ferroptosis and NP has been described, some molecules affecting ferroptosis remain subject to research. SIRT2 may regulate oxidative stress (Singh et al., 2018); it is the only sirtuin protein that is mainly found in the cytoplasm although it may also be present in the mitochondria and nucleus (Vaquero et al., 2006). Recently, accumulating evidence has revealed a relationship between SIRT2 and nervous system diseases (Yuan et al., 2016b;Zhang and Chi, 2018;Zhao et al., 2021). SIRT2 can exert a neuroprotective effect on TBI by inhibiting ferroptosis (Gao et al., 2021). This study is the first to propose that SIRT2 may relieve NP by inhibiting ferroptosis in SNI rats. Iron is very important for life function; it participates in many human body biosynthesis processes, such as the synthesis of hemoglobin and myoglobin; it affects respiration and energy metabolism and is closely related to immunity (Steinbicker and Muckenthaler, 2013). However, iron overload may directly produce excess ROS, causing cell death and inducing a series of harmful reactions, such as oxidative stress (Steinbicker and Muckenthaler, 2013;Sheng et al., 2020). Therefore, rehabilitating iron homeostasis may help treat NP. 
FPN1, also called iron regulatory transporter 1 or metal transporter protein 1, is a transmembrane iron export protein. It is widely distributed in various tissues of the body; it is also present on the surface of macrophages, hepatocytes, intestinal enterocytes, and placental cells (Yang et al., 2002;Donovan et al., 2005). FPN1 mRNA is expressed in the small intestine, placenta, spleen, liver, kidney, heart, muscle, lung, and brain (Abboud and Haile, 2000). Thus, the expression of FPN1 is likely related to intracellular iron accumulation. In the present study, the expression of FPN1 decreased significantly on days 7 and 10 following SNI, and intracellular iron content increased in SNI rats, whereas an intrathecal injection of Ad-SIRT2 reversed these changes and alleviated mechanical allodynia induced by SNI. NRF2 plays a key regulatory role in antioxidant stress (Sun et al., 2016). Most antioxidant stress processes in the human body require transcriptional regulation of NRF2 (Kuang et al., 2020). The activation of NRF2 may counteract the oxidative stress caused by ROS. In addition, several studies have shown that the downregulation or inactivation of NRF2 promotes ferroptosis (Sun et al., 2016;Song and Long, 2020). Therefore, NRF2 may be considered an important indicator of oxidative stress. The process of cell death caused by iron overload and lipid peroxidation is called ferroptosis (Tang et al., 2021). Lipid peroxidation is mainly induced by polyunsaturated fatty acids (PUFA). During ferroptosis, lipid peroxidation further produces some substances, including the MDA and 4-hydroxynonenal (Tang et al., 2021). Studies have confirmed that erastin (Dolma et al., 2003) and RSL3 (Yang and Stockwell, 2008) are the main inducers of ferroptosis. Erastin indirectly inhibits GPX4 by suppressing system xc − and consuming GSH, while RSL3 directly inhibits GPX4 to promote apoptosis (Yang and Stockwell, 2016). Thus, GPX4 plays an important role in the mechanism of ferroptosis. Ding and his group have shown that the ubiquitination of GPX4 caused its degradation and activated ferroptosis (Ding et al., 2021). ACSL4 is essential in the synthesis and metabolism of PFUAs because the upregulation of ACSL4 increases the concentration of PUFAs, which creates conditions for the occurrence of ferroptosis (Doll et al., 2017;Tang et al., 2021). The inhibition of ACSL4 prevented brain ischemia in mice, while the upregulation of ACSL4 deteriorated ischemic brain injury resulting from ferroptosis (Cui et al., 2021). ACSL4 promotes neuronal death by promoting lipid peroxidation, thereby triggering ferroptosis (Cui et al., 2021). Therefore, GPX4 and ACSL4 are important regulators of ferroptosis. In fact, the level of GPX4 reduced and the level of ACSL4 increased dramatically in NP rats induced by CCI . This study has some limitations. The PWT in the SNI group, compared with the sham group, was significantly decreased 1 day after SNI, and this decrease persisted for the entire observation period. However, significant reductions in protein levels of SIRT2 in the spinal cord were observed 10 days after SNI. This mismatch in timecourse suggests that downregulated SIRT2 in the spinal cord mainly accounts for later stage NP (chronic NP); other mechanisms might contribute to early stage NP (acute NP). Proteins which function downstream of SIRT2, such as NF-κB p65, NF-κB p53 and NRF2, and upstream regulatory proteins targeting SIRT2, such as AMPK, NADH, AK-1 (SIRT2 inhibitor), can be the scope for future studies. 
SIRT is a highly conserved deacetylase. Whether its regulation of NP is enhanced or blocked by acetylation remains unclear. The present study has demonstrated that SIRT2 can alleviate neuroinflammation in SNI rats; significant downregulation of GPX4 expression and significant upregulation of ACSL4 expression were observed in SNI rats, and these changes were reversed by the intrathecal injection of Ad-SIRT2. The present study has revealed the potential of SIRT2 as a new therapeutic target for alleviating NP. In conclusion, nerve damage causes the downregulation of SIRT2 expression in the spinal cord and decreases the expression of FPN1, which causes iron ions to accumulate in the cell, as they cannot be transported out of the cell. This mechanism causes oxidative stress induced by iron overload, which stimulates ferroptosis. The overexpression of SIRT2 alleviates NP induced by SNI by upregulating the expression level of FPN1 in the spinal cord of rats, reducing lipid peroxidation caused by iron accumulation, and reversing the changes in GPX4 and ACSL4 levels to suppress ferroptosis in the spinal cord. This evidence suggests that SIRT2-targeted therapeutics may help relieve the symptoms of chronic NP in clinical practice. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. ETHICS STATEMENT The animal study was reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of China Medical University. AUTHOR CONTRIBUTIONS XZ and TS designed the study. XZ conducted the experiments, acquired and analyzed data, and wrote the manuscript. MZ, XT, BZ, CS, PW, KW, and LZ acquired the data. The study was supervised by TS, who wrote the manuscript. The final version of the manuscript was approved by all authors. FUNDING The present study was funded by the National Natural Science Foundation of China (No: 81271371, to TS), and the Key Project of Natural Science Foundation of Liaoning Province (grant no. 20180530063 to TS). We would like to thank Editage (www.editage.cn) for English language editing.
2022-03-29T13:58:24.515Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "ec7c54f607882e4e0b8a0bbf4a3653d0fd4abd07", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "ec7c54f607882e4e0b8a0bbf4a3653d0fd4abd07", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
9763353
pes2o/s2orc
v3-fos-license
The I-Band Tully-Fisher Relation and the Hubble Constant The application of the I band Tully--Fisher relation towards determining the Hubble constant is reviewed, with particular attention to the impact of scatter and bias corrections in the relation. A template relation is derived from galaxies in 24 clusters. A subset of 14 clusters with cz ~ 4000 to 9000 km/s is used as an inertial frame to define the velocity zero point of the relation. Twelve galaxies with Cepheid distances are used to establish the absolute magnitude scale of the Tully--Fisher relation, and thus to determine a value of H_not = 70\pm5 km/s/Mpc. Estimates of the peculiar velocities of the Virgo and Fornax clusters are also given. Assuming that the distance to Fornax is 18.2 Mpc (N1365), H_not = 76\pm8 km/s/Mpc. Assuming that Virgo lies at 17.4 Mpc (M100, N4496, N4639), H_not = 67\pm8 km/s/Mpc. Introduction The luminosity-linewidth relation has become the most widely used technique for the determination of redshift-independent galaxy distances. While the relationship between kinematic and photometric parameters of spirals was investigated early on by Roberts (1969Roberts ( , 1975, and Ernst Oepik applied it to derive a distance of 450 kpc to M31 in 1922 -a much closer value to the true one than those around 200-250 kpc commonly adopted until the 1950's -, it is the merit of R.B. Tully and J.R. Fisher (1977) that of forcefully proposing the technique and underscoring its great potential for extragalactic astronomy and cosmology. The Tully-Fisher (TF) relation was first obtained by correlating blue luminosity and the velocity width of the 21cm line, as observed with telescopes that yield the integrated spectrum of the galaxy, rather than a map of the velocity field. Since the technique requires that target galaxies be fairly inclined in order to minimize the amplitude of the corrections needed to recover the disk's rotational speed, it was realized early that uncertain corrections for internal extinction of the stellar flux would impose a serious limitation on the method. Aaronson, Huchra and Mould (1980) adopted a photometric datum shifted to the H band, thus drastically reducing the amplitude of, and the error on internal extinction corrections. In the late 1980's, the advent of CCD devices stimulated the adoption of I and R band as the bandpasses of choice for TF work. Sky background at those wavelengths is relatively low (as compared to H and K bands), detectors have high efficiency and large fields and data acquisition is relatively fast even with small aperture telescopes. The population contributing most of the light at I band is comprised of solar-like stars, several Gyr old. Thus disks are well outlined but of smoother appearance than seen in the bluer parts of the visible spectrum, and their apparent inclinations to the line of sight can be reliably determined. Moreover, processes operating in clusters that may alter the star formation rate in galaxies will have a retarded effect on the red and infrared light of disks; thus, smallerif any -systematic differences are expected between the TF relation of cluster and that of field galaxies, if I or R band photometry is used (Pierce and Tully 1992). HI single-dish linewidths sample effectively the outer regions of disks and yield values close to twice the asymptotic value of the rotational speed. They are also impervious to uncertainties on major axis position angle. 
However, not all sky is reachable by large aperture radio telescopes, and velocity widths have also been extensively derived from single-slit spectra, principally targeted at the nebular Hα, N, and S lines in the red part of the spectrum, and to a lesser extent from Hα Fabry-Perot imaging spectroscopy. HI synthesis imaging has so far had a small impact on this field of work, although the advent of broad-band spectrographs in arrays may make cluster TF work an advantageous proposition. Much observational and theoretical work has been carried out on the TF relation, and a fair review of even the main individual works is impossible within the space constraints of this presentation. Recent CCD TF work includes the surveys of Tully (1988, 1992), Han and Mould (1992), Schommer et al. (1993), Mathewson et al. (1992), Courteau (1992), Willick (1990), Bernstein et al. (1994), Bureau et al. (1996) and Giovanelli et al. (1996, hereafter G96). By mid-1996, the TF distances of a few thousand galaxies had been estimated, principally with the purpose of determining the characteristics of the peculiar velocity field within cz ~ 10,000 km s^-1. The investigation of the peculiar velocity (V_pec) field using the TF technique does not require an absolute distance calibration of the luminosity-width relation. The observed radial velocity of a galaxy at distance d is V_r = H • d + r̂ · [V_pec − V_pec(0)] (1), where V_pec is the peculiar velocity vector and V_pec(0) its value at the observer's location. If the CMB dipole (Kogut et al. 1993) is interpreted as a Doppler shift, then in the CMB reference frame V_pec(0) = 0. TF yields the distance H • d in km s^-1. Thus, a template with accurate slope and zero-point is required, such that any galaxy falling on the template can be considered at rest with respect to the comoving reference frame. A velocity calibration of the TF zero-point is necessary, which is introduced by assuming that the TF relation of one cluster, or the composite of a set of clusters, defines V_pec = 0. Section 4 deals with the process of establishing such a calibration. Once a reliable TF template is in hand, its luminosity scale can be absolutely calibrated by inspecting the location in the TF plane of galaxies with known distances, as obtained via primary indicators. This is equivalent to estimating H •. In this presentation, the physical basis of the TF relation will be reviewed in section 2, in terms of dynamical and scaling arguments. In section 3, the important issue of the scatter about the mean TF relation will be discussed: on it hinge not only the limits of applicability of the technique, but also the impact of bias. Differences in estimates of the latter are largely responsible for wide discrepancies in inferences of both the value of the Hubble constant and the amplitude of peculiar velocities, while the characteristics of the scatter affect the issue of whether inverse formulations of the TF relation are bias-free tools. In section 4, the derivation of a template TF relation, from a sample of clusters spread between cz ~ 1000 and 10,000 km s^-1, will be presented, with emphasis on the treatment of the correction for bias. In sections 5-7, the TF template relation (a) will be calibrated using a set of nearby calibrators with available Cepheid distances and (b) will be used to estimate the Virgo and Fornax clusters' peculiar velocities. These determinations will be used to infer values of H •. Throughout this paper, the parametrization H • = 100h km s^-1 Mpc^-1 will be used.
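Equation (1) makes the TF-based peculiar velocity estimate a simple difference between an observed redshift and a redshift-independent distance. The toy calculation below illustrates this, assuming V_pec(0) = 0 in the CMB frame; the galaxy numbers and the adopted H • value are invented for illustration and are not measurements from this paper.

```python
# Toy illustration of eq. (1) with V_pec(0) = 0: the radial peculiar velocity
# is the observed velocity minus the TF-predicted Hubble velocity H0*d.
def peculiar_velocity(cz_obs_kms, distance_mpc, H0=70.0):
    """Radial peculiar velocity (km/s) from an observed redshift and a
    redshift-independent (e.g. TF) distance; numbers here are illustrative."""
    return cz_obs_kms - H0 * distance_mpc

print(peculiar_velocity(cz_obs_kms=5500.0, distance_mpc=75.0))  # -> +250 km/s
```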
The Physical Basis of the TF Relation

There is no detailed physical understanding of the TF relation, the interpretation of which relies on simple scaling relations and dynamical arguments. In the scenario most recently discussed by Eisenstein and Loeb (1996), consider a galaxy of mass M collapsing from a spherical cloud at epoch t_coll; the turnaround radius R_ta at that epoch is R_ta ∝ (G M t_coll²)^1/3 (2), according to the spherical collapse model of Gunn and Gott (1972) for Λ = 0 cosmologies. The total energy of the object after virialization is E ∝ −M σ² (3), where σ is its velocity dispersion. Assuming that a galaxy results from collapse at a single epoch and not from a succession of mergers, so that the energy acquired at turnaround, E ∝ −G M²/R_ta, is conserved, the combination of (2) and (3) yields M ∝ σ³ t_coll/G (4), so if all galaxies collapse in single events at the same epoch, and mass-to-light ratios don't vary significantly, L ∝ σ³. Variations in the formation history of galaxy systems should be expected to introduce substantial scatter in the relation, as will be discussed in section 3. Invoking an alternative set of scaling relations, consider a pure exponential disk of central disk surface brightness I(0) and scale length r_d; its total luminosity is L_d = 2π I(0) r_d² (5). On the other hand, the mass internal to radius R is M(R) ∝ R V², and if the rotation curve flattens in the outer regions of the disk, as is usually the case for spiral galaxies, the total mass is M_tot ∝ r_d V²_max (6). Combining eqns. (5) and (6), we can write M_tot ∝ V²_max [L_d/I(0)]^1/2 (7). If a dark matter halo is present so that the disk mass is M_d = Γ M_tot, then L_d ∝ Γ² V⁴_max / [I(0) (M_d/L_d)²] (8). When a number of "standard" assumptions are made, i.e. that Γ ∼ const, M_d/L_d ∼ const and I(0) ∼ const (Freeman's law, 1970), L_d ∝ V⁴_max, which resembles the TF relation. In practice, none of the assumptions of constancy for M_d/L_d, Γ and I(0) apply; all those parameters exhibit mild dependencies on V_max (or L_d), reducing the exponent to values n < 4, in a measure that depends on the adopted photometric band. Empirical calibrations of the TF relation yield power law behavior, L ∝ V^n_max (9), with values of n ∼ 3 in the I band. Some workers (e.g. Aaronson et al. 1986; Willick 1990) have found significant departures from a single power law behavior, and quadratic or bilinear TF fits have been adopted. It has been argued that inappropriate extinction corrections (Giovanelli et al. 1995) or samples including a mixture of morphological types (G96) may result in TF departures from linearity. There is however no a priori reason to expect that the TF relation be strictly linear.

The Scatter About the TF Relation

In order to correctly infer the predictive power of the TF method and to adequately estimate the amplitude of its biases, a fair understanding of the nature of the associated scatter and its sources is necessary. The scatter in the TF relation arises from several sources: errors in the measurement of TF parameters, and uncertainties associated with the corrections applied to them, combine with variance in the galactic properties produced by the different formation and evolutionary histories characteristic of each object. The latter is often referred to as the intrinsic contribution to the scatter, which may appear in the form of velocity field distortions, deviations from disk planarity, other gravitational and disk asymmetries, etc.
Several misconceptions regarding the nature of the TF scatter often appear in the literature, for example that it is well represented by a single number. The following points summarize the actual behavior of the scatter.

(i) The total TF scatter cannot be represented by a single value; rather, it varies monotonically with the velocity width (luminosity) of the galaxy, as shown by the solid symbols and the heavy line in Figure 1. Over the range of velocities generally used for TF studies, the r.m.s. scatter in magnitudes varies by a factor of ∼ 2, between 0.5 and 0.25 mag (which translate into distance errors of 25% and 12%, respectively). The use of low width objects is not particularly advantageous (see Hoffman & Salpeter 1996). Rather, it is the high width galaxies that generally yield the highest accuracy.

(ii) Measurement and processing errors are important contributors to, but cannot fully account for, the amplitude of the total error. Average errors on the magnitude and on the widths are shown in Figure 1 as dotted lines (that on the width is multiplied by the TF slope so that it can be expressed in mag). It is clear that the intrinsic scatter makes an important contribution to the total error budget. That contribution varies between ∼ 0.4 mag for low width objects and ∼ 0.2 mag for high width ones. In Figure 1, the total measurement and processing error, ε_m, which is the result of the combination of magnitude (ε_y) and width (|b|ε_x, where b is the TF slope) errors, is indicated by a thin, solid line. |b|ε_x and ε_y are not added in quadrature, because errors in the two coordinates are coupled via inclination corrections.

(iii) Measurement and processing errors on the magnitude can be important drivers of the scatter, especially for luminous, highly inclined galaxies.

Franx and de Zeeuw (1992) have found that the TF scatter poses strong constraints on the elongation of the gravitational potential in the disk plane of spirals. Their conclusion, that the average ellipticity of the potential in the plane of the disk must be smaller than about 0.06, is reinforced by the scatter amplitude revealed in Figure 1. On the basis of 2.2 µm photometry of 18 face-on spirals, Rix and Zaritsky (1995) find that departures from disk axisymmetry may contribute ∼ 0.15 mag to the TF scatter. Their conclusions apply principally to the inner regions of the disk, which are sampled by the K-band observations, and to TF scatter based on optical observations of the rotation field. When the TF relation is based on I band photometry and 21 cm spectroscopy, the Rix & Zaritsky effect becomes more ambiguous in interpretation, as both the light and the HI emission arise in the outer regions of disks. Eisenstein and Loeb (1996) estimate that the scatter resulting from varying formation histories of galaxies should exceed 0.3 mag for a broad class of cosmological scenarios. The relatively low values found for the intrinsic scatter in Figure 1 suggest either an unexpectedly late epoch of galaxy formation or that a secular, regularizing feedback mechanism may be responsible for the tightness of the TF relation, as suggested by Silk (1996). Sandage and collaborators (1994a,b) advocate a large value of the TF scatter, near or larger than 0.7 mag, as an explanation for the high values of the Hubble constant resulting from the use of the TF relation. If the scatter were as large as proposed by that group, large biases would result; their correction would change the zero-point of the TF template relation in the sense that the value of H_0 would be reduced.
While the values of the scatter shown in Figure 1 are not as low as advocated by other groups, it appears unlikely that the scatter may be as large as suggested by Sandage et al. . It has been advocated that the use of an inverse fit for the TF relation -one where the "independent" variable is the magnitude rather than the velocity width -does away with the need to correct for incompleteness bias (e.g. Schechter 1980). The nature of the TF scatter, especially the fact that velocity width errors can be overshadowed by other sources, weakens the case for a bias-less inverse TF relation. TF Template Relation, Bias and Other Corrections The construction of a template relation is the most delicate aspect of any TF program. Providing a large number of objects located at a common distance, and therefore exempt from the vagaries introduced by an a priori unknown peculiar velocity field in a field galaxy sample, clusters of galaxies are favorite targets for the determination of the properties of the TF relation. They are, however, not exempt from the necessity to evaluate and apply important corrections that take into consideration the interplay between scatter and sample completeness, the non-negligible line-of-sight extent of the cluster, corrections for morphological type mix and others. It is moreover necessary to verify whether a cluster's environment alters the photometric and kinematical properties of galaxies, so that the derived TF relation for the cluster may not be applicable to the field. In this section, those issues are discussed principally in light of the results obtained by G96, using a sample of galaxies in 24 clusters at cz between 1,000 and 10,000 km s −1 . Single Cluster, Basket of Clusters One commonly-adopted approach to the determination of a TF template relation is to select a single cluster of galaxies as a reference, thereby equating the universal template with the TF relation defined by its members. There are several problems with this approach. In order for the cluster to constrain well the TF slope, a wide dynamic range in each of the TF parameters is desirable, which advocates for the use of a nearby cluster, in which less luminous galaxies can easily be targeted. Such a cluster would, however, yield a highly uncertain zero point, since even a modest V pec would result in a large magnitude offset. Conversely, a distant cluster, for which the latter problem would be minimized, would provide lax constraint for the TF slope. A single cluster sample is moreover unlikely to contain more than two or three dozens of objects, which would produce a template of very poor statistical definition. TF relations of very low scatter are sometimes found in such cases, which reflect far more the capriciousness of small-number statistics than exceptional data quality. In those cases, low scatter is seldom accompanied by accurate slope and zero-point. The alternative approach is that of using a set of clusters rather than a single one. More distant clusters in the set can effectively provide an estimate of the velocity zero point, while the nearby ones can help constrain the TF slope. The combination of the various data sets from several clusters does however require the simultaneous estimate of their relative V pec , as well as of the bias and other corrections that apply differentially to each. The procedures followed in obtaining such a combination are discussed as follows. 
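The incompleteness bias treated in the following subsection can be previewed with a minimal Monte Carlo sketch. The luminosity function, the "soft" completeness model and all numerical values below are illustrative assumptions, not the G96 prescription: galaxies drawn from an intrinsic TF relation with scatter are subjected to a magnitude-dependent selection, and a naive fit to the surviving sample recovers a shallower slope, a brighter zero point and an underestimated scatter.

import numpy as np

rng = np.random.default_rng(0)

# "Universal" TF relation (UTF) used to generate the mock cluster: M = a + b*(logW - 2.5).
# A 0.7 mag scatter deliberately exaggerates the effect, as in the simulation of Figure 2.
a_true, b_true, scatter = -21.0, -7.7, 0.7

n = 2000
log_w = rng.uniform(2.2, 2.9, n)                        # velocity widths
M = a_true + b_true * (log_w - 2.5) + rng.normal(0.0, scatter, n)
m_app = M + 34.0                                        # apparent magnitudes at a common cluster distance

# "Soft" completeness edge: the inclusion probability falls from 1 to 0 over ~1 mag
m_edge = 13.7
p_keep = 1.0 / (1.0 + np.exp((m_app - m_edge) / 0.3))
kept = rng.random(n) < p_keep

# Naive least-squares fit (magnitude on log width) to the incomplete sample
b_fit, a_fit = np.polyfit(log_w[kept] - 2.5, m_app[kept] - 34.0, 1)
resid = m_app[kept] - 34.0 - (a_fit + b_fit * (log_w[kept] - 2.5))

print(f"true slope {b_true:.2f}, fitted slope {b_fit:.2f}")   # fitted slope is shallower
print(f"true zero point {a_true:.2f}, fitted {a_fit:.2f}")    # fitted zero point is brighter
print(f"true scatter {scatter:.2f}, fitted {np.std(resid):.2f}")  # scatter is underestimated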
The Incompleteness Bias Much has been written on the ways of estimating the incompleteness bias, which results from the interplay between the degree of completeness of a sample and the amplitude of the TF scatter relation (e.g. Bottinelli et al. 1986;Sandage 1994a;Sandage et al. 1995;Teerikorpi 1993 and refs. therein). The nature of the bias is illustrated in the simulation shown in Figure 2, where a 'cluster sample' is extracted from a population with a luminosity function (LF) as shown by the smooth curve plotted along the vertical axis. The extracted sample is however incomplete, as the histogram of magnitudes superimposed on the LF shows. Incompleteness exhibits a "soft" edge, indicated by the progressive departure of the histogram from the LF. For a flux-limited sample, the histogram would of course track the LF for magnitudes brighter than the limit, then level suddenly to zero. The TF law assigned to the simulated data (UTF) is represented by a dashed line, with a scatter twice as large as that illustrated in Figure 1, averaging about 0.7 mag. The magnified scatter serves to dramatize the bias. A heavy solid line connects filled symbols which identify mean values of the magnitude within bins of velocity width. Incompleteness affects the TF relation derived from the simulated sample in several important ways: (i) the derived slope is less steep than that of the UTF; (ii) the zero-point is brighter than that of the UTF; (iii) the scatter is underestimated with respect to that of the UTF. Correction recipes for the incompleteness bias have been given in a variety of studies, most recently those by Teerikorpi (1993 and refs. therein), Willick (1994), Sandage et al. (1995) and G96. Unlike most other works, which propose analytical or graphic solutions to the problem, G96 give a solution obtained via Monte Carlo simulations, which applies to the case when the TF relation is obtained by fitting magnitude on the logarithm of the velocity width, taking into simultaneous account errors on both coordinates. This case, referred to as the bivariate fit, differs fron the one usually referred to as the direct fit, in which only errors in magnitude are taken into account. The computation of bias corrections does of course depend on the chosen type of fit. The incompleteness bias generally increases with increasing distance to the cluster. It is principally driven by the amplitude of the TF scatter. An important source of uncertainty in its application is related to the shape of the LF of the galaxy population from which the cluster sample is extracted. G96 estimate that the effect of this uncertainty on that of the TF zero-point is ∼ 0.03 mag. TF Dependence on Morphology and Cluster Environment Early work by Roberts (1978), de Vaucouleurs et al. (1982) and others showed that in the blue part of the visible spectrum there are significant differences in TF behavior among galaxies of different morphological types. Aaronson and Mould (1983) found such differences to be imperceptible when H band photometry is used. At I band, G96 find a weak but significant difference in the zero point between early-and late-type spirals, Sa and Sab galaxies being less luminous, at a given width, than Sbc and Sc galaxies. Because cluster samples often include a sizable fraction of early-type spirals, an adjustment for the mixing needs to be made. In Figure 3, the residuals from the I band TF template relation are shown as a function of projected radial distance from the center of clusters. 
In panels (b) and (c) galaxies are separated between the high and low richness halves of the 24 cluster set of G96, while in panel (a) the total cluster sample is displayed at once. There is no significant evidence of differential TF behavior among spirals on the basis of their projected cluster location.

Fig. 3. TF residuals versus projected radial distance of galaxies from cluster centers. In panel (a), all 555 galaxies in 24 clusters are shown, while in panels (b) and (c) galaxies are separated according to whether they lie in the richer or poorer half of the cluster set.

The I Band TF Template Relation

Using 14 clusters with cz > 4,000 km s^-1 to define the zero-point and the whole set of 24 clusters to constrain the slope, G96 obtain a template relation based on the data shown in Figure 4, after the necessary corrections for bias, morphology, cluster extent and peculiar velocity have been applied. The plot includes 555 galaxies. The best bivariate fit is M + 5 log h = −21.00 ± 0.02 − (7.67 ± 0.11)(log W − 2.5) (10). The statistical errors on the coefficients are small, but as we shall see they are not the principal contributors to the uncertainty of the peculiar velocities inferred using the template.

Absolute Calibration of the TF Relation

The final step in the calibration of the TF relation towards deriving an estimate of H_0 relies on the availability of primary distances for galaxies with TF parameters of adequate quality. Such galaxies would preferably be fairly inclined, isolated late spirals, exhibiting no evidence of perturbation in either their light distribution or velocity field. Table 1 lists galaxies with Cepheid distances that can potentially be used as TF calibrators. Names in various catalogues are listed in cols. (1) to (3), and the morphological type in the RC3 system ('2' stands for Sab, '3' for Sb, '5' for Sc) is given in col. (4). The adopted inclination of the disk is in col. (5), averaged from various estimates in the literature; the raw, total I band apparent magnitude I_tot is listed in col. (6), and the assumed galactic extinction and internal extinction corrections, A_MW and A_int, are respectively in cols. (7) and (8). The morphological type correction β_type, necessary to make the TF behavior of galaxies earlier than Sbc consistent with that of Sbc and Sc objects, is given in col. (9), following the recipe of G96. The adopted distance modulus DM and its estimated error are in col. (10), as given in several recent publications and as presented in this conference. The absolute magnitude M_I, obtained by applying the corrections of cols. (7)-(9) to I_tot and subtracting the distance modulus, is in col. (11). Its uncertainty, in brackets on the last two significant figures of M_I, has been computed assuming a magnitude measurement uncertainty of 0.1 mag and propagating errors on the extinction correction. The adopted value of the logarithm of the velocity width W and its estimated error are listed in col. (12). The width corresponds to approximate measurements at 50% of the HI peak flux intensity, after correction for resolution, turbulent motions and inclination effects. The raw I band magnitudes are from Pierce and Tully (1992) and Pierce (1988), except for NGC 1365, for which we use the average of the Mathewson et al. (1992) and of the Bureau et al. (1996) values. The spectroscopic data are from a wide variety of sources, too long to itemize here. The velocity widths of large, nearby objects pose accuracy problems of a singular nature. Some of the data are old and unavailable in digital form.
In several of the objects, clear distortions are discernible: warps, tidal perturbations and other asymmetries reduce the reliability for TF calibration (e.g. M31, M33 and M81). Other systems are dwarf irregulars (e.g. NGC 2366 and NGC 3109), and thus ill-suited for use with the G96 template, which is principally constructed using luminous spirals. Two galaxies (M100 and M101) have very low inclinations and require large and uncertain corrections to the observed widths. One system (NGC 4496) is not only nearly face-on, but a second galaxy is seen superimposed on its disk, making the extraction of photometric parameters highly uncertain. In practice, our estimates of TF parameters are based on a somewhat subjective synthesis of a large amount of heterogeneous material, sometimes involving the measurement of spectra on paper copies of published data figures. Our assignment of error bars to the data is thus reflective of this unorthodox method of parameter derivation, rather than of the original accuracy of the data.

The Value of H_0

In Figure 5, the data of all galaxies in Table 1 are plotted over a grid of renditions of the template relation, derived as discussed in section 4.4 and shifted according to values of the Hubble parameter ranging between h = 0.5 and h = 1.0. Keeping the slope of the TF template fixed, we compute the value of h that yields the χ² minimization of the residuals for the set of calibrators. The statistical uncertainty of the derivation, based on the errors assigned to the velocity widths and absolute magnitudes of the calibrators, is small. If we use all the galaxies listed in Table 1, the resulting best fit value is h = 0.74 ± 0.02. However, several of the objects listed in Table 1 are very ill-suited for calibration, as discussed above. After exclusion of N4496 (interacting pair with companion superimposed), N2366 and N3109 (irregular, low luminosity types), which are identified by unfilled symbols in Figure 5, the best fit yields h ≃ 0.70 (cf. equation 14 below). The formal errors in the statistical analysis given above, of ∼ 0.06 mag, arise purely from the errors on the TF parameters of the calibrators, which are likely to be underestimated. Very large galaxies, for example, pose difficulties in the estimation of magnitudes because of problems associated with the determination of the sky brightness level, and the assumed 0.1 mag measurement error may in some cases be too small. Corrections to the widths, necessary to bring them to an internally compatible system, as well as commensurable with the data used to obtain the template relation, are also quite uncertain when the data originate in as heterogeneous a set of data sources as in this case. A realistic appraisal of the error arising from measurements and corrections applied to the TF parameters may be significantly larger than the statistical estimates given above. More conservatively than in Table 1, we shall assume that uncertainties in the calibrators' TF parameters contribute Δm_cal ∼ 0.10 mag to the error budget. The error Δm_cal does not include the possibility of systematic bias in the zero point of the Cepheid period-luminosity relation and in the determination of the coefficients of the TF template. We deal with the latter uncertainty next. The statistical accuracy of the TF zero point, deriving from the scatter of the 555 data points about the template shown in Figure 4, is ∼ 0.02 mag, as given in section 4.4.
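A minimal sketch of the χ² step just described; the calibrator entries below are placeholders rather than the contents of Table 1, and the template zero point is treated as being expressed in h = 1 (H_0 = 100) units. With the slope held fixed, the only free parameter is a magnitude offset between the Cepheid-based absolute magnitudes and the template prediction, and that offset maps directly onto h.

import numpy as np

# Template coefficients of eq. (10), slope held fixed, zero point in h = 1 units
A0, SLOPE = -21.00, -7.67

# (log W, Cepheid-based absolute magnitude M_I, magnitude uncertainty) -- made-up values,
# NOT the entries of Table 1.
calibrators = np.array([
    (2.45, -21.20, 0.15),
    (2.60, -22.05, 0.12),
    (2.70, -22.95, 0.18),
    (2.55, -21.85, 0.20),
])

log_w, M_ceph, sigma = calibrators.T
M_template = A0 + SLOPE * (log_w - 2.5)            # template prediction in h = 1 units

# Weighted mean offset Delta = <M_ceph - M_template> = 5 log10(h); this is the
# chi-squared minimum of sum(((M_ceph - M_template - Delta)/sigma)**2).
w = 1.0 / sigma**2
delta = np.sum(w * (M_ceph - M_template)) / np.sum(w)
delta_err = np.sqrt(1.0 / np.sum(w))

h = 10.0 ** (delta / 5.0)
h_err = h * np.log(10.0) / 5.0 * delta_err          # error propagation on h = 10**(delta/5)
print(f"h = {h:.2f} +/- {h_err:.2f}  (statistical only)")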
In addition to this statistical accuracy, two other sources of uncertainty need to be considered: (a) that resulting from the choice of an LF for the computation of the incompleteness bias corrections, which in section 4.2 was estimated to contribute 0.03 mag, and (b) that resulting from the assumption that the set of clusters at cz > 4000 km s^-1 defines a system with ⟨V_pec⟩ = 0. Part (b) can be estimated as follows. For N randomly selected clusters, located at a mean systemic velocity ⟨cz⟩ and characterized by an r.m.s. one-dimensional peculiar velocity ⟨V²_pec⟩^1/2, the magnitude error on the TF zero point obtained by assuming that the set of clusters yields ⟨V_pec⟩ = 0 is Δm ≃ 2.17 ⟨V²_pec⟩^1/2 / (⟨cz⟩ N^1/2). The value of ⟨V²_pec⟩^1/2 is quite uncertain. Taking a value intermediate between that suggested by the data of G96 and that required by flat CDM cosmological models (see discussions by Bahcall and Oh 1996 and Moscardini et al. 1996), and allowing for a measure of correlation in cluster locations, we obtain Δm ∼ 0.06 mag. The combined uncertainty on the TF zero point is thus Δm_zp = (0.06² + 0.03²)^1/2 ≃ 0.07 mag. Combining now in quadrature Δm_cal ∼ 0.10 mag, Δm_zp ∼ 0.07 mag and an arbitrarily assumed uncertainty on the Cepheid P-L relation of 0.1 mag, we obtain a total uncertainty of 0.16 mag, corresponding to H_0 = 70 ± 5 km s^-1 Mpc^-1 (14).

The Peculiar Velocities of the Virgo and Fornax Clusters

The peculiar velocities of Virgo and Fornax are of particular interest since several of the most recent Cepheid distance determinations correspond to galaxies thought to be members of those clusters. While the route to estimating a value of H_0 via the TF template relation followed in the preceding section is more accurate, it is useful to have at hand estimates of the V_pec of the two clusters, which in combination with their systemic velocities and distances can yield local estimates of H_0. Note that the V_pec given here are referred to the same TF template relation used in the preceding section; thus the inferred values of H_0 given below are not fully independent of that given by (14). Fornax is part of the cluster set of G96. Its V_pec, based on a TF sample of 26 galaxies thought to be cluster members and measured in the CMB reference frame, is −26 ± 71 km s^-1. By expanding the sample to include 13 additional objects thought to be in the cluster periphery, one obtains V_pec = −103 ± 66 km s^-1. Both values assume a systemic recession velocity for the cluster of 1321 ± 45 km s^-1, measured again in the CMB reference frame. If Fornax lies at the distance of NGC 1365, of 18.2 ± 1.5 Mpc (Silbermann et al. 1996), and if its Hubble velocity is H_0 d = 1385 km s^-1, then H_0 = 76 ± 8 km s^-1 Mpc^-1. Virgo is not one of the clusters studied by G96. We have, however, used the I band data of Pierce and Tully (1992) and of Pierce (1988), and selected a sample of galaxies reputed to be members of the 'A' clump, according to the criteria of Binggeli et al. (1985, 1993). Clump A is centered at the position of M87 and has a systemic velocity of 1378 ± 35 km s^-1 (equivalent to a heliocentric velocity of 1050 km s^-1, as found by Binggeli et al. 1993). Twenty-three galaxies are used to obtain V_pec = 204 ± 65 km s^-1, following procedures analogous to those described in G96. If Virgo lies at the mean distance of M100, N4496 and N4639 of 17.4 ± 0.6 Mpc, then H_0 = 67 ± 8 km s^-1 Mpc^-1. It is a pleasure to thank Dr. Michael Pierce for generously providing data in advance of publication, help in obtaining the latest Cepheid distances and enjoyable conversations, as well as Dr. M. S.
Roberts for providing insights on the velocity field of M31. This research was partially supported by NSF grant AST94-20505.
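The headline numbers quoted above can be cross-checked with a few lines of arithmetic; only quantities stated in the text are used, and the sketch is illustrative rather than part of the original analysis.

import numpy as np

def mag_err_to_frac(dm):
    """Fractional distance (and hence H0) error corresponding to a magnitude error dm."""
    return np.log(10.0) / 5.0 * dm          # ~0.46 * dm for small errors

# Global calibration: a 0.16 mag total uncertainty around H0 = 70 km/s/Mpc
print("global:", 70.0, "+/-", round(70.0 * mag_err_to_frac(0.16), 1), "km/s/Mpc")

# Fornax: quoted Hubble velocity 1385 km/s and distance 18.2 Mpc (NGC 1365)
print("Fornax:", round(1385.0 / 18.2, 1), "km/s/Mpc")

# Virgo: systemic cz(CMB) = 1378 km/s, V_pec = +204 km/s, distance 17.4 Mpc
print("Virgo :", round((1378.0 - 204.0) / 17.4, 1), "km/s/Mpc")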
2014-10-01T00:00:00.000Z
1996-10-15T00:00:00.000
{ "year": 1996, "sha1": "c5529317151111d0c02c2decaad9c527026445da", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e4e77a4bc49956e38b3834da17c9df0a7bbdabd0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
215618473
pes2o/s2orc
v3-fos-license
Treatment with Modified Extracts of the Microalga Planktochlorella nurekis Attenuates the Development of Stress-Induced Senescence in Human Skin Cells. More recently, we have proposed a safe non-vector approach to modifying the biochemical profiles of the microalga Planktochlorella nurekis and obtained twelve clones with improved content of lipids and selected pigments and B vitamins and antioxidant activity compared to unaffected cells. In the present study, the biological activity of water and ethanolic extracts of modified clones is investigated in the context of their applications in the cosmetic industry and regenerative medicine. Extract-mediated effects on cell cycle progression, proliferation, migration, mitogenic response, apoptosis induction, and oxidative and nitrosative stress promotion were analyzed in normal human fibroblasts and keratinocytes in vitro. Microalgal extracts did not promote cell proliferation and were relatively non-cytotoxic when short-term treatment was considered. Long-term stimulation with selected microalgal extracts attenuated the development of oxidative stress-induced senescence in skin cells that, at least in part, was correlated with nitric oxide signaling and increased niacin and biotin levels compared to an unmodified microalgal clone. We postulate that selected microalgal extracts of Planktochlorella nurekis can be considered to be used in skin anti-aging therapy. WE1 to WE12) are shown. Control clone water extract is denoted as CWE. ERK1/2 activity was measured using Muse ® Cell Analyzer and Muse ® MAPK Activation Dual Detection Kit. Supplementary Figure 2. Extract-mediated effects on cell migration. Scratch wound healing assay. BJ cells (a) and HEK cells (b) were treated with 100 µg/ml water extracts for up to 72 h after wounding. Cell migration was evaluated under an inverted microscope. Representative microphotographs are shown. Scale bars 500 μm, objective 4x. Supplementary Figure 3. Pro-senescence activity of microalgal extracts in BJ cells (a, 100 µg/ml water extracts and 100 µg/ml ethanolic extracts) and HEK cells (b, 100 µg/ml water extracts and 1 µg/ml ethanolic extracts). The effects of water extracts (WE, twelve modified clones from WE1 to WE12) and ethanolic extracts (EE, twelve modified clones from EE1 to EE12) are shown. Control clone water extract is denoted as CWE and control clone ethanolic extract is denoted as CEE. Senescence-associated β-galactosidase activity. Representative microphotographs are shown. Scale bars 100 μm, objective 20x. To emphasize extract action, a red horizontal line is added. Bars indicate SD, n = 3, *** p < 0.001, ** p < 0.01, * p < 0.05 compared to the control (ANOVA and Dunnett's a posteriori test). Supplementary Figure 4. Extract-mediated changes in BJ (a) and HEK cell number (b) after 2 h stimulation with hydrogen peroxide and subsequent cell culture for 7 days in the presence of water (left) and ethanolic extracts (right), and the effect of 24 h treatment with water (left) and ethanolic (right) extracts on BJ (c) and HEK cell number (d) after subsequent cell culture for 7 days without microalgal extracts. Cell number was analyzed using TC10 ™ automated cell counter. To emphasize extract action, a red horizontal line is added. The effects of water extracts (WE, twelve modified clones from WE1 to WE12, left) and ethanolic extracts (EE, twelve modified clones from EE1 to EE12, right) are shown. Control clone water extract is denoted as CWE and control clone ethanolic extract is denoted as CEE. 
Bars indicate SD, n = 3. (a, b) Cell number after 2 h stimulation with hydrogen peroxide is considered as 100%. *** p < 0.001, * p < 0.05 compared to hydrogen peroxide treatment (ANOVA and Dunnett's a posteriori test). Supplementary Figure 5. Preliminary analysis of anticancer activity of water (a, 100 µg/ml) and ethanolic (b, 100 µg/ml) extracts against MDA-MB-231 breast cancer, U-2 OS osteosarcoma and U-251 MG glioblastoma cells. BJ fibroblasts were used as control normal human cells. The effects of water extracts (WE, twelve modified clones from WE1 to WE12) and ethanolic extracts (EE, twelve modified clones from EE1 to EE12) are shown. Control clone water extract is denoted as CWE and control clone ethanolic extract is denoted as CEE. To emphasize extract action, a red horizontal line is added. Extract-mediated changes in metabolic activity (MTT assay) of cancer and normal cells were investigated. Metabolic activity at standard growth conditions
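The statistical treatment described in the figure legends above (group values expressed relative to a control taken as 100%, one-way ANOVA followed by Dunnett's a posteriori test against the control) can be sketched along the following lines. The replicate counts are invented placeholders, not data from the study, and the Dunnett step assumes SciPy 1.11 or later.

import numpy as np
from scipy import stats

# Placeholder replicate cell counts (n = 3 per group), NOT data from the study.
control = np.array([10500.0, 9800.0, 10200.0])            # e.g. hydrogen peroxide-treated control
extracts = {
    "WE1": np.array([11900.0, 12400.0, 12100.0]),
    "WE2": np.array([10100.0, 9900.0, 10400.0]),
    "WE3": np.array([13200.0, 12800.0, 13500.0]),
}

# Express every group as a percentage of the control mean (control = 100%)
ref = control.mean()
for name, vals in extracts.items():
    pct = 100.0 * vals / ref
    print(f"{name}: {pct.mean():.1f} +/- {pct.std(ddof=1):.1f} %")

# One-way ANOVA across all groups, then Dunnett's test of each extract against the control
groups = list(extracts.values())
f_stat, p_anova = stats.f_oneway(control, *groups)
print(f"ANOVA p = {p_anova:.4f}")

dunnett_res = stats.dunnett(*groups, control=control)      # requires SciPy >= 1.11
for name, p in zip(extracts, dunnett_res.pvalue):
    print(f"{name} vs control: p = {p:.4f}")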
2020-04-09T09:15:21.086Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "5124a60e35fcd79eb53027d5940a8abb5189449b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/12/4/1005/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "460049485a6ff723c7a80dd9cb21d458fccb57c3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
53867833
pes2o/s2orc
v3-fos-license
Specifications of the Quality of Granulated Activated Charcoal Used in Water Systems Treatment in Hemodialysis Centers in Brazil This book provides an overview of technical aspects in treatment of hemodialysis patients. Authors have contributed their most interesting findings in dealing with hemodialysis from the aspect of the tools and techniques used.Each chapter has been thoroughly revised and updated so the are acquainted with the latest data and observations in the area, where several aspects are to be considered. The book is comprehensive and not limited to a partial discussion of hemodialysis. To accomplish this we are pleased to have been able to summarize state of the art knowledge in each chapter of the book. Introduction According to the Brazilian Nephrology Society, in 2009, Brazil had approximately 600 Hemodialysis clinical centers. Currently, more than 77,000 Brazilians, who resort to specialized Hemodialysis services, are exposed to a volume of water of 18,000 to 36,000 L/year (Silva et al., 1996). Therefore, if the water used in these centers during the service is not duly treated, many chemical, toxic and bacteriological contaminants may be transferred to the patients, eliciting adverse effects, sometimes lethal (Buchanan et al., 1982;Arvanitidou et al., 2000). The water used in these Hemodialysis centers come mainly from the public supply, and it is known that in many water reservoirs that are aimed for the population supply and consume, as the ones located in the Brazilian states like São Paulo, Paraná and Pernambuco, there is a propensity towards cyanobacteria toxic growing (Mendonça et al., 1999). The first report of human death from hepatotoxins of cyanobacteria, more specifically the microcystins -LR, -YR and -AR, happened in intravenous exposition in a Hemodialysis clinic in the city of Caruaru, Pernambuco, in 1996 (Carmichael et al., 2001). In 2001, another incident involving Hemodialysis water contamination by microcystins was reported. Toxic growths of cyanobacteria with preponderance of the Microcystis sp. and Anabaena sp. were identified in the Funil reservoir and in the Guandu River, both used as water resources for the public supply in the city of Rio de Janeiro, RJ, Brazil. Thus, from that episode, microcystins concentration of the order of 4 g/L and 0.32 g/L, respectively were detected in the water and in the activated charcoal filter, used by the water treatment station of the Hemodialysis Center of the Clementino Fraga Filho Hospital of the Federal University of Rio de Janeiro, which is supplied by the water reservoir of Funil and the Guandu river. As a consequence of this incident, a total of 44 uremic patients who had received care in this Hemodialysis Center, were believed to be exposed to the microcystins found in the water used in the preparation of the dialysate, being until the present time, monitored as to evaluate a possible chronic exposition to those toxins (Soares et al., 2006). Considering thus the need to define the minimal criteria for the functioning and assessment of the public and private services which perform dialysis in outpatients, bearers of chronic Table 1. Assessed Activated Charcoals Characterization of activated charcoals For the characterization in liquid phase by means of use of methylene blue solution and in gaseous phase by N 2 , the charcoals were marked in mortar and pestle, grinded in a sieve with nominal opening mesh of 0.075mm. 
Whereas the ones used in the experiments in fixed bed column were grinded in sieves with nominal opening mesh of 0.50mm and 0.35mm, being the obtained material in this last sieve collected for the referred experiment. The charcoals were then dried in greenhouse at 150 ºC by 3 hours minimally, and then cooled in desiccators with silica gel until reached room temperature for its posterior use. Adsorption in gaseous phase: Specific surface area and distribution of pores size As the analysis procedures, all the ACs were degasified in vacuum at 150 ºC for 24 h. A software in interface with a gas analyzer (model NOVA-1200, Quantachrome Corp.) was used in the measures of specific surface area and distribution of pores size. Equilibrium data obtained from the isothermal of adsorption /desorption of gaseous nitrogen at 196 ºC were used to determine the specific surface area by means of application of the method developed by Brunauer, Emmet and Teller (BET). The micropores area was obtained by the "t-plot" method. The total volume of the pores was determined converting in liquid volume the nitrogen aborted volume in the saturation point (P/P 0 ~ 0.99). The micropores and primary micropores were calculated from the intercept point of the t-plot linear region after the saturation of the micropores and primary micropores respectively. The volume of the mesopores was calculated from the difference between the total volume of the pores and the volume of the micropores, also, the volume of the secondary micropores was calculated by the difference between the volume of the mesopores and the volume of the primary micropores. The distribution of the size of the pores in the micropore and mesopore regions in the ACs was obtained from the methods developed by Horvath-Kawazoe (HK) and Barrett-Joyner-Halenda (BJH), respectively (Webb & Orr, 1997). Batch experiments An isotherm study of adsorption equilibrium is important as to describe an interaction between adsorbate and adsorbents, and it is critical in the optimization of these materials for both studies in continuous or in batch process. Information regarding the distribution of the sizes of the AC pores were obtained from comparison of the adsorption characteristic for three different adsorbates: methylene blue and [D-Leucine 1 ]microcystin-LR ([D-Leu 1 ]MCYST-LR). The choice of these molecules is justified by their properties, forms and polarities, being the first commonly used for foretelling the capacity of the activated charcoal in adsorbing micropollutants in industrial effluents (Hsieh & Teng, 2000;Lussier et al., 1994), besides providing an estimate of the volumes in secondary micropores + mesopores, as foretold in previous works by Albuquerque Junior et al. (2005). The trihydrate methylene blue (99.95%, Merck, EUA) analytical grade was used in the solution preparation as to determine the Methylene Blue Index (MBI). The adsorption experiments were made in accordance with the norm JIS (Japanese Industrial Standard), JIS-K 1474JIS-K (1991. The methylene blue concentrations in the liquid phase after the equilibrium were determined indirectly from molecular adsorption spectrophotometry (spectrophotometer GBC UV/VIS -911 A) in the wave length of 665 nm. The experimental data were adjusted to the Freundlich's model, and the quantity of the methylene blue adsorbed by the charcoals (q) was calculated according to the equation 1. 
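Equation 1 is referred to but not reproduced in the text; in the sketch below the usual mass-balance form q = (C0 − Ce)·V/m is assumed, together with the linearized Freundlich fit described for the methylene blue data. A Langmuir form, used later for the toxin, is included for comparison. All concentrations and masses are invented placeholders, not measurements from the study.

import numpy as np

# Batch adsorption sketch. Equation 1 is assumed to be the usual mass balance
#   q = (C0 - Ce) * V / m   [mg of adsorbate per g of charcoal]
# The Freundlich isotherm q = K * Ce**(1/n) is fitted in its linearized form
#   log q = log K + (1/n) log Ce.

def adsorbed_amount(c0, ce, volume_l, mass_g):
    """Adsorbed amount q in mg/g from initial and equilibrium concentrations in mg/L."""
    return (c0 - ce) * volume_l / mass_g

# Placeholder equilibrium data
c0 = 1200.0                                  # initial methylene blue concentration, mg/L
ce = np.array([0.2, 0.8, 3.0, 12.0, 45.0])   # equilibrium concentrations, mg/L
mass = np.array([1.0, 0.8, 0.6, 0.4, 0.25])  # charcoal mass per flask, g
q = adsorbed_amount(c0, ce, volume_l=0.1, mass_g=mass)

# Linearized Freundlich fit: slope = 1/n, intercept = log10(K)
slope, intercept = np.polyfit(np.log10(ce), np.log10(q), 1)
K, inv_n = 10.0 ** intercept, slope
print(f"Freundlich: K = {K:.1f}, 1/n = {inv_n:.2f}")

# Methylene Blue Index: q evaluated at a residual concentration of 0.24 mg/L (JIS K 1474)
print(f"MBI ~ {K * 0.24 ** inv_n:.1f} mg/g")

# Langmuir isotherm q = qmax*b*Ce/(1 + b*Ce), linearized as Ce/q = Ce/qmax + 1/(qmax*b)
lin_slope, lin_int = np.polyfit(ce, ce / q, 1)
qmax, b = 1.0 / lin_slope, lin_slope / lin_int
print(f"Langmuir: qmax = {qmax:.1f} mg/g, b = {b:.3f} L/mg")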
For the foretelling of the capacity and removal de microcystine in water by activated charcoal, an aqueous extract of [D-Leu 1 ]MCYST-LR of concentration around to 6000 g/L, prepared in drinkable water exempt of chloride, was used as adsorbate. This toxin has been already identified in growths in Lagoa dos Patos, Rio Grande do Sul, Brazil, (coordinates 31° 9'56.93"S 51°25'51.45"W) by Matthiensen et al. (2001) and in Lagoa de Jacarepaguá, Rio de Janeiro, Brasil (22°59'10.69"S 43°23'57.95"W) by Oliveira et al. (2004). The preparation of the respective extract, as in the quantification model of the referred toxin by High Performance Liquid Chromatography (HPLC) was fully discussed by Kuroda et al. (2005). The experimental data were adjusted to the Langmuir's model and the quantity of adsorbed toxin by the activated charcoals was measured from the equation 1. Experiments in fixed bed columns: Adsorption of [D-leucine 1 ]MCYST-LR In order to evaluate the microcystin dynamics in experiments in fixed bed column, an acrylic column was used of 2.5 cm de d i and height of up to 25 cm, adjusted by means of a piston. A distributor plate with five orifices of 1mm openings, made of stainless steel, was inserted in the base of this column. A net of 60 μm was used below the distributor and in the entrance of the piston through which the fluid that passed the column flowed, avoiding that possible loss of adsorbent by reflux (Fig. 1). The charcoal bed was continuously percolated by a solution having [D-Leu 1 ]MCYST-LR of varied concentration, from which effluent samples were intermittently collected until the bed saturation. The toxin concentration in the liquid phase was determined by HPLC and the evaluation of the continuous process of adsorption was made by means of breakthrough curves sizing, which is the relation between the ration of the initial concentration by the toxin concentration in the column effluent (C/C 0 ) vs time (t). Textual characteristics of activated charcoals The activated charcoals are formed by an interconnected net of pores, which according to IUPAC (International Union of Pure and Applied Chemistry) may be classified according to its diameters in different categories: macropores (di > 50 nm), mesopores (2 nm < di < 50 nm), primary micropores (di < 0,8 nm) and secondary micropores(0,8 nm < di < 2nm) (Everett, 1988). The activated charcoal porosity may be estimated from the form of the isotherm of Nitrogen adsorption according to the Brunauer, Deming, Deming and Teller (BDDT) (Gregg & Sing, 1982) classification. Therefore, from that classification, it was observed that the AC-B-F, AC has presented an isotherm characteristic of the type I, typical of micropores material, with relatively small external surface area. Nevertheless, loops characteristic from hysteresis in partial pressures (P/P 0 ) above 0,4 in other ACs, which indicate that these charcoals must present a small band of pores in the secondary micropore region and mesopores. Thus, these other charcoals have presented a combination between the isotherms I and II, the same observed for charcoals taken as reference ( fig. 2). The adsorption isotherms could also be analyzed from the hysteresis loop format according to the standard classification, which contains 4 types: H1-H4 (Girgis & Hendawy, 2002). The hysteresis loops appear in multilayer regions of isothermal of physiosorption, and are considered as being associated to capillary condensation. 
The hysteresis loop style found for the analyzed charcoals was of the H4 type, which was originated from the influence, even small, of the existing secondary micropores and mesopores in these materials. The charcoal sample AC-A-D has presented an unusual behavior from the obtained isothermal data of N 2 adsorption/desorption, as it has shown a very low Nitrogen adsorpted volume, around 0.27 cm 3 /g, when compared with those obtained by other sampled charcoals, which then Nitrogen adsorpted volumes were above 248 cm 3 /g ( Figure 2d). By the adsorption behavior of that charcoal, it is assumed that the same is an activated charcoal and, thus, could not be used in the water treatment station in the Hemodialysis Centers A (HC Campinas) for the removal of contaminants. The used ACs in the water treatment station of the Hemodialysis Center A have presented BET area (S BET ) between 764.9 m 2 /g e 1,017.4 m 2 /g, whereas the ones used in the Hemodialysis Center B have presented S BET between 632.8 m 2 /g and 789.5 m 2 /g ( A first observation of these results would imply in the choice of charcoals of any of the two centers, however, according to Quinlivan et al. (2005), the BET area (SBET) is a poor indicator of the adsorption capacity of activated charcoals, hence, the sampled charcoals quality cannot be assessed only by their BET (SBET) area data, so, other effectiveness parameters must be taken into consideration in order to choose a charcoal for a determined aim. Thus, beyond that parameter, the secondary micropores and mesopores volumetric fractions must also be considered in the choice of an activated charcoal for the use in water treatment, as these pores are significantly important in the adsorption of organic micropollutants like the microcystins by the activated charcoals according to Donati et al. (1994) and Pendleton et al. (2001). According to Donati et al. (1994), there is no correlation between the adsorption capacity of activated charcoals by microcystins and the BET area, the micropores volume and the number of Iodine. However, the mesopores presence in these adsorbents may favor the adsorption of the cyanobacterium toxin. Moreover, Pendleton et al. (2001), have shown that besides the mesopores volume, the adsorption capacity of that toxin, was also influenced by the volumetric fraction correspondent to the secondary micropores. In these charcoals, the secondary micropores and mesopores volumetric fractions have exceeded 37% e 28%, respectively, favoring the adsorption of microcystin in 200.0 g/mg. The activated charcoals of the water treatment in the hemodialysis Center (A and B) have presented secondary micropores and mesopores volumetric portions of 0.15 to 0.38 cm 3 /g (44.1 to 69.1%) and of 0.01 to 0.09 cm 3 /g (2.6 to 16.4%), respectively. These charcoals have presented secondary micropores portions higher than those found by Pendleton et al. (2001), however, they have also presented very low mesopores volumetric fractions, around 8% in average, which characterizes microporous charcoals. Nevertheless, those commercial activated charcoals obtained as reference by the authors, have presented secondary micropores volumetric fractions and mesopores of 59.2 to 63.6% and of 34.5 to 41.8%, respectively, which can be characterized as a good indicator for the AC for the treatment water use. A visualization of the pores size distribution can be obtained from the distribution function calculated by the HK and BJH (Figure 3) methods. 
The (A), (B) and (C) charcoals have shown a distribution function HK/BJH, with dW/dLo of 0.04; 0.07 and 0.06 cm 3 /nm/g consisting of pores average diameters of 3.81, 3.80 and 3.28 nm, respectively. Due to the low adsorption capacity of the D charcoal, it was not possible to obtain its distribution of the pores sizes. For the E charcoal, the distribution function dW/dLo has presented two peaks with 0.02 and 0.01 cm 3 /nm/g a 2.53 and 4.15 nm in pores average diameter; while the AC-B-F has presented dW/dLo ≈ 0.06 cm 3 /nm/g to an average pore diameter of 2.7 nm. In contrast with these results, the standards AC-R-G and AC-R-H have shown a distribution function a little higher: (AC-R-G) dW/dLo ≈ 0.3 cm 3 /nm/g with 1.61 nm of pore average diameter, and AC-R-H has presented two peaks with dW/dLo ≈ 0.17 and 0.08 cm 3 /nm/g with pore average diameter of approximately 1.71 and 3.00 nm, respectively. be taken as referential of the latter. Thus, taken the approximate dimensions of the MCYST-LR it is presumed that the [D-Leu 1 ]MCYST-LR is adsorbed in the activated charcoal pores with an internal diameter close to 2-3 nm. The distribution of the pores size ( Figure 3) estimated from the HK/BJK has allowed to observe that the sampled charcoals have a pores average diameter between 2.5-4 nm, which would make these charcoals potential adsorbents for their use in the removal of the water microcystin, but the low volume of the pores observed in this 2.5-4 nm band is smaller than 0.07 cm 3 /g, which makes these charcoals adsorbents of low adsorption capacity for such aim. Adsorption on liquid phase 3.2.1 Batch experiments Knowing the adsorption equilibrium represents the first step in investigating the possibilities for using an adsorbent in a determined separation process. Besides, additional information regarding the distribution of the sizes of the activated charcoal pores can be obtained by comparing the adsorption characteristics of adsorbate by taking those obtained from adsorption data in gaseous phase. Methylene blue (MB) has been widely used as an adsorbate to estimate the adsorption capacity of CA from continuous in fixed bed or batch experiments (Kumar & Sivanesan, 2006;Macedo et al., 2006;Zhang et al., 2006). Studies on MB adsorption equilibrium in activated charcoal can provide important information about the selectivity of these adsorbents regarding this molecule, given that the MB is accessible to the charcoal pores with an inner diameter greater than 1.5 nm, being important for the characterization of the secondary micropores (0.8 <di <2.0 nm) and mesopores (2 nm < di < 50 nm) mainly, besides being a model compost used for predicting the adsorption of organic contaminants found in industrial effluents such as textile dye and microcystines (Barton, 1987;Baçaoui et al., 2001). According to the JIS norm (1994), the Methylene Blue Index is operationally defined as the adsorbed amount of that molecule when its residual concentration in liquid phase after equilibrium is of 0.24 mg/L. This adsorption capacity was obtained from an equilibrium isotherm where the experimental data were adjusted to Freundlich's adsorption model. 
The correlation coefficients for linear regression from the adjustment of the experimental data to the respective linearized model along with its empirical parameters, K and 1/n, besides the charcoals' Methylene Blue Index (MBI), are displayed on Standard international and national norms regarding the quality of activated charcoal for water treatment bring specifications primarily concerning the minimal iodine adsorption limit (iodine number), 600 mg/g (ABNT -EB-2133, 1991 and AWWA -B600-05) and 900 mg/g (ASTM -D 4607-96), making no mention to the minimal limits of methylene blue adsorption. It is known that iodine is a small molecule of approximately 0.8 nm and being thus associated with micropore adsorption. Therefore, the iodine number cannot be the sole specification of quality standard adopted for an activated charcoal destined for water treatment, because it is a well-applied parameter for microporous charcoals. In Marroco, the activated charcoals destined for water treatment have in their specifications the minimal limits established for the methylene blue adsorption capacity of 180.0mg/g (Baçaoui et al., 2001). Thus, if we consider this minimal limit as a specification for the activated charcoal sampled in both the Hemodialysis Centers, we would see that none of these charcoals could be used for water treatment, because they are specifically microporous activated charcoals. can be explained by their low mesopore volumes, around 0.04 cm 3 /g in average, when compared to those of other charcoals taken as standards, AC-R-G and AC-R-H (0.20-0.39 cm 3 /g). Experiments in fixed bed column In the dinamic adsorption dynamic of the [D-Leu 1 ]MCYST-LR in a fixed bed of activated charcoal, the latter removed that toxin from a solution until the saturation of the bed, wherein the performance of this continuous adsorption process is affected, among other parameters, by the concentration of the entry solution (Gomes et al., 2001) and by operational conditionals such as particle size and fluid flow in the column (Inglezakis et al., 2002). According to Sag & Aktay (2001) and Barros et al. (2001), the solutions which are more concentrated saturate the bed faster, and small particles diminish the resistance to mass transfer. It is also known that an increase in the flow of the solution on the bed reduced its adsorption capacity, increasing the lenght of the mass transfer zone, because the phenomenon of mass transfer necessary for adsorption of [D-Leu 1 ]MCYST-LR might not be able to continue in higher mass transfer rates, brought forth by an increase in the flow of the fluid (Watson, 1999). Besides, the bed flow can be deviated from the ideal because of the flow channeling due to insufficient material wettability. Those problems can reduce the adsorption process efficiency, because it is important that the column operates as close as possible to the flow/runoff conditions as the one observed for a tubular-type "plug flow" reactor. Hence, in order to correctely plan and operate the continuous adsorption process of that toxin in a AC fixed bed, as the one found in many water treatment stations of hemodialysis centers, it is necessary to study the kinetics and adsorption equilibrium for that toxin, besides knowing its adsorption dynamics in a fixed bed through the sizing of the breakthrough curves. 
The first step in a adsorption project for [D-Leu 1 ]MCYST-LR in fixed bed column is the establishment of optimal conditions for preparing the process, that is, those that minimize the diffusional resistances both in the film and in the interior of adsorbent particles, thus favoring a greater interaction between the charcoals' active sites, accessible to adsorption, and the adsorbate. In this experiment, the previous studies of methylene blue adsorption in activated charcoal fixed bed have shown flows between approximately 8 and 12 mL/min and average particle size of 0.425 mm would be conditions that could reduce to minimum those mass transfer resistances without significant increase in charge loss on the bed, and thus be taken as a starting point for the planning of a continuous adsorption process for that toxin. Under these conditions, one should expect that the breakthrough curves come closer to a perfect degree, which is desirable (Mccabe et al., 2001). Thus, taking such conditions like particle size, flow and height of the bed, new experiments with solutions [D-Leu 1 ]MCYST-LR were carried out aiming to evaluate the removal of this toxin from the treated water using fixed bed columns of AC-A-B charcoals and AC-B-F, besides the AC taken as standards AC-R-G and AC-R-H (Table 4). Aiming to recover the solution initially containing [D-Leu 1 ]MCYST-LR kept in contact in a continuous fashion with an activated charcoal bed initially free from it, the concentration of this toxin in the exit of the bed was monitored, in function of the time, producing curves as shown in Figure 5 denominated breakthrough curves. Initially, the adsorbent layer located in the inferior part of the bed adsorbs the solution quickly and effectively thus reducing the concentration of that toxin in the exit of the column. The effluent on the top of the bed is practically free of solute. In this situation, the inferior layer of the bed is practically saturated and the adsorption occurs in an adsorption zone (Zad) that is relativelly narrow and the concentration changes rapidly. Continuing with the solution flow, the Adsorption Zone flow (Zad) moves in an ascendant way like a wave, at an ordinarily much slower rate than the linear fluid velocity through the bed. In a certain time, practically half of the bed is saturated with the solute, but the concentration in the effluent is still substantially zero. When the adsorption zone (Zad) has reached the top of the bed, and the solute concentration in the effluent increases sensibly, the system is said to initiate rupture, so the solute concentration in the effluent increases rapidly when the adsorption zone (Zad) passes through the bed bottom and the solute concentration is substantially equated to the concentration value in the initial solution (C 0 ). Based on the data obtained from the Breakthrough curves, it was possible to estimate the needed time for the C/C 0 = 0.05 ratio to be reached in the exit of the column, which denominated breakthrough time ( Comparing the results obtained from the activated charcoals AC-R-G and AC-R-H with the Bernezeau (1994) data (50 μg microcystin-LR/12 mg Powdered Activated Charcoal -PAC), the sugarcane bagasse based-activated charcoal showed an adsorption capacity 1.3 times bigger than the PAC studied by the aforementioned researcher, remembering that in our studies we have used an extract with four microcystins. 
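A minimal sketch of how a measured breakthrough curve of the kind discussed above can be reduced to a breakthrough time (C/C0 = 0.05), a bed capacity and an indication of the width of the mass transfer zone. The curve and operating values below are synthetic placeholders, not one of the measured curves.

import numpy as np

# Synthetic breakthrough curve C/C0 versus time for a fixed bed (placeholder values)
t = np.linspace(0.0, 600.0, 121)                        # minutes
c_ratio = 1.0 / (1.0 + np.exp(-(t - 300.0) / 40.0))     # S-shaped C/C0 curve

# Operating conditions (illustrative, of the order of those quoted in the text)
q_flow = 10.0       # mL/min
c0 = 100.0          # inlet toxin concentration, ug/L
bed_mass = 5.0      # g of charcoal in the bed

# Breakthrough time: first time at which C/C0 reaches 0.05
t_b = t[np.argmax(c_ratio >= 0.05)]
print(f"breakthrough time (C/C0 = 0.05): {t_b:.0f} min")

# Bed capacity up to exhaustion: integrate the removed mass (C0 - C) over time
removed_ug = np.trapz((1.0 - c_ratio) * c0, t) * q_flow / 1000.0   # ug of toxin retained
print(f"adsorbed amount: {removed_ug / bed_mass:.2f} ug of toxin per g of charcoal")

# A steep (narrow) S means a short mass transfer zone; its extent can be gauged from
# the interval between C/C0 = 0.05 and C/C0 = 0.95.
t_e = t[np.argmax(c_ratio >= 0.95)]
print(f"mass transfer zone spans roughly {t_e - t_b:.0f} min of operation")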
According to Zambon (2002), qualitative information regarding the resistance to mass transfer can be obtained from the form of breakthrough. If the mass transfer zone is narrow, the breakthrough curve will be more inclined, whereas if the zone is wider, the curve will be more elongated. From Figure 6, it is possible to see that such zones differed from charcoal to charcoal according to the operational parameters established in each experiment such as initial concentration, flow and bed height. Hence, in the case of the Breakthrough curves obtained for the AC-R-G, AC-A-B and AC-R-H beds, a narrower S-shaped curved indicating a narrower mass transfer zone and, therefore, with mass transfer resistance to be considered. Nonetheless, the breakthrough curve obtained from the bed packaged with charcoal AC-B-F (Hemodialysis Center B) displayed a more elongated S-shaped curved, as seen on Figure 5, indicating a broader mass transfer zone and, therefore, little resistance to mass transfer. The supposition of the adsorption zone provides the basis for a method for a much simpler project, which makes it possible to scale up the experiments from a small laboratory scale. However, besides the Mass Transfer Zone (MTZ) concept, the estimative of parameters such as average residence time and dimensionless variance can help in the design of an adsorption column (activated charcoal filter). Conclusion Granulated Activated charcoals are largely used in adsorption continuous processes in fixed bed for the water treatment in Hemodialysis Centers. The quality of the water used in these centers is intrinsically associated with the quality of these charcoals. As in Brazil, there are no norms, and neither, entities which control the quality of these adsorbents; it is each day more important to find the correct parameters indicators of quality. Activated charcoals used in two water treatment stations in hemodialysis centers were sampled as to assess their qualities. The Specific Surface Area (S BET ) of the charcoal sampled in the water treatment stations in the two centers have presented values between 600 and 1000 m 2 /g approximately. S BET values around 800 m 2 /g are usually taken as reference value, for those who acquire such adsorbents, mainly those destined to the water treatment. From that principle, it is verified that the sampled charcoals of both Hemodialysis Centers are in accordance with this parameter, however, it was also observed, that the same charcoals have mesopores volume of (0.01-0.09 cm 3 /g), which are significantly below from those specified in the literature (0.40 cm 3 /g), thus, the charcoals from both centers must be rejected for such aim or used with caution. The blue methylene adsorption was proposed as adsorption capacity measure of activated charcoals, as this molecule has been used as to estimate the charcoal mesopores volume. It was observed that the activated charcoals which have presented higher mesopores+ secondary micropores volume have more adsorption capacity to that molecule, and, hence, may be taken as a model to estimate the referred pores region, which is important for the charcoals used in the water treatment. Regarding the first estimate of the adsorption capacity in batch of the activated charcoal in both hemodialysis centers for the cyanobacterium toxin [D-Leu 1 ]MCYST-LR, low removal efficiencies for that toxin were observed (close to 4%), compared with the activated charcoal from sugarcane bagasse (close 99%). 
The adsorption behavior of the charcoals sampled in the hemodialysis centers is associated with their low mesopore volumes, smaller than 0.04 cm3/g, in contrast with the sugarcane bagasse based-activated charcoal, whose mesopore volume is around 0.40 cm3/g. Preliminary studies on the dynamic adsorption of [D-Leu1]MCYST-LR in fixed beds of the activated charcoals from the hemodialysis centers have shown low adsorption capacities, between 3.67 and 5.31 g/mg. For the coconut shell and the sugarcane bagasse based-activated charcoals, adsorption capacities of 2.07 and 5.43 g/mg, respectively, were observed for the corresponding beds.
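For readers who wish to reproduce this kind of estimate, the sketch below illustrates the usual mass-balance calculation of a bed's adsorption capacity, obtained by integrating (C0 - C) over the breakthrough curve and dividing by the mass of charcoal. The flow rate, bed mass and concentrations used are hypothetical assumptions for illustration only; only the 8-12 mL/min flow range mentioned above is taken from the text, and the capacity values reported in this study are not reproduced here.

import numpy as np

# Hypothetical breakthrough data for one bed (not measurements from this study):
# time (min), outlet toxin concentration C (ug/mL), feed concentration C0,
# volumetric flow rate Q and mass of charcoal packed in the column.
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])   # min
c_out = np.array([0.0, 0.0, 0.2, 0.8, 2.0, 4.0, 4.9])        # ug/mL
c0 = 5.0          # ug/mL, feed concentration (assumed)
q_flow = 10.0     # mL/min, within the 8-12 mL/min range cited above
m_bed = 500.0     # mg of activated charcoal in the bed (assumed)

# Mass of toxin retained by the bed = Q * integral of (C0 - C) dt
# (trapezoidal rule written out explicitly to keep the sketch self-contained).
removed = c0 - c_out
retained_ug = q_flow * np.sum(np.diff(t) * (removed[:-1] + removed[1:]) / 2.0)

capacity_ug_per_mg = retained_ug / m_bed
print(f"Estimated bed adsorption capacity: {capacity_ug_per_mg:.2f} ug of toxin per mg of charcoal")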
2018-11-19T09:48:01.751Z
2011-12-07T00:00:00.000
{ "year": 2011, "sha1": "fb045164f71ee4064ee09d2d7bc5db100218101d", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/24617", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "4f4b6ddbaebe19f67c4fb5515bb5194251822c0b", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
207796593
pes2o/s2orc
v3-fos-license
Advances in the use of GABAergic interneurons for the treatment of epilepsy Abbreviations: 5HT3aR: serotonin Receptor 3a; AED: Anti-Epileptic Drug; CB: Calbindin; CGE: Caudal Ganglionic Eminence; CNS: Central Nervous System; CR: Calretinin; DCX: Doublecortin; ESC: Embryonic Stem Cells; fEPSP: field Excitatory Postsynaptic Potential; GABA: Gamma-Aminobutyric Acid; GFAP: Glial Fibrillary Acidic Protein; HFO: High Frequency Oscillation; IN: Inhibitory Neuron or InterNeuron; iPSC: induced Pluripotent Stem Cells; KA: Kainic Acid; LGE: Lateral Ganglionic Eminence; MES: Maximal Electroshock; MGE: Medial Ganglionic Eminence; NeuN: Neuronal Nuclear Antigen; nNOS: Neuronal Nitric Oxide Synthase; NPY: Neuropeptide Y; NSC: Neural Stem Cells; PTZ: Pentylenetetrazole; PV: Parvalbumin; SE: Status Epilepticus; SRS: Spontaneous Recurrent Seizures; SST: Somatostatin; TLE: Temporal Lobe Epilepsy; VIP: Vasoactive Intestinal Peptide Introduction Epilepsy is a severe neurological disease affecting more than 70 million people worldwide. It is characterized by unpredictable and abnormal electrical discharges resulting in recurrent seizures [1]. Epilepsy has been causally linked to an altered excitatory/inhibitory neuronal balance. Based on this theory, GABAergic interneurons (INs) are regarded as the primary inhibitory neurons whose failure of activity results in hyperactivity of the epileptic circuitry [2]. Anti-epileptic drugs (AEDs) are the forefront treatment for seizure control [3]. About one third of patients with epilepsy, however, suffer from intractable seizures that are unresponsive to AEDs [4]. Furthermore, patients who respond to AEDs typically face serious adverse side effects [5]. Surgical removal of affected brain tissue or implantation of neurostimulator devices are effective options only for a fraction of drug-refractory patients [6]. Furthermore, current treatments of epilepsy often enhance some of the deficits in cognitive functions typically associated with epilepsy, leading to poor compliance [7]. Thus there is an urgent need to find alternative strategies, especially for AED-refractory patients. Basic research on stem cells, their cellular and molecular properties, and their potential use in cell-based therapeutic approaches has greatly advanced in recent years. Intense efforts have been placed on the establishment of protocols to efficiently direct the differentiation of stem and progenitor cells towards several lineages. Among these, GABAergic INs, the main inhibitory neuronal type in the mammalian forebrain, have been obtained with good enrichment both from mouse and human stem cells, and hold promise for their potential use in the clinic [8]. However, several problems and drawbacks arise, among which the highly heterogeneous neurochemical and networking properties of INs, the difficulty to fully recapitulate their developmental trajectories, and their fate upon transplant in the long term. In spite of these difficulties, attempts to use inhibitory GABAergic INs in cell-based therapies are showing some efficacy in attenuating the epileptic phenotype in selected animal models. Here we review the most relevant literature, focusing on how to obtain and characterize genuine inhibitory INs and how these can be grafted in animal models (and one day possibly in humans) of epileptic conditions, showing promising results that should foster future research in this area.
Types, origin and functions of forebrain INs The mammalian cerebral cortex typically has a six-layered organization which includes two major classes of neurons: the excitatory pyramidal cells that project to cortical and subcortical targets, and the inhibitory non-pyramidal cells, the cortical INs. The hippocampus, which has a similar embryonic derivation, is equally composed of pyramidal projection neurons and inhibitory neurons, although with a different layering organization and composition [9]. While pyramidal cells project their axons to distant regions within or outside the cortex, and predominantly transmit signals using glutamate as neurotransmitter, INs have short, locally extending axons which connect to nearby neurons and use primarily γ-aminobutyric acid (GABA) as neurotransmitter. Within the mammalian cortex GABAergic INs represent 10-20% of the total cortical neuronal population, depending on the region examined, and are the only source of GABA [9]. Adult INs are highly heterogeneous from several points of view, and consequently several subclasses have been identified, whose classification relies on morphologic, histologic, molecular and neurochemical features or electrophysiological properties, or combinations of these criteria [10,11]. In the rodent cerebral cortex and hippocampus, more than twenty GABAergic IN subtypes have been documented. The expression of three calcium-binding proteins, calretinin (CR), calbindin (CB) and parvalbumin (PV), or of other markers such as somatostatin (SST), neuropeptide Y (NPY), cholecystokinin (CCK), serotonin receptor 3a (5HT3aR), vasoactive intestinal peptide (VIP), reelin, and neuronal nitric oxide synthase (NOS), coupled to their distinctive morphology, connectivity, synaptic properties, and intrinsic firing properties, are features that help to differentiate the various IN subtypes [12,13]. Recent in-depth molecular analyses using single-cell RNAseq confirm and extend the extraordinary variety of INs [11,14,15], further indicating that such diversity might be specified as early as the progenitor state [16]. The origin and development of forebrain INs have been thoroughly studied in the rodent brain: they derive from progenitors that reside in basal regions of the embryonic brain, namely the medial ganglionic eminence (MGE) (for cortex and hippocampus), the lateral ganglionic eminence (LGE) (for olfactory bulb INs) and the caudal ganglionic eminence (CGE) [17][18][19][20][21]. Cortical and hippocampal INs derive mainly from the MGE and the CGE, and the majority of them express SST or PV and originate from Nkx2.1+ progenitors in the MGE and the preoptic area [22][23][24][25]. Following exit from the cell cycle, the immature INs migrate into the cortical plate following stereotyped trajectories and timing, reaching the cortical and hippocampal primordia where they mature and integrate into local circuits [12,26]. IN migration follows several tangential routes and is critical for the correct establishment and integration of INs during embryonic and early postnatal life in both humans and rodents [21,27]. In humans, in addition to interneuron genesis occurring within the GEs [28], IN progenitors have been detected in cortical progenitor zones, particularly in the anterior brain regions [29,30]. Intracortical interneuron genesis may start earlier than the time when tangential migration of INs from the MGE occurs [29].
As a general feature, GABAergic INs play a fundamental role in controlling the neural circuitry and network activity of the central nervous system (CNS), forming numerous connections with local pyramidal neurons or other INs and participating in the construction of functional networks. Cortical INs exert an essential inhibitory function within local cortical circuitry, modulate and coordinate signal transmission from pyramidal cells, synchronize oscillatory activities and participate in computational functions [31,32]. Each distinct IN subtype shows peculiar morphological and electrophysiological properties [32][33][34] and thus it is generally believed that each subtype of IN carries out specialized tasks in the control of information flow within the cortex. However, the specific function of each IN subtype, especially in the context of the global functioning of neuronal networks and their role in epileptic seizures, is still incompletely defined. The involvement of INs and reduced GABAergic inhibition in epilepsy Epilepsy reflects abnormal and massive hypersynchronous discharges from large assemblies of neurons in cortical networks [35,36]. Temporal lobe epilepsy (TLE) is the most common form of epilepsy, characterized by the presence of epileptic foci in the limbic system, an initial precipitating injury event (which induces the initial brain damage), the presence of a latent period and hippocampal sclerosis, which leads to a reorganization of the neuronal network [37]. The etiology of epilepsy is complex and heterogeneous, comprising structural, genetic, infectious, metabolic, immune and/or other unknown causes [38]. Epileptic seizures might be caused by a variety of factors [39][40][41][42]. An altered excitation-inhibition balance is a proposed common pathophysiological feature of multiple seizure and neuropsychiatric disorders, including epilepsy, schizophrenia and autism [43][44][45][46][47][48][49][50][51]. Altered IN genesis, migration and function may logically contribute to modify the excitatory-inhibitory balance, and specifically a reduced inhibition is the expected consequence of a reduced number of INs or of hypoactive INs within cortical and hippocampal circuits. Failure or interference of IN migration leads to an abnormal distribution of INs and alterations of the inhibitory control of the postnatal brain, but also to deprivation of the neurotrophic role of GABA in early development, resulting in epilepsies or other neurological disorders [26,[52][53][54][55][56]. One specific subtype of INs has been critically implicated in epilepsy, i.e. the fast-spiking PV-expressing INs [57][58][59]. However, the involvement of other subtypes might be underestimated, since they are less studied. Indeed, the role of INs and of the altered excitation/inhibition balance in epilepsy is far from being completely understood. Recent evidence argues for the context-dependent, possibly excitatory roles that GABAergic cells play in epileptic circuitry. The dynamic, context-dependent role of GABAergic INs in seizures requires further investigation of their functions at the single cell and circuitry level [60]. The causes of IN dysfunctions associated with epilepsy range from genetic mutations, altered migration, altered lamination and fine histoanatomical organization, to connectivity and plasticity defects. Indirectly, metabolic and hormonal conditions may also influence the function of INs, and last but not least the exposure of the developing brain to stress or to substances with teratological activity [61].
Dysfunction of cortical INs has been implicated in a wide range of neurological and cognitive/behavioral diseases [33,[62][63][64], leading to the term "interneuronopathy", now widely used. The variety of neurological and cognitive disturbances linked to altered IN development and functions is not surprising, considering also the infantile transitional stage in the maturation of the GABAergic system, including the migration of INs in the cerebral cortex, the development of GABAergic synapses and dendritic arbors, as well as the changing expression, composition, and function of GABA-A receptors, resulting in the switch from depolarizing to hyperpolarizing responses. A broad set of evidence implicates IN dysfunctions in epilepsy [65,66]; these include: 1. Human neurodevelopmental conditions comprising epilepsy, associated with mutations in IN-relevant genes; such is the case of the West and the Dravet syndromes. Mutations in over 25 epilepsy-associated genes in humans have been proposed to promote over-excitability, some of them acting by reducing inhibition [66]. In tissue from patients with TLE, hippocampal seizure foci were characterized by the loss of SST-expressing INs [67,68]. 2. The association between a spectrum of early-onset epilepsies and neurodevelopmental or other neurological disorders that manifest as interneuronopathies in animal models. Such is the case of mice with mutations in Arx, in Scn1a and in Apc. Linked to this, the reduced IN population in rodent models of genetic (Arx mutant) and induced infantile spasms [65]. On the same line of evidence, the genetic mutation of Scn1a disrupts IN function and physiology and recapitulates the Dravet syndrome, characterized by infantile-onset drug-resistant epilepsy. In mice, the impairment of the activity of PV+ and SST+ INs leads to a disinhibition of the cortical network and a Dravet-like phenotype [69]. In a nonhuman primate model of neocortical focal epilepsy, a strong decrease in the number of GABAergic synapses was observed at epileptic foci [70]. 3. The association between altered high-frequency oscillations (HFO) and epilepsy [71,72]. HFO are typically recorded from brain regions capable of generating TLE, and altered HFO have been recorded in models of other epileptic disorders, such as neocortical epilepsy, genetically-caused epilepsy and infantile spasms [72]. Recent improvements in recording technologies and the introduction of optogenetics into epilepsy research have led to a better elucidation of the cellular substrates of epileptic HFO and of the role of altered neuronal networking. Although the role of INs in the generation of HFO is not fully defined, there are indications that a specific subpopulation of INs, the fast-spiking PV+ INs, is central to the emergence and control of γ oscillations. Altered PV+ neuron number and function have increasingly been associated with an increased risk of epilepsy [73]. Likewise, INs have been implicated in the control of other oscillation frequency bands [71]. 4. Recent optogenetic studies have shown that enhancing the inhibitory function of GABAergic INs efficiently leads to suppression of seizures, in accordance with the concept that the excitatory-inhibitory balance shifts towards the excitatory regime in epilepsy [60,74,75]. 5.
Treatment with AEDs acting on the GABAergic synapse represents the most widely used clinical approach to treat epilepsy, and clinical data show that these alleviate the manifestations of seizures, at least in the majority of cases [76]. Treatments with AEDs, however, do not cure epilepsy, while on the other hand systemic treatments with AEDs to prevent episodes may lead to serious side effects. More effective and local treatments are needed that attempt to restore normal connectivity and the excitation-inhibition balance. Deriving GABAergic neurons from progenitor cells in vitro As the most straightforward approach, MGE embryonic cells can be dissociated directly from explants of the MGE tissue and maintained in vitro in adherent conditions. These cells can be plated in culture and can undergo maturation into INs expressing bona fide mature markers [77]. This approach has been widely exploited with mouse tissues for many experimental works in which single cells can be labeled with fluorescent tracers and monitored for migration, neuritogenesis and synaptogenesis. However, this method is poorly applicable to human settings, since the approach is limited by the scarceness of tissue (i.e. fetal MGE from aborted human embryos). To overcome this barrier, in the last decade several protocols to obtain bona fide MGE cells from pluripotent stem cells (PSC) have been described [8]. Mouse and human PSCs have been shown to have a broad potential to generate multiple CNS neuronal subtypes [78]. The possibility to derive mature forebrain INs directly from PSC holds great promise for research and therapeutic applications. Since both embryonic stem cells (ESCs) and iPSC (induced PSC) have been successfully derived from rodents and humans, these represent the ideal source for cell-based therapies, raising hopes for their applications in the field of neurodevelopmental or neurodegenerative disorders. Initial studies have been performed with ESCs or iPSC of mouse origin and then extended to the human counterpart. In general, these protocols are based on the exposure of PSC-derived neuroectoderm tissue to specific morphogens/signals known to instruct embryonic MGE cell specification during development [79,80]. Watanabe and colleagues described that stage-specific Wnt pathway inhibition in mouse ESCs can induce the generation of rostral Foxg1+ (i.e. telencephalic) progenitors and that their subsequent exposure to sonic hedgehog (SHH) drives the regionalization to a ventral Nkx2.1+ forebrain MGE progenitor identity [81]. Although this study represented a milestone in the field, the efficiency was quite low and the cultures also contained Pax6+ telencephalic dorsal progenitors (i.e. progenitors of cortical excitatory neurons). Attempts to generate purer cultures have been performed by exploiting reporter-based selection approaches. This strategy allows to better define factors for effective and more selective MGE regionalization and demonstrated that, while FGF8 exposure can promote MGE identity, FGF15/19 had a negative effect, being effective in promoting CGE cell fates [82]. The most significant example of reporter-based selection approaches has been the use of Lhx6::GFP-expressing mouse ESCs, which allowed to select enriched populations of MGE progenitors in vitro [83]. This strategy proved successful (although with low efficiency) in generating SST+ and PV+ GABAergic INs and also showed their in vivo relevance after intracerebral transplantation [83].
It was then shown that forced expression of Nkx2.1 resulted in higher and prolonged expression of Lhx6 and improved efficiency of cortical IN generation, in the absence of SHH [45]. Likewise, inducing the expression of Nkx2.1, Dlx2 and Lmo3 resulted in an even higher efficiency of MGE-type IN generation, with a preference for PV+ over SST+ young neurons [84]. Transplantation studies using these neurons substantially confirmed these findings [85]. Thus, recapitulating the physiology of development in vitro results in the generation of apparently genuine INs. Still, the differentiation of progenitors towards INs entails formidable problems, due to the extreme diversification of IN subtypes and their early cell fate determination. One of the key issues is the lack of adequate molecular and functional markers to fully define them, and the poor knowledge of the temporal dynamics of their expression. Until a few years ago INs were characterized at a population level by using a few neurochemical markers, with the caveat that such analyses represent averages of cell behaviors and cell states, while individual differences cannot be appreciated. Today we begin to appreciate the full complexity and variability of the molecular states of these neurons at the single-cell level [15,16]; in this respect Close and colleagues examined stepwise changes in the gene expression profiles of in vitro differentiating progenitors, using a single cell approach, and then compared these with fetal cortical INs at various ages [86]. Their analysis yielded results that substantially confirm all key findings in vivo, while at the same time pointing to novel disease-relevant differentiation determinants, to be included in future monitoring efforts. Differently from mouse PSCs, the generation of cortical IN-like cells from human ESCs and iPSCs lagged behind, mainly because of the much longer time required for differentiation of human cells, their tendency to die after replating, and the lack of methods to purify IN progenitors or post-mitotic precursors from the mixed populations that occur with every protocol. While some studies demonstrated the capacity of human iPSCs to generate telencephalic-like progenitors [87], two studies were critical in moving the field forward. First, the optimization of a protocol based on the strong dual inhibition of TGFβ/BMP signaling with the peptide noggin and the small molecule SB431542 during the initial phases of differentiation showed that neural ectoderm can be efficiently obtained from pluripotent stem cells [88]. Second, also in human, differential WNT signal agonism or antagonism can be used to drive telencephalic (Foxg1+) human ES-derived cultures into cortical-like (Pax6+) or MGE-like (Nkx2.1+) progenitors [89]. Now, a relatively rapid and uniform generation of telencephalic progenitors has been achieved. These progenitors can be "dorsalized" or "ventralized" into a variety of subfields and differentiated into a variety of neuron types, including cortical INs [90][91][92]. While these studies are promising, caution needs to be applied as they reveal two major challenges towards the safe and reliable generation of human INs. First is the protracted maturation stage of the human MGE progenitors to give rise to functional INs. Second, only few cells actually acquire a PV+ identity, which in turn could be the consequence of the longer differentiation time of these particular neurons. As much attention is drawn towards this disease-relevant neuron type, our knowledge is possibly biased towards these neurons.
Despite the challenges in generating reliable and stable human INs, two studies showed some degree of efficacy in the treatment of seizures in mouse models in vivo [93,94]. Based on this, research in the field is rapidly proceeding, trying to answer these crucial questions: a) can we obtain the desired IN type and subtype? b) can we retain their ability to establish inhibitory synapses on the correct neuronal targets? c) can we avoid long and complex protocols, inevitably leading to considerable cell loss and increased variability? Considering the high molecular variability, a valid monitoring method will be single cell RNAseq analyses [86]. Alternative strategies to the direct generation of INs from PSCs have also been developed. Among them is the possibility to derive stable and homogeneous lines of adherent neural stem cells from iPSCs and maintain them in monolayer culture while preserving a full and homogeneous commitment towards the GABAergic lineage [95,96]. These adherent monolayer cultures show limited capacity to maintain a defined dorso-ventral identity [97,98] and to generate multiple brain cell types other than GABAergic neurons [99,100]. This approach has been shown to work also for human cells [101][102][103]. Finally, direct conversion strategies that allow to directly convert somatic cells (i.e. fibroblasts) into forebrain INs by forced expression of specific transcription factors have been recently described [104,105]. This technique can produce functional forebrain INs without the requirement to pass through a pluripotent state, but the overall efficiency, in particular for the human system, is quite limited. The choice of cells used in models of epilepsy Cells from various sources and maintained in different conditions have been tested in preclinical models of epilepsy following grafting into distinct regions of the brain (Figure 1). The donor cells examined include hippocampal precursor cells, neural stem cells (NSCs), primary GABAergic neurons or GABAergic precursor cells derived from either embryonic LGE or MGE, or from mouse and human ESCs and iPSCs [106]. All these cells have been shown to modulate hippocampal plasticity, modify epileptogenesis, reduce the frequency of spontaneous recurrent seizures and alleviate related comorbidities, although with varying efficiency. The outcomes varied depending on the animal model employed, the timing of the grafting intervention after the first seizures and the timepoint of measurement of SRS [106]. 1. NSCs short-term expanded from the embryonic MGE or from the postnatal subventricular zone (SVZ) are good candidates as donors, being multipotent and self-renewing [107]. Of note, NSC grafting implies the possibility to replenish both new GABAergic INs and new astrocytes into a brain area, the latter secreting a multitude of neurotrophic factors and anticonvulsant proteins such as GDNF [108,109]. Due to their ability to engraft into the dentate gyrus and influence hippocampal neurogenesis [107], NSCs might also be able to ameliorate those cognitive functions which typically decline in chronic TLE. 2. Cells dissociated from the embryonic MGE can be maintained in culture and generate INs able to migrate in the host brain when transplanted in early postnatal mice, expressing the IN markers PV, SST, NPY and CR [22,110]. These cells are characterized by the expression of markers such as NeuN, doublecortin (DCX) and Hu24, but they are negative for non-neuronal markers such as glial fibrillary acidic protein (GFAP), vimentin and Olig2 [111].
Most importantly, MGE-derived neurons establish synapses, and direct evidence suggests that these cells can functionally integrate and influence GABA-mediated inhibition in the host brain [110]. Moreover, in vitro electrophysiology studies confirm that these cells have firing properties typical of mature INs [111]. 3. Obtaining genuine INs from hiPSC to be used in cell-based therapy circumvents two key issues: the limited availability of fetal brain material and the use of autologous, patient-specific cells [112]. hiPSC-derived MGE-like progenitors potentially yield both cortical-like and striatal-like GABAergic INs that express SST, CR, and CB. Electrophysiological analyses have shown a slow maturation of these INs. Following injection into the mouse brain, hiPSC-derived IN precursors dispersed from the injection site, matured into subtypes, and functionally integrated into the host cortex. They did not migrate as extensively as dissociated embryonic MGE cells and some cells remained at the injection site. This suggests that optimization of the iPS differentiation and/or the development of methods to purify migratory MGE-like cells may be needed [92]. Advances in the use of GABA-committed progenitors in models of epilepsy There is an urgent need to develop new therapies that directly target epileptic foci, restoring proper inhibition and synchronicity, rather than more systemic interventions [93,113]. One strategy that has been proposed in recent years is to restore normal circuit function by transplanting cells capable of differentiating into GABAergic neurons at the seizure focus. This approach has been recently tested experimentally and validated, owing to improved tools and methods for cell propagation and differentiation in vitro [106,114] (Figure 1). In this section we will illustrate the most recent developments and discuss critical issues in moving these experimental therapies into the clinic. We will also highlight aspects such as cell survival, migration, differentiation into IN subclasses and functional integration in the host network, especially emphasizing the long-term effects of different grafting approaches. For clarity, we have subdivided the results of grafting into the hippocampus from those obtained in the cortex, justified by the fact that TLE, the most common type of epilepsy in adults, initiates at the hippocampus. Key observations are summarized in Table 1 (hippocampus) and Table 2 (cortex). Hippocampus: TLE in rodents can be pharmacologically induced by systemic injection of pilocarpine or kainic acid (KA). These treatments recapitulate the status epilepticus (SE), followed by a latent period and the subsequent appearance of spontaneous recurrent seizures (SRSs) representing the chronic phase. These treatments also induce functional and histoanatomical changes in the hippocampus, such as degeneration of dentate hilar neurons and some pyramidal neurons in the CA1 and CA3 regions, differentiation of dentate granule cells, reduction in the number of GABAergic INs, aberrant sprouting of dentate mossy fibers, hippocampal hyperexcitability, and learning and memory deficits [37,93,107,115,116]. A first attempt exploited an established cell line of uncommitted human pan-neuronal stem cells (hNSCs) obtained from dissociated fetal brain tissue. These cells were grafted in the hippocampus of rats 24 hrs after the cessation of pilocarpine-induced SE [31], and the anti-epileptic effects were evaluated 28-35 days later.
The grafts resulted in a significant reduction of seizure frequency, duration and severity, associated with reduced aggressiveness. Six weeks after grafting, hNSCs had migrated into various hippocampal areas, the amygdala and the piriform cortex; 42 days later the number of grafted cells in CA1, CA3 and the hilus was diminished, while it remained unchanged in CA2 and the granule cell layers. GABAergic INs, representing 26% of the grafted cells, were found in the hippocampus and piriform cortex, and 30% of these expressed PV. A small percentage of glutamatergic (GluR2+) cells and astrocytes (GFAP+) was also found in the CA1 area. The grafted cells did not acquire granular or pyramidal phenotypes, suggesting that hNSCs mostly differentiated into GABAergic INs. This was further confirmed by recording field excitatory postsynaptic potentials (fEPSPs) in CA1 upon stimulation of Schaffer collateral fibers. These experiments showed that the fEPSP amplitudes of the hNSC-transplanted group were significantly smaller compared to controls. Thus, the attenuation of seizures was associated with an enhancement of inhibition due to increased GABAergic activity in the damaged hippocampus [31]. In this study, the long-term fate of the transplanted cells, in terms of survival, senescence and stability of the differentiated phenotype, was not assessed. In a subsequent study, GABAergic progenitors obtained from dissociated mouse embryonic LGE were used [117]. The cells were pre-treated with FGF2 and a caspase inhibitor and bilaterally grafted into the hippocampi of KA-treated rats. Grafting was done four days after the appearance of SE. The majority of surviving grafted cells (33% of the total) differentiated into NeuN+ cells, a large percentage of which was GABAergic, further subdivided into CB+, PV+, CR+ and NPY+ neurons (Table 1). This study performed a long-term follow-up, i.e. 9-12 months after grafting, and showed a much reduced frequency (between -67 and -89% compared to controls) of SRSs but observed no rescue of the aberrant mossy fiber sprouting [117]. Interestingly, although the LGE is not the physiological source of cortical and hippocampal INs, these results suggest that LGE and MGE progenitors are interchangeable when used as a source of cells for grafting, reasonably because their behavior might be strongly context-dependent. MGE progenitor cells dissociated from mouse embryonic brains were also used for transplantation studies into the hippocampi of adult mice, 9-20 days after pilocarpine-induced SE [115]. Starting from 7 days after transplantation the grafted cells exhibited a bipolar migratory morphology and showed a survival rate of >30%. Sixty days after grafting, cells were found to participate in host inhibitory local circuits, with a survival rate of about 15% and showing features of mature INs. Within 2 months, the grafted cells differentiated into GAD67+ INs, expressing a wide set of neurochemical markers including SST, nNOS and PV (Table 1). Very few cells expressed glial lineage markers. The authors characterized the grafted cells for their electrophysiological properties and RNA expression profiles. The profiling results (26% fast spiking, 41% regular non-pyramidal spiking, 9% late spiking and 24% burst spiking; all expressing gene markers typical of the MGE lineage of INs such as Lhx6) were consistent with an MGE-derived mature IN phenotype. The authors reported a 92% reduction in seizure frequency but no rescue of aberrant mossy fiber sprouting, in accordance with previous results [115].
The therapeutic effects were analyzed only on a two-month time scale. Adopting a similar experimental paradigm, another study used freshly dissociated MGE-derived progenitors grafted in a model of pilocarpine-induced seizures, and monitored the effects at 7 and 12 months after transplantation [118]. At 7 months, the grafted cells had differentiated into GAD67+ neurons expressing SST, PV and nNOS (Table 1). Two months after grafting, the animals showed seizure suppression and rescue of behavioral comorbidities; at 6 months seizure frequency was reduced by 84% and at 12 months the total number of seizures was reduced by 88%. The authors also documented an enhancement of GABAergic IN-mediated currents, suggesting a functional integration in the host network [118]. In a recent study, MGE-derived dissociated progenitor cells were used in grafting experiments and compared to MGE-derived neurospheres for their anticonvulsant effects in a model of pilocarpine-induced TLE [119]. Specifically, the authors compared three in vitro states of the grafted cells: fresh MGE cells; MGE-derived neurospheres cultured with growth factors and retinoic acid (GF-RA group); or the same cultured without retinoic acid (GF group). While the GF-RA group resulted in an increased neuronal population in vitro, only the GF-neurospheres and the freshly dissociated MGE cells showed anticonvulsant activity. Both groups showed reduced seizure frequency 3 months after grafting, but apparently by different mechanisms: GF-neurospheres differentiated into INs and glial cells with equal efficiency, whereas freshly dissociated MGE cells mainly differentiated into INs, with only 0.7% of GFAP+ cells (Table 1). Thus, the anticonvulsant effect of GF-neurospheres seems to be glial-mediated, while freshly dissociated MGE cells appear more appropriate for cell-therapy treatment of epilepsy, targeting the inhibitory circuitry [119]. In this study only short-term effects were examined, and cell differentiation, senescence and survival were not assessed in the long term. On the whole, these studies indicate that grafted embryonic MGE cells are effective in reducing seizure frequency, duration and severity, as well as in alleviating behavioral comorbidities. However, for clinical application cell sources must be standardized, well characterized, quality-controlled and unlimited in supply. For these reasons and for strong ethical issues, human embryonic MGE cells cannot be routinely exploited. To overcome this limit, human iPSC have been tested as a valid and promising alternative. hiPSC-derived GABAergic INs were shown to be able to migrate and integrate into the dysfunctional circuitry in mice with pilocarpine-induced TLE, and to generate inhibitory postsynaptic responses in host hippocampal neurons [93]. Two weeks post-transplantation, grafted cells were found to primarily cluster near the injection site, with nearly 80% of cells expressing GABA, and a large fraction of these scoring positive for Nkx2.1, NeuN and Lhx6 (Table 1). Four months post-injection hiPSC-derived INs had migrated and integrated in the host circuitry, without significant differences in the total number of surviving cells. At this time-point, 80% of the grafted cells were bona fide GABAergic INs (expressing Nkx2.1, Sox6, Lhx6 and GABA) and scored positive for the subtype-specific markers SST, CB, PV, CR, NPY and VIP (Table 1). All transplanted cells showed reduced positivity for glial markers, suggesting a proper IN differentiation.
Overall, the approach suppressed seizures and ameliorated behavioral abnormalities, providing a first evidence of hiPSC application in the management of intractable epilepsy [93]. Similar results were recently published using hiPSC-derived MGE-like progenitor cells transplanted into the hippocampi of KA-treated rats [112]. The authors found that the grafted cells proliferate immediately after grafting, then migrate into the DG, CA1 and CA3 subfields, where they differentiated into GABAergic INs with high efficiency (80%) and integrated into the host network, forming synapses with excitatory neurons. The grafting resulted in a reduction of aberrant neurogenesis and a significant reduction of seizure frequency, duration and severity, as well as of anhedonia and cognitive dysfunctions [112]. These effects, however, were analysed only in the short term. In summary, current data show that MGE-derived cells efficiently migrate and differentiate into GABAergic INs, able to functionally integrate in the host brain network when transplanted into the hippocampus of various models of epilepsy. The overall effect is a reduction of seizure occurrence and severity, associated with increased inhibitory activity. We should note, however, that most of the published studies observe cells and phenotypes in the short term, while long-term analyses are very limited. Furthermore, all studies adopt a pharmacological induction of SE, which is difficult to relate to epilepsy in humans, characterized by spontaneous seizures caused by a combination of genetic and environmental factors. Neocortex: The epileptic brain is characterized by a dysfunction of the excitatory-inhibitory balance that also involves the neocortex [50]. Thus, several studies focused on the effects of cell grafting directly into the cortex and the effect of this procedure on cortical circuitry. The main results are summarized in Table 2. In an early study, MGE progenitor cells were dissociated from the embryonic MGE and bilaterally transplanted into the cortex of wild-type mice; one month after transplantation cells were found to have migrated and dispersed in the host cortex and differentiated into GABAergic INs and their subclasses, the largest proportions being PV+ and SST+ [110] (Table 2). Grafted cells, specifically starting from 7-10 days after grafting, acquired intrinsic membrane properties of immature INs, and 30-40 days after transplantation an increased inhibition of layer II/III pyramidal neurons was shown, while no changes in inhibition by host INs were observed. This potentiation of inhibition involves postsynaptic GABA receptors and provides evidence that grafted cells integrate in brain circuits, receive inputs from host brain neurons and form synaptic contacts onto their dendrites, primarily targeting excitatory pyramidal neurons [110]. In a subsequent study, hiPSC-derived MGE-like cells were grafted into the cortex of newborn SCID (severe combined immunodeficient) mice, and monitored thereafter [92]. The results show that these MGE-like cells survived in the brain after 2, 4 and 7 months, with average survival rates of 5%, 3% and 8%, respectively. Two months post-injection grafted cells were found to proliferate and to express the immature neuroblast marker DCX, while at 3 and 7 months post-injection cells had migrated in the cortex. By 7 months post-injection, the grafted cells were seen to express NeuN, a marker of more mature neurons.
From a functional point of view, mature hiPSC-derived INs fired subtype-specific trains of action potentials, received synaptic inputs, generated GABAergic-exclusive synaptic output, and functionally integrated into the rodent cortex [92]. Three main types of animal models of cortical epilepsy have been used to test the efficacy of cell-based strategies: a) genetically induced, b) induced by maximal electroshock (MES), c) induced by pharmacological treatment with pentylenetetrazole (PTZ). a) The most used genetic model of cortical epilepsy is the Kv1.1/Kcna1 mutant mouse strain; these animals lack a shaker-like potassium channel and mimic a neuronal ion channelopathy associated with epilepsy in humans [110]. Freshly dissociated MGE progenitors were grafted into the cerebral cortex of P2 Kv1.1/Kcna1 mutant mice, resulting in a reduced frequency and duration of spontaneous EEG seizures 2 months after transplantation. Almost 65% of the grafted cells differentiated into GABAergic INs and expressed the subtype-specific markers PV, SST, NPY and CR [110]. b) The maximal electroshock (MES) model relies on the application of an alternating current that causes generalized tonic-clonic seizures [120,121]. Adopting this model, freshly dissociated MGE progenitors transplanted into the brain cortex 2 months after MES-induced seizures were shown to protect against tonic seizures and reduce the mortality rate [120]. On the contrary, cells dissociated from the MGE but then maintained and expanded in vitro as floating neurospheres showed strikingly different results in the MES model: unlike freshly dissociated MGE cells, neurosphere-expanded progenitors had no effects on seizure frequency [121], despite both experimental groups using a comparable number of cells injected bilaterally in the cortex. Thus, in the MES model transplantation of precursors dissociated directly from the MGE is more efficient in reducing the frequency and intensity of tonic seizures. From a cellular point of view, in both studies, two months after injection grafted cells efficiently survived, migrated and differentiated into INs. However, the migration sites differ between these studies: Calcagnotto and colleagues found grafted cells in the cortex, hippocampus, striatum, subventricular zone and corpus callosum, while Paiva and colleagues found INs throughout the brain parenchyma (piriform cortex, fimbria, and ventricular zone). Moreover, Paiva and colleagues examined only the PV+ cells, while Calcagnotto and colleagues evaluated the morphology and the expression of more IN markers, including NPY+, CR+ and PV+ cells [120,121]. c) The pharmacological model based on treatment with pentylenetetrazole (PTZ) induces clonic or tonic-clonic seizures by acting on GABA-A receptors and impairing local inhibition. Grafting of MGE progenitors maintained as neurospheres into the neocortex of PTZ-treated animals resulted in a protective effect against seizures [120,121]. These grafted cells showed a high rate of survival and of PV+ differentiation as compared to the same cells transplanted into mice that underwent MES [121]. Based on the available results, it can be concluded that neural progenitors maintained as neurospheres are not ideal, as compared to freshly isolated or in vitro-expanded progenitors from the MGE. However, this seriously hampers the applications in human therapy, due to the limited availability of fetal tissues.
Concluding remarks and perspectives The reported studies show that MGE-derived cells efficiently migrate and differentiate into GABAergic INs, able to functionally integrate in the host brain network when transplanted into the hippocampus or cortex of various models of epilepsy. The overall effect is a reduction of seizure occurrence and severity. Electrophysiological recordings reveal that exogenous INs functionally integrate in the host circuitry providing increased inhibitory activity, although not all authors investigated this aspect. Importantly, the anticonvulsant efficacy of grafted cells seems to largely depend on the mode of collecting and expanding these cells: whether from primary dissociated MGE embryonic tissues [110,115,[118][119][120], from neurosphere-expanded MGE progenitors [119,121], from differentiated hiPSC [92,93,112], or from hNSC [31]. Among these, the use of dissociated MGE cells seems to be the most valid and applied strategy. Studies on the differentiation and transplantation of hiPSC-derived cells are likely to lead to significant improvements in the next few years. Concerning the translation to the human setting, one issue that has to be taken into account is the animal model of epilepsy or TLE that has been used so far. Models of epilepsy induced by drugs, although easily standardized and quantifiable - and thus adopted in scientific studies - only partially and imprecisely represent human conditions. As an alternative, genetic models in rodents (knock-out or transgenic), in which congenital chronic epileptic syndromes result from a known inherited genetic lesion, have been introduced [65]. Although these are certainly closer to reality, the issue that emerges is the existence of numerous, rare epileptic syndromes, with quite distinct and peculiar phenotypic manifestations. Each genetic model will only represent a single and specific human condition, certainly not the full spectrum and not the majority of conditions, which are rather sporadic or with multigenic risk determinants. Most studies have demonstrated survival and differentiation of the grafted cells in the short or medium term (2-6 months), but not in the long term, with the exception of two studies that prolonged the analyses to 12 months [117,118]. Information on long-term cell survival, differentiation and circuit integration is the next important step, in order to better evaluate the translational value of these experimental therapies in view of future clinical applications and to understand if they have a transient short-term effect or a long-term efficacy. In the perspective of clinical trials, a transient therapeutic effect is clearly insufficient. The follow-up of the grafted progenitors or the immature INs, monitoring their proliferation, stability of the differentiated phenotype, senescence and apoptosis, is another key task. In the rodent brain, cells labeled with fluorescent markers can be used for in vivo imaging, and single neurons can be visualized and subjected to neurochemical and electrophysiological analyses. The PV+ and SST+ neurons are the most studied ones; however, the repertoire of IN subtypes is much wider, and the contribution and/or involvement of each of these in seizures is not fully known. Upon grafting, we are currently unable to monitor all IN subtypes, and consequently their survival, senescence and differentiation are not really known. It is likely that we are still missing some important contribution, and this could also be the reason for discrepancies in the results from lab to lab.
Likewise, which and how each IN subtype participates and contributes to epilepsy is not fully understood. Although PV+ neurons appear to be more directly implicated and are observed after grafting, this could simply reflect the fact that they are the most studied as compared to other types. If we knew better which types participate and how, we could adopt in vitro procedures able to drive and expand that kind of progenitor, avoiding unnecessary mixes of cells. A deeper knowledge of the main actors in the context of epileptic seizures would be necessary to develop a better and more specific therapeutic approach. This could in principle be done using optogenetic methods, which allow to activate or inhibit specific subpopulations of neurons in vivo with unprecedented time resolution [60]. In parallel, strong efforts in basic and applied research on neural progenitor cells are needed to fully unravel the differentiation trajectory of each IN subtype in vitro [11,86] and to optimize expansion and differentiation protocols aimed at obtaining the specific IN subclass required, reliably and reproducibly. One important advancement will be the development of human cell models of seizure, either by deriving hiPSC from patients carrying known mutations in epilepsy-relevant genes, or by introducing the same mutations via genome editing (CRISPR-Cas9). These cells could then be used in research (to set up mixed cultures of excitatory and inhibitory neurons, or mixed cortical and basal brain organoids). Once validated, these cells could open the way to screening efforts for the search of novel compounds.
2019-11-28T12:18:24.577Z
2019-09-04T00:00:00.000
{ "year": 2019, "sha1": "8337266f080c1cdd3975f254956a3de9be31345c", "oa_license": null, "oa_url": "https://www.heighpubs.org/jsctt/pdf/jsctt-aid1014.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2088c32e0e4f74b493e12f5b5ec0151aa5058c77", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
232418723
pes2o/s2orc
v3-fos-license
Absence of detection of RSV and influenza during the COVID-19 pandemic in a Brazilian cohort: Likely role of lower transmission in the community Background Respiratory syncytial virus (RSV) and influenza are prevalent seasonal community viruses. Although not completely understood, SARS-CoV-2 may share the same means of transmission. Preventive social measures aimed at preventing SARS-CoV-2 spread could therefore also impact the transmission of other respiratory viruses. The aim of this study is to report the detection of RSV and influenza during the period of social distancing due to the COVID-19 pandemic in a heavily affected community. Methods Prospective study with pediatric and adult populations seeking care for COVID-19-like symptoms during the fall and winter of 2020 at two hospitals in Southern Brazil. RT-PCR tests for SARS-CoV-2, influenza A (Flu A), influenza B (Flu B) and respiratory syncytial virus (RSV) were performed for all participants. Results 1435 suspected COVID-19 participants (1137 adults and 298 children) were included between May and August. Median age was 37.7 years (IQR = 29.6-47.7) and 4.92 years (IQR = 1.96-9.53) for the adult and child cohorts, respectively. SARS-CoV-2 was positive in 469 (32.7%), while influenza and RSV were not detected at all. Conclusions Measures to reduce SARS-CoV-2 transmission likely exerted a huge impact on the spread of other respiratory pathogens. These findings contribute to the knowledge about the dynamics of virus spread. Further, they may be considered for guiding therapeutic choices for these other viruses. Since December 2019 the world has experienced the outbreak of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). SARS-CoV-2 has spread throughout the world with a catastrophic impact not only on public health but also with a significant social and economic burden. Brazil is currently the third most affected country, with 9 447 165 confirmed diagnoses of COVID-19 and 230 034 related deaths up to February 7, 2021 [1]. The response to the pandemic has triggered a major change in overall human behavior, with social distancing measures, teleworking, closing of schools and daycare facilities, closing of businesses, strict hygiene behaviors, widespread use of face masks, travel restrictions, and avoidance of activities associated with population gatherings. Brazil has not been an exception to this new scenario, and while it took a while for efficient measures of social distancing to take place, schools and daycare facilities have been closed since mid-March and children have been home since then [2]. As respiratory viruses such as influenza, respiratory syncytial virus (RSV) and SARS-CoV-2 share similar routes and means of transmission, these huge social efforts to prevent the spread of SARS-CoV-2 are also likely to affect the epidemiology of influenza and RSV [3]. Both RSV and influenza have very typical and significant seasonal epidemiology in Brazil, especially during autumn and winter in subtropical areas in the South of the country. An initial concern of health authorities worldwide was the burden of these concomitant infections during the winter months, and many clinical guidelines reflected this by indicating both prevention of RSV and treatment of influenza for suspected cases [4,5].
The aim of this study was to report influenza and RSV diagnoses during the SARS-CoV-2 pandemic in pediatric and adult populations with symptoms suggestive of COVID-19 at two health care facilities that serve communities with very different socio-economic backgrounds in the city of Porto Alegre, during the local, well-defined influenza and RSV seasons [5]. METHODS This prospective study was conducted from May 13 to August 31, 2020, at two general hospitals (one private and one public) in Porto Alegre, Southern Brazil, a city whose estimated population was 1 488 252 inhabitants in 2020 [6]. Consecutive adults (>18 year-old) or children older than 2 months were included if presenting at either the outpatient clinics (OPC) or emergency departments (ER), or hospitalized, with at least one of the following signs or symptoms within 14 days of onset: cough, fever, or sore throat. Exclusion criteria included failure in SARS-CoV-2 sample collection. Clinical and demographic data, as well as samples for virus testing, were collected by trained research staff at enrollment, according to a standardized protocol. RT-PCR tests were performed for SARS-CoV-2, influenza A (Flu A), influenza B (Flu B) and RSV for all included subjects. For SARS-CoV-2 analysis, nasopharyngeal and oropharyngeal swabs were collected, with both swabs allocated in the same transport media. The qualitative real-time PCR (RT-PCR) was performed with TaqPath-TM 1-Step RT-qPCR Master Mix, CG (Catalog Number A15299, AppliedBiosystems, ThermoFisher Scientific, Frederick, USA) and the TaqMan TM 2019-nCoV Assay Kit v1 (Catalog Number A47532, ThermoFisher Scientific, Pleasanton, USA), with the TaqMan TM 2019-nCoV Control Kit v1 (Catalog Number A47533, ThermoFisher Scientific, Pleasanton, USA) as reaction control. A QuantStudio 5 instrument (ThermoFisher Scientific, Waltham, USA) was used to perform the polymerase chain reaction. For RSV, Flu A and Flu B analysis, material from both nostrils was collected with the Xpert® Nasal Sample Collection Swab B/F-100 (Cepheid, Sunnyvale, USA) and allocated in proper transport media (Cepheid, Sunnyvale, USA). Samples were transferred and analyzed in Xpert® Xpress Flu/RSV cartridges (Cepheid, Sunnyvale, USA) [7]. For the statistical analyses, proportions of specimens positive for SARS-CoV-2, influenza, and RSV were stratified by age (i.e., children or adults). Normally distributed quantitative data, according to the Shapiro-Wilk test, were expressed as mean ± standard deviation (SD), and non-normally distributed quantitative data were expressed as median and interquartile range (IQR). Categorical variables were described as absolute (n) and relative (%) frequencies. The study protocol was approved by the institution's ethics review board. Written consent was obtained from all participants and/or legal representatives. The study was conducted according to good laboratory practices and in accordance with the Declaration of Helsinki. RESULTS A total of 1435 participants were included in the study from May 13 to August 31, 2020. Of the 1704 screened subjects, 269 were excluded (132 for not meeting inclusion criteria, 63 for not consenting, 72 for not completing the initial questionnaire, 1 unsuccessful sample collection, and 1 withdrew consent). DISCUSSION We had initially expected that COVID-19 frequencies would increase significantly during the fall and winter months, and that the usual patterns of community spread of RSV and influenza A and B would likely change due to the public health measures taken to reduce transmission of COVID-19.
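The descriptive analysis described above can be illustrated with a minimal Python sketch. The age values, variable names and counts used below are simulated placeholders rather than study data; the snippet only reproduces the stated decision rule (Shapiro-Wilk test selecting mean ± SD versus median and IQR) and the reporting of categorical variables as absolute (n) and relative (%) frequencies.

import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical age data (years) for illustration; not values from this study.
ages = pd.Series(np.random.default_rng(0).lognormal(mean=3.5, sigma=0.4, size=200))

# Shapiro-Wilk test decides how the quantitative variable is summarized.
w_stat, p_value = stats.shapiro(ages)
if p_value > 0.05:                      # compatible with a normal distribution
    summary = f"mean = {ages.mean():.1f} +/- {ages.std():.1f} (SD)"
else:                                   # non-normal: median and interquartile range
    q1, q3 = ages.quantile([0.25, 0.75])
    summary = f"median = {ages.median():.1f} (IQR = {q1:.1f}-{q3:.1f})"
print(summary)

# Categorical variables reported as absolute (n) and relative (%) frequencies;
# the 469/1435 split mirrors the overall SARS-CoV-2 positivity reported in the text.
test_result = pd.Series(["positive"] * 469 + ["negative"] * 966)
print(pd.concat([test_result.value_counts(),
                 test_result.value_counts(normalize=True).mul(100).round(1)],
                axis=1, keys=["n", "%"]))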
Interestingly, we have observed a striking absence of these two usually prevalent pathogens in our cohort of symptomatic subjects, despite a SARS-CoV-2 detection rate of 32.7%. It is important to stress that the region has a very well-defined and significant RSV and influenza seasonality every year [4,5]. Although the hypothesis of a lower prevalence of both viruses in this exceptional situation of social distancing was initially reasonable, the observable impact of this reduction is a unique finding. In the USA, Asia and Europe, a number of public health measures aiming to prevent the rapid spread of COVID-19 started mostly at the end of the winter season. The findings of concomitant viral infections in these communities may have been biased by a natural decline in the incidence of both RSV and influenza, although an unprecedented drop in hospitalizations due to RSV has been recently described in Alaska [8]. Wu et al also describe that during the COVID-19 pandemic in China there was a decreasing trend in influenza reports early in 2020, in contrast with the two spike waves observed in the previous year [9]. In Switzerland, SARS-CoV-2 almost completely, and very quickly, replaced the seasonally circulating community-acquired respiratory viruses within 3 weeks into the pandemic [10]. This finding raises the hypothesis of a possible competition pattern among respiratory viruses. Yet, some reports of substantial rates of viral coinfection make this explanation unlikely [11]. Another possible explanation for lower influenza rates could be heightened awareness due to the pandemic, with a subsequent increase in influenza vaccination numbers. However, in Brazil, influenza vaccination rates were 88.8% for the target population, similar to historical values [12]. Furthermore, according to nationwide data from the Brazilian Ministry of Health, from March to August 2020, 643 090 cases diagnosed with severe lower respiratory tract illness (sLRTI) were notified. Among these, 52.2% (n=335 748) were positive for SARS-CoV-2 and only 0.4% (2351) for influenza. Similarly, in our region, including Porto Alegre, there were 26 508 notifications of sLRTI; of those, 12 717 (48%) were due to COVID-19 and only 43 (0.16%) were positive for influenza over the same time period [13]. These findings are in stark contrast with 2019 data, where the total number of sLRTI cases associated with influenza was 5800 for the whole of Brazil and 388 in our region, considering the same period of time (unpublished data). The pattern of RSV was similar, with a significant reduction in the number of hospitalized cases with acute viral bronchiolitis throughout the whole country. In our region, hospital admissions for infants were 85% lower than in the previous years [14]. Although the local social distancing index varied between 34.2% and almost 60% from March to August, schools and daycare centers remained closed throughout this period. All these measures can be associated with the observed and significant lower levels of transmission of both influenza and RSV, but were not good enough to prevent the spread of the highly infectious SARS-CoV-2 virus. There are a few limitations that may be worth addressing. Data refer to only one city in Brazil and only two health care facilities. Also, subjects were not enrolled using a population-based strategy.
The pattern of RSV was similar, with a significant reduction in the number of hospitalized cases of acute viral bronchiolitis throughout the whole country. In our region, hospital admissions of infants were 85% lower than in previous years [14]. Although the local social distancing index varied between 34.2% and almost 60% from March to August, schools and daycare centers remained closed throughout this period. All of these measures may be associated with the markedly lower levels of transmission observed for both influenza and RSV, although they were not sufficient to prevent the spread of the highly infectious SARS-CoV-2 virus. There are a few limitations worth addressing. Data refer to only one city in Brazil and to only two health care facilities. Also, subjects were not enrolled using a population-based strategy. However, we believe that the large number of patients in the study, its prospective design, and the inclusion of children and adults from diverse environmental and social backgrounds, evaluated throughout the usual influenza and RSV seasons, outweigh the issues of external validity inherent to a convenience sample. The complete absence of influenza and RSV detection could raise questions about the quality of our tests and specimen collection, but we are confident in our results. The GeneXpert system provides a highly automated process that is less prone to human processing errors. Moreover, its internal controls (Sample Processing Control and Probe Check Control) verify that there was sufficient nucleic acid in each sample. We believe that both influenza and RSV are present in the community, but at numbers so low that a sample of over 1400 subjects was not able to detect these pathogens. In summary, our study adds important information regarding the spreading dynamics of high-burden respiratory viruses during a period of effective public health measures. The low incidence of RSV and influenza, in contrast with SARS-CoV-2, should be considered in the development of guidelines for antiviral treatment of influenza or for the prevention of RSV with monoclonal antibodies. Moreover, although maintaining such restrictions continuously is not feasible, similar measures could be adopted to control outbreaks caused by these viruses. Further, as SARS-CoV-2 prevention currently depends only on non-pharmacological interventions, continuous monitoring of its transmission dynamics is necessary. Hygiene practices and social distancing measures appear to be associated with a dramatic reduction in the spread of RSV and influenza.
Fentanyl inhibits proliferation and invasion via enhancing miR ‐ 302 b expression in esophageal squamous cell carcinoma Fentanyl is one of the most commonly used intravenous anesthetic agents during cancer resection surgery, but the effect of fentanyl on esophageal squamous cell carcinoma (ESCC) remains unclear. The aim of the present study was to investigate the involvement of microRNA 302b (miR-302b) in the anti-proliferation and anti-invasion effects of fentanyl in ESCC. In the present study, the effects of fentanyl on cell proliferation, apoptosis and invasion were detected using MTT assays, flow cytometry and Transwell assays in ESCC Eca109 and TE1 cell lines. Subsequently, expression of miR-302b was determined using reverse transcription-quantitative polymerase chain reaction (RT-qPCR). RT-qPCR and western blot analysis were performed in order to evaluate the expression of ErbB4, a target of miR-302b. Furthermore, anti-miR were used to inhibit miR-302b in fentanyl-treated ESCC cells in order to evaluate the role of miR-302b in the effect of fentanyl on malignant behaviors. Fentanyl inhibited the proliferation of Eca109 and TE1 cells in a doseand time-dependent manner. Following exposure to fentanyl for 48 h, Eca109 and TE1 cells exhibited increased apoptosis and decreased invasion. Furthermore, fentanyl upregulated miR-302b expression, but downregulated ErbB4 expression. Finally, loss of miR-302b using the anti-miR technique reversed the effect of fentanyl on cell proliferation, apoptosis and invasion in the two ESCC cell lines. Taken together, the results of the present study indicated that fentanyl inhibits the proliferation and invasion of ESCC cells through upregulation of miR-302b. Introduction Esophageal carcinoma (EC) remains one of the leading causes of cancer-associated mortality (1), with a 5-year survival rate of <20% (2). EC usually occurs as either adenocarcinoma or squamous cell carcinoma (ESCC), the latter of which is more dominant in East Asia and accounts for 95% of all Chinese EC cases (3). Given this, there is an urgent requirement for research to prevent and treat this disease. Fentanyl, a strongly anesthetic analgesic drug, is an agonist for the μ-opioid receptor and is widely used in surgery, including tumor radical resection (4). Furthermore, it is considered to be an effective analgesic for breakthrough cancer pain in patients with terminal cancer (5). Recently, an increasing number of studies reported that fentanyl is able to inhibit cancer progression, including proliferation, cell cycle, apoptosis, invasion and chemotherapy sensitivity (6-10). In brief, fentanyl may serve a potential therapeutic role in cancer treatment. However, the effect of fentanyl on ESCC and the mechanism underpinning this remain unknown. MicroRNAs (miRNAs) represent a class of small non-coding RNAs that regulate gene expression at the post-transcriptional level. One study demonstrated that miRNAs were aberrantly regulated in different oncogenic pathways and/or various types of cancer, indicating that certain miRNAs may function as oncogenes or tumor suppressor genes (11). Previous studies revealed the expression profiles of different miRNAs and identified certain specific miRNAs with biological functions and significance in ESCC (12-14). miR-302b, which was downregulated in ESCC, inhibited cell proliferation, induced apoptosis and reversed invasion in ESCC (13). 
In addition, it was believed that miR-302b inhibited the malignant behaviors of ESCC by directly targeting ErbB4, a molecular therapeutic target for ESCC (13).
Authors: Ning Wang, Zhenni Zhang and Jianrui Lv, Department of Anesthesiology, Second Affiliated Hospital, Medical School, Xi'an Jiaotong University, Xi'an, Shaanxi 710002, P.R. China. Received August 4, 2016; accepted January 17, 2018. DOI: 10.3892/ol.2018.8616. Correspondence to: Professor Jianrui Lv, Department of Anesthesiology, Second Affiliated Hospital, Medical School, Xi'an Jiaotong University, 157 Xiwu Road, Xi'an, Shaanxi 710002, P.R. China.
Materials and methods
Cell culture and reagents. The ESCC Eca109 and TE1 cell lines were obtained from the Shanghai Institute for Biological Sciences (http://www.cellbank.org.cn/), Chinese Academy of Sciences (Beijing, China). Cells were cultured in RPMI-1640 medium (Sigma-Aldrich; Merck KGaA, Darmstadt, Germany), supplemented with 10% fetal bovine serum and 100 U/ml penicillin and streptomycin, at 37˚C in a humidified atmosphere with 5% CO2. Fentanyl was purchased from Sigma-Aldrich (Merck KGaA), dissolved in dimethyl sulfoxide (DMSO) and added to the culture medium at various concentrations (0, 0.5, 5, 50 and 500 ng/ml) for the in vitro assays.
Cell proliferation assay. Cells were seeded at a density of 5x10^3 cells/well in 96-well plates in a final volume of 180 µl and incubated at 37˚C in a 5% CO2 atmosphere. Following various incubation times (24, 48 and 72 h), 20 µl of a 5 mg/ml solution of MTT (Sigma-Aldrich; Merck KGaA) in 1x phosphate-buffered saline (PBS) was added to each well. The plates were subsequently incubated for 4 h at 37˚C, prior to the reaction product being solubilized in 100% DMSO (20 µl/well) and agitated at 37˚C for 15 min. The absorbance of each well was measured on a multi-detection microplate reader (BMG Labtech GmbH, Ortenburg, Germany) at a wavelength of 570 nm.
Apoptosis analysis. The cells were washed twice with cold 10 mM 1x PBS and resuspended in 1x binding buffer (BD Biosciences, San Jose, CA, USA). Cells were then washed twice with PBS, and 400 µl 1x binding buffer was added, followed by 5 µl Annexin V-fluorescein isothiocyanate (FITC) conjugate from the FITC Annexin V Apoptosis Detection kit (cat. no. 556547; BD Biosciences). The cells were incubated in the dark for 15 min at 2-8˚C, then 5 µl PI was added and incubation was continued for 5 min. The samples were analyzed using a flow cytometer (FACSCalibur; BD Biosciences) and the data were analyzed with Cell Quest software (version 3.3; BD Biosciences).
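As a rough illustration of how the Annexin V-FITC/PI data described above translate into apoptosis rates, the sketch below classifies events into the four standard quadrants. The file name, column names and intensity cut-offs are invented for illustration; they are not instrument settings from the study.

```python
# Hypothetical quadrant classification of Annexin V-FITC / PI flow cytometry events.
import pandas as pd

events = pd.read_csv("flow_events.csv")  # one row per cell: annexin_fitc, pi (assumed columns)

ANNEXIN_CUTOFF = 1_000   # illustrative intensity threshold separating negative/positive
PI_CUTOFF = 800          # illustrative threshold for PI positivity

annexin_pos = events["annexin_fitc"] > ANNEXIN_CUTOFF
pi_pos = events["pi"] > PI_CUTOFF

summary = {
    "viable (Annexin-/PI-)":          (~annexin_pos & ~pi_pos).mean(),
    "early apoptotic (Annexin+/PI-)": (annexin_pos & ~pi_pos).mean(),
    "late apoptotic (Annexin+/PI+)":  (annexin_pos & pi_pos).mean(),
    "necrotic (Annexin-/PI+)":        (~annexin_pos & pi_pos).mean(),
}
for label, fraction in summary.items():
    print(f"{label}: {fraction:.1%}")
```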
Cell invasion assay. For the invasion assay, a membrane invasion culture system (Transwell membranes with a diameter of 6.5 mm and a pore size of 8 µm; Costar, Corning Incorporated, Corning, NY, USA) was used according to the manufacturer's protocol. Briefly, harvested cells (1x10^5), resuspended in 100 µl of serum-free RPMI-1640 medium, were added to the upper chamber. A total of 1,000 µl conditioned RPMI-1640 medium with 20% (v/v) fetal bovine serum was used as a chemoattractant and placed in the lower chamber. After 48 h, the non-invading cells on the upper surface of the membrane were removed with a cotton swab. The cells that had invaded through the Matrigel matrix and adhered to the lower surface of the membrane were fixed with 4% paraformaldehyde for 1 h at room temperature and stained with 1% crystal violet for 15 min at room temperature. The invasive cells were then counted (in 5 high-power fields/chamber) using an inverted microscope (Olympus Corporation, Tokyo, Japan; magnification, x200). Each experiment was repeated in triplicate.
RNA extraction and reverse transcription-quantitative PCR (RT-qPCR). Total RNA was extracted from Eca109 and TE1 cells using TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA), according to the manufacturer's protocol. RT-qPCR was performed using a Bio-Rad iQ5 Real-Time PCR Detection system to determine the expression levels. A reverse transcription kit and SYBR-Green, both from Takara Biotechnology Co., Ltd. (Dalian, China), were used. In brief, reverse transcription (RT) was performed in a 20 µl volume with 1 µg total RNA, by incubation at 16˚C for 30 min, 42˚C for 42 min and 85˚C for 5 min. A total of 1 µl of the RT product was used in each PCR. The PCR cycling began with template denaturation at 95˚C for 5 min, followed by 40 cycles of 95˚C for 10 sec, 60˚C for 20 sec, 72˚C for 20 sec and 78˚C for 20 sec. Final PCR products were resolved by agarose gel electrophoresis, and a single band of the expected size indicated the specificity of the reaction. Relative quantification was performed using the 2^-ΔΔCq method normalized to GAPDH (15). Each PCR amplification was performed in triplicate to verify the results. The primers were as previously described (14).
Western blot analysis. Total proteins were extracted from cells using lysis buffer containing phenylmethylsulfonyl fluoride (both from Beyotime Institute of Biotechnology, Haimen, China) at 25˚C. The protein concentration was determined using a BCA Protein Assay kit (Beyotime Institute of Biotechnology). For western blot analyses, 20 µg total protein was electrophoresed on a 10% SDS gel, transferred onto polyvinylidene difluoride membranes, blocked with 5% (w/v) non-fat dry milk in Tris-buffered saline with 0.1% Tween-20 for 1 h at room temperature, and incubated with anti-ErbB4 (cat. no. sc-283; 1:500; Santa Cruz Biotechnology, Inc., Dallas, TX, USA) and anti-β-actin (cat. no. sc-7210; 1:200; Santa Cruz Biotechnology, Inc.) primary antibodies at 4˚C for 12 h. A corresponding bovine anti-rabbit IgG horseradish peroxidase-conjugated secondary antibody (cat. no. sc-2370; 1:1,000; Santa Cruz Biotechnology, Inc.) was subsequently applied at room temperature for 2 h. Following chemiluminescence reactions with an enhanced chemiluminescence detection reagent kit (GE Healthcare, Chicago, IL, USA), according to the manufacturer's protocol, the membranes were visualized by exposure to X-ray film in the dark. Densitometric analysis was performed using Scion Image software (Scion Corporation, Frederick, MD, USA).
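The relative quantification step mentioned in the RT-qPCR paragraph above follows the standard 2^-ΔΔCq (Livak) calculation; a minimal sketch of that formula, with made-up Cq values, is shown below.

```python
# Minimal sketch of the 2^-ΔΔCq relative quantification (Livak method),
# normalized to GAPDH. The Cq values used in the example are invented.
def relative_expression(cq_target_treated, cq_ref_treated,
                        cq_target_control, cq_ref_control):
    """Return the fold change of the target gene in treated vs. control cells."""
    delta_cq_treated = cq_target_treated - cq_ref_treated    # ΔCq, treated
    delta_cq_control = cq_target_control - cq_ref_control    # ΔCq, control
    delta_delta_cq = delta_cq_treated - delta_cq_control     # ΔΔCq
    return 2 ** (-delta_delta_cq)

# Example: the target Cq drops by ~1.5 cycles relative to GAPDH after treatment,
# corresponding to an ~2.8-fold increase in expression.
print(relative_expression(26.5, 18.0, 28.0, 18.0))  # ~2.83
```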
Anti-miR design and transfection. The miR-302b inhibitor (A) and the miR inhibitor negative control (NC) were purchased from AngRang Inc. (Xi'an, China). The sequence of the miR-302b inhibitor was 5'-CTA CTA AAA CAT GGA AGC ACT TA-3'. Cells were seeded onto 24-well plates at a concentration of 1x10^5 cells/well. RNA oligonucleotide transfection (50 nM) was performed with Lipofectamine 2000 (Invitrogen; Thermo Fisher Scientific, Inc.) according to the manufacturer's protocol. The medium was replaced with fresh growth medium (RPMI-1640) 6 h after transfection, and the cells were harvested for analysis 48 h after transfection.
Statistical analysis. Data are expressed as the mean ± standard error of the mean from ≥3 separate experiments performed in triplicate. Differences among groups were assessed by one-way analysis of variance (followed by the Student-Newman-Keuls test) using SPSS 13.0 (SPSS, Inc., Chicago, IL, USA). P<0.05 was considered to indicate a statistically significant difference.
Results
Effect of fentanyl on cell proliferation, apoptosis and invasion. The present study initially investigated the effects of fentanyl on cell proliferation, apoptosis and invasion. The Eca109 and TE1 cell lines were cultured in the presence of various concentrations (0.5, 5, 50 and 500 ng/ml) of fentanyl and cell proliferation was measured using MTT assays. As demonstrated in Fig. 1A and B, the proliferation of the Eca109 and TE1 cells was inhibited by fentanyl in a dose- and time-dependent manner. Fentanyl significantly inhibited cell proliferation at 48 and 72 h. In order to further quantify cell death, Annexin V/PI analysis was performed. Following exposure to fentanyl for 48 h, Eca109 and TE1 cells exhibited an increased rate of apoptosis (Fig. 1C and D). The cell invasion assay also revealed that fentanyl significantly suppressed invasion in a concentration-dependent manner (Fig. 2A-C). Concentrations of fentanyl ≥5 ng/ml exhibited a significant inhibitory effect on cell proliferation and invasion. Therefore, a concentration of 5 ng/ml was selected for the subsequent experiments.
miR-302b is involved in the effect of fentanyl on ESCC behaviors. It was previously revealed that miR-302b suppressed proliferation by inducing apoptosis and repressed the invasion of ESCC cells through targeting ErbB4 (13). The present study further investigated whether miR-302b is also involved in the effect of fentanyl on the biological behaviors of ESCC. An miR-302b inhibitor (anti-miR-302b) was used to block miR-302b expression in ESCC cells, with the results demonstrating that fentanyl increased the expression of miR-302b, and that anti-miR-302b reversed this upregulation of miR-302b (Fig. 3A and B).
Subsequently, the effects of altered miR-302b expression on the anti-proliferation, pro-apoptosis and anti-invasion effects induced by fentanyl in ESCC were detected. It was revealed that downregulation of miR-302b reversed the anti-proliferation (Fig. 3C and D), pro-apoptosis (Fig. 3E and F) and anti-invasion (Fig. 4A-C) effects of fentanyl in the two ESCC cell lines.
Fentanyl upregulated the expression of miR-302b in ESCC cells. The present study analyzed the effects of fentanyl on the expression levels of miR-302b. As demonstrated in Fig. 5, following treatment with fentanyl for 48 h, the expression level of miR-302b increased significantly in the two ESCC cell lines in a dose- (Fig. 5A and B) and time- (Fig. 5C and D) dependent manner, according to the RT-qPCR results.
Fentanyl downregulated the expression of ErbB4, but this effect was reversed by anti-miR-302b transfection in ESCC cells. ErbB4 is one of the downstream targets of miR-302b (14); therefore, we hypothesized that fentanyl modified the behaviors of ESCC cells through suppressing ErbB4. As demonstrated in Fig. 6, fentanyl decreased the expression of ErbB4 at the transcriptional (Fig. 6A) and translational (Fig. 6B) levels in a dose-dependent manner. However, the suppressive effect of fentanyl on ErbB4 expression was reversed by anti-miR-302b transfection (Fig. 6A and B), demonstrating the effect of miR-302b on the ability of fentanyl to inhibit the activation of ErbB4.
Figure 1. Effects of fentanyl stimulation on cell proliferation and apoptosis. Cells were incubated with increasing concentrations (0, 0.5, 5, 50 and 500 ng/ml) of fentanyl. Fentanyl inhibited the proliferation of (A) Eca109 or (B) TE1 cells in a time- and dose-dependent manner. Apoptosis analysis using flow cytometry demonstrated that fentanyl promoted the apoptosis of (C) Eca109 and (D) TE1 cells. All these results confirmed that fentanyl (at a concentration ≥5 ng/ml) significantly inhibited proliferation and induced apoptosis in the two cell lines. Data are presented as the mean ± standard deviation of three independent experiments. *P<0.05 vs. the group with the previous (lower) dose of fentanyl. A, absorbance.
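For readers who want to see how readings like those summarized in Fig. 1 are typically reduced to "percent of control", the following sketch normalizes hypothetical A570 values to the untreated (0 ng/ml) wells for each time point; the numbers are placeholders, not data from the study.

```python
# Illustrative normalization of MTT absorbance (A570) to the untreated control.
import numpy as np

doses_ng_ml = [0, 0.5, 5, 50, 500]
# rows: 24, 48, 72 h; columns: doses (mean A570 of replicate wells; made-up values)
a570 = np.array([
    [0.82, 0.81, 0.78, 0.74, 0.70],
    [1.10, 1.05, 0.88, 0.74, 0.62],
    [1.45, 1.36, 1.02, 0.80, 0.60],
])

relative_viability = a570 / a570[:, [0]] * 100  # percent of the 0 ng/ml control
for row, hours in zip(relative_viability, (24, 48, 72)):
    pairs = ", ".join(f"{d} ng/ml: {v:.0f}%" for d, v in zip(doses_ng_ml, row))
    print(f"{hours} h -> {pairs}")
```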
Discussion
Cancer is a major public health issue in the majority of countries, including China. Cancer is often treated by chemotherapy, immunotherapy, radiation and surgery. Anesthesia serves an important role in surgery, ensuring the safety and comfort of patients during procedures (16). However, numerous anesthetic agents are used without knowledge of their effects on cancer. Recently, it has been suggested that certain anesthetic drugs may modify malignant biological behaviors, including proliferation, angiogenesis and apoptosis, in certain cancer cells (17). Nevertheless, the possible role of anesthetic drugs in cancer development and progression remains unclear.
Fentanyl is widely used in the clinic as an anesthetic, particularly in surgery for different types of cancer, including ESCC (18). Previous studies have reported the potential antitumor effects of fentanyl, but there are limited reports regarding its role in ESCC (6-10). The results of the present study suggested that the mechanisms of the anti-proliferation and anti-invasion effects of fentanyl were associated with miR-302b expression in the ESCC Eca109 and TE1 cell lines. Accompanied by the changes in malignant biological behaviors, the expression of miR-302b was elevated by fentanyl treatment. Furthermore, ErbB4 was targeted and inhibited by the increased miR-302b expression. Notably, the application of anti-miR-302b impaired the anti-proliferation and anti-invasion effects of fentanyl. These results supported our hypothesis that fentanyl inhibited the proliferation and invasion of ESCC cells by stimulating the expression of miR-302b which, in turn, downregulated the expression of ErbB4.
miR-302b is a member of the miR-302 cluster, which regulates the regulatory circuitry controlling embryonic stem cell 'stemness' (19). Previously, it was revealed that miR-302b acts as a tumor suppressor by post-transcriptionally regulating different types of oncogenes. miR-302b was able to inhibit proliferation (20,21), induce apoptosis (22,23) and enhance chemotherapy sensitivity (24,25). A previous study revealed that miR-302b was a potential molecular marker of ESCC and that it acts as a tumor suppressor by targeting ErbB4 (13).
ErbB4, one of the potential therapeutic targets in ESCC, is a member of the ErbB/HER subfamily, which regulates cellular proliferation, differentiation and programmed cell death (26). Xu et al (27) revealed that extra-nuclear ErbB4 had negative effects on the progression of ESCC, while the nuclear translocation of ErbB4 exhibited a tumor-promoting property. Zhao et al (28) reported that ErbB4 serves as a potential molecular target in the treatment of ESCC. Therefore, activation of ErbB4 may promote proliferation and invasion in ESCC. The results of the present study demonstrated that, accompanied by the elevation of miR-302b in ESCC cells, fentanyl treatment also downregulated the expression of ErbB4, thereby inhibiting proliferation and invasion. Furthermore, fentanyl failed to downregulate the expression of ErbB4 in cells transfected with anti-miR-302b, which also impaired the inhibitory effects of fentanyl against the proliferation and invasion of these cells, indicating that these effects are mediated through miR-302b.
In summary, to the best of our knowledge, the present study is the first to identify the involvement of miR-302b in the anti-proliferation and anti-invasion effects of fentanyl in ESCC. Based on the results of the present study, it was concluded that fentanyl inhibited the proliferation and invasion of ESCC cells by elevating the expression of miR-302b and, in turn, suppressing the activation of ErbB4. However, the manner in which fentanyl treatment regulates miR-302b expression in ESCC cells remains unknown and requires further study.
Figure 2. Effects of fentanyl stimulation on cell invasion. Cells were incubated with increasing concentrations (0, 0.5, 5, 50 and 500 ng/ml) of fentanyl. (A) The effect of fentanyl on cell invasion was detected using a Transwell assay. The cell invasion assay revealed that fentanyl significantly reversed the invasion of (B) Eca109 and (C) TE1 cells. All these results confirmed that fentanyl (at a concentration of ≥5 ng/ml) significantly reversed invasion. Values are presented as the mean ± standard deviation of three independent experiments. *P<0.05 vs. the group treated with the previous (lower) dose of fentanyl.
Figure 3. Effects of downregulation of miR-302b on the inhibition of cell proliferation induced by fentanyl. Expression of miR-302b in (A) Eca109 or (B) TE1 cells treated with anti-miR-302b and/or fentanyl (5 ng/ml) was detected by quantitative polymerase chain reaction. MTT was used to assess the effects of downregulation of miR-302b on the proliferation (induced by 5 ng/ml fentanyl) of (C) Eca109 and (D) TE1 cells, and flow cytometry was used to assess the effects of downregulation of miR-302b on the apoptosis (induced by 5 ng/ml fentanyl) of (E) Eca109 and (F) TE1 cells. Downregulation of miR-302b reversed the anti-proliferation and pro-apoptosis effects induced by fentanyl in Eca109 and TE1 cells. Data are presented as the mean ± standard deviation of three independent experiments. ΔP<0.05 vs. 0; #P<0.05 vs. 0 or NC; *P<0.05 vs. F+NC. miR, microRNA; 0, cells without fentanyl exposure; F, cells treated with fentanyl (5 ng/ml); NC, negative control of anti-miR-302b; A, cells treated with anti-miR-302b.
Figure 4. Effects of downregulation of miR-302b on the cell invasion inhibited by fentanyl. (A) Transwell assays were used to assess the effects of downregulation of miR-302b on the invasion inhibited by fentanyl (5 ng/ml). Downregulation of miR-302b reversed the anti-invasion effect induced by fentanyl in (B) Eca109 and (C) TE1 cells. Data are represented as the mean ± standard deviation of three independent experiments. #P<0.05 vs. 0; *P<0.05 vs. F+NC. miR, microRNA; 0, cells without fentanyl exposure; F, cells treated with fentanyl (5 ng/ml); NC, negative control of anti-miR-302b; A, cells treated with anti-miR-302b.
Figure 5. Effects of fentanyl on the expression of miR-302b in Eca109 and TE1 cells. (A) Eca109 and (B) TE1 cells were treated with different concentrations (0, 0.5, 5, 50 and 500 ng/ml) of fentanyl for 48 h. (C) Eca109 and (D) TE1 cells were treated with fentanyl (5 ng/ml) for different times (0, 12, 24, 36 and 48 h). Quantitative polymerase chain reaction was used to evaluate the expression of miR-302b. Fentanyl upregulated the expression of miR-302b in the two cell lines in a dose- and time-dependent manner. Representative results from three independent experiments are shown. *P<0.05 or **P<0.01 vs. the group treated with the previous (lower) dose of fentanyl.
Twelve complete chloroplast genomes of wild peanuts: great genetic resources and a better understanding of Arachis phylogeny Background The cultivated peanut (Arachis hypogaea) is one of the most important oilseed crops worldwide, however, its improvement is restricted by its narrow genetic base. The highly variable wild peanut species, especially within Sect. Arachis, may serve as a rich genetic source of favorable alleles to peanut improvement; Sect. Arachis is the biggest taxonomic section within genus Arachis and its members also include the cultivated peanut. In order to make good use of these wild resources, the genetic bases and the relationships of the Arachis species need first to be better understood. Results Here, in this study, we have sequenced and/or assembled twelve Arachis complete chloroplast (cp) genomes (eleven from Sect. Arachis). These cp genome sequences enriched the published Arachis cp genome data. From the twelve acquired cp genomes, substantial genetic variation (1368 SNDs, 311 indels) has been identified, which, together with 69 SSR loci that have been identified from the same data set, will provide powerful tools for future explorations. Phylogenetic analyses in our study have grouped the Sect. Arachis species into two major lineages (I & II), this result together with reports from many earlier studies show that lineage II is dominated by AA genome species that are mostly perennial, while lineage I includes species that have more diverse genome types and are mostly annual/biennial. Moreover, the cultivated peanuts and A. monticola that are the only tetraploid (AABB) species within Arachis are nested within the AA genome species-dominated lineage, this result together with the maternal inheritance of chloroplast indicate a maternal origin of the two tetraploid species from an AA genome species. Conclusion In summary, we have acquired sequences of twelve complete Arachis cp genomes, which have not only helped us better understand how the cultivated peanut and its close wild relatives are related, but also provided us with rich genetic resources that may hold great potentials for future peanut breeding. rich source of genetic variation among the closely related wild relatives within the Arachis genus that may be very useful for broadening the genetic basis of the cultivated peanut [2,4,[12][13][14][15]. So far, over 80 species have been identified within the Arachis genus [2], which were arranged into nine taxonomic sections (including Sect. Arachis, Sect. Erectoides and Sect. Procumbentes) by Krapovickas and Gregory [16]; the cultivated peanut belongs to Sect. Arachis. Many useful resistances to a number of diseases (e.g. early leaf spot, late leaf spot, peanut rust and rosette disease) and pests (such as nematodes, armyworm and corn earworm) that can cause serious yield loss in the cultivated peanut have been identified from the wild Sect. Arachis species [2]. For example, accessions of A. duranensis Krapov. & W.C. Gregory and A. cardenasii Krapov. & W.C. Gregory that belong to Sect. Arachis have been found to be resistant to twelve or more different diseases/pests, representing two of the richest sources of novel resistant alleles for cultivated peanut [2,17]. To more efficiently make use of these rich genetic resources for peanut breeding, a better understanding of the genomes and phylogenetic relationships of the species within Arachis is a prerequisite. 
As mentioned above, there are nine taxonomic sections within the genus Arachis that are arranged based on morphology, crossability, cytogenetics, hybrid viability and geographic distribution [2,4,16,18,19], however, this grouping has been challenged by molecular phylogenetic studies [20,21]. Among the nine Arachis sections, Sect. Arachis that includes the cultivated peanut is the largest, the most diverse and the most derived one: it includes more than one third of the Arachis species, and harbors both annual and perennial species that may also differ in chromosome number, ploidy level and genome type [1,4]. Many attempts to infer the phylogenetic relationship among Arachis species have been made, though incongruence between markers and studies is very common [8,10,11,14,20,[22][23][24][25][26]. The rapidly developing high-throughput sequencing technology may provide us with a good chance to make use of the chloroplast (cp) genome data for helping improve the situation [27], due to several advantages of the cp genome data in phylogenetic analyses. First, cp genome data harbors many different gene loci and non-coding regions that contain relatively large amount of DNA sequence information, this would not only boost the resolving power of phylogenetic inference, but also dramatically reduce stochastic errors that are associated with limited sequence information of single genes in traditional phylogeny construction [28,29]. Second, as being maternally inherited, cp genome in Arachis may provide the best tool for inferring the maternal origin of the cultivated peanut. Third, the haploid nature of the cp genome severely restrains the occurrence of non-homologous recombination events, which will make cp genome suffer less from recombination in phylogenetic analysis [30,31]. Finally, the cp genome has a relatively small size compared to nuclear genome, which means that it's relatively cheap and easy to sequence and analyze [27]. Apart from the advantages in phylogenetic analysis as mentioned above, the cp genome is also useful for developing DNA barcodes that can be helpful, for example, in distinguishing taxa, as well as for cp genetic engineering that transfers foreign genes into cp genomes [27]. Comparing to nuclear transgenic plants, chloroplast genetic engineering has several advantages, including a high level of transgene expression and no escape of transgenes through pollen [27]. Now there are already plenty of successful cases of cp genetic engineering that have been performed [27]. Moreover, cp genes have also been found to possibly contribute to host plants' resistance to environmental stresses [32,33]. Although the cp genome is very useful, there are, however, still a very limited number of cp genomes available for the Arachis species so far. The first two Arachis complete cp genomes that have been sequenced are from the cultivated peanut, A. hypogeae, and were reported rather recently by Schwarz et al. [34] and Prabhudas et al. [35]. After that, Yin et al. [21] acquired the complete cp genome sequences from seven different Arachis species including the domesticated peanut, while Wang et al. [36] explored the complete cp genome sequences of four A. hypogeae botanical varieties. To sum it up, there are only thirteen Arachis complelete cp genomes from seven different species that have been sequenced so far. In this study, we have assembled a total of twelve complete cp genomes of Arachis species, among which eleven are from Sect. Arachis while the last one is from Sect. 
Erectoides, and these data represent a rich source of genetic variation that may hold great potential for peanut improvement. These sequences together with earlier published Arachis cp genome data have helped us better understand the Arachis cp genomes and the phylogenetic relationships among species within and among Arachis sections, as well as give more information about the wild maternal origin of the cultivated peanut. Basic characteristics of the acquired Arachis chloroplast genomes A total of twelve Arachis species that belong to Sect. (Table 1). In comparison to nuclear genome, cp genomes of land plants have highly-conserved circular DNA molecules with two inverted repeat (IR) regions (IRa and IRb) (identical but in opposite orientations) that were separated by small (SSC) and large (LSC) single copy (SC) regions [38]. The twelve cp genomes assembled within this study had this typical quadripartite structure (Figs. 1 and 2), and with total lengths varying from 156,287 bp (A. stenosperma) to 156,491 bp (A. cardenasii) (Additional file 1), which were similar to those earlier published Arachis cp genomes [21,[34][35][36] As with earlier reports about Arachis cp genomes [21,[34][35][36], a total of 110 unique genes were found in each of the twelve assembled cp genomes, including four ribosomal RNA (rRNA) genes, 76 protein-coding genes and 30 transfer RNA (tRNA) genes (Additional files 1 and 2). The gene order of these 110 genes was the same for all the twelve assembled cp genomes (Fig. 1), and was also in line with all published Arachis cp genomes so far [21,[34][35][36]. In addition, six of the identified tRNA genes, and eleven of the protein-coding genes contained introns (Additional file 2). Similar to earlier studies [34,35], the overall GC contents of the twelve assembled cp genomes were 36.3-36.4% and the GC contents were not evenly distributed among the different genome regions: IRs (42.9-43.0%) had higher GC content than LSC (33.8%) and SSC (29.9-30.3%) (Additional file 1). The high GC content of IRs was mostly due to a gene region (including the rrn23, trnA-UGC and trnl-GAU genes) that stands out in its GC content comparing to the rest of the cp genome (Fig. 2). Genetic variation and SSRs Among the twelve cp genomes that were assembled in the present study, a total of 1368 single nucleotide divergences (SNDs) (0.87%) were identified, and most of these SNDs were distributed within the LSC region (959, constituting 1.11% of the LSC sequence), but the SSC region contained the highest proportion of SNDs (299, 1.58%), while the SND content of the IRa/b regions was the lowest (55 each, 0.21%) ( Table 2, Fig. 2). The cp genome region with the highest density of SNDs was located at the intergenic spacer between the psbE and petL genes within LSC, where 47 SNDs were found within 500 bp (Table 3). There were totally 311 insertions/deletions (indels) (0.20%) that had been found within the twelve cp genomes, and > 90% of which were short indels (1-10 bp) ( Table 2). The distribution of these indels among the four cp genome regions was very similar to that of SNDs: LSC had the largest number (241, 0.27%), but SSC held the highest density (64, 0.34%), while the IR regions was the lowest in both number and density (6 per region, 0.02%) ( Table 2, Fig. 2). 
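The window-based densities referred to above (e.g. the 500-bp block between psbE and petL containing 47 SNDs) can be computed with a simple binning of variant positions; the sketch below assumes a plain two-column "region, position" export of the variant calls, which is an assumption about the file format rather than the study's actual pipeline.

```python
# Sketch of 500-bp window counts for SNDs and indels along the cp genome.
from collections import Counter

WINDOW = 500  # bp per block, matching the 500-bp blocks used for Tables 3 and 4

def load_positions(path):
    """Read 1-based variant positions from a two-column 'region<TAB>position' file."""
    with open(path) as handle:
        return [int(line.split()[1]) for line in handle if line.strip()]

def window_counts(positions, window=WINDOW):
    """Count variants per consecutive window along the genome."""
    return Counter((pos - 1) // window for pos in positions)

snd_windows = window_counts(load_positions("snds.tsv"))      # hypothetical SND export
indel_windows = window_counts(load_positions("indels.tsv"))  # hypothetical indel export

densest = max(snd_windows, key=snd_windows.get)
print(f"Densest SND window: {densest * WINDOW + 1}-{(densest + 1) * WINDOW} bp "
      f"({snd_windows[densest]} SNDs)")

# Table 4 criterion: more than 5 indels per 500 bp marks an indel-rich region
for win, n in sorted(indel_windows.items()):
    if n > 5:
        print(f"Indel-rich window: {win * WINDOW + 1}-{(win + 1) * WINDOW} bp ({n} indels)")
```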
The cp genome regions with the highest density of indels were found at two 500 bp-long sequence blocks: one included the intergenic spacer between the trnQ-UUG and accD genes within LSC, while the other was within SSC and was composed of the intergenic spacer between the rpl32 and trnL-UAG genes, the trnL-UAG gene as well as the intergenic spacer between the trnL-UAG and ccsA genes. Within both regions, eleven indels were detected, respectively ( Table 4). The result of the VISTA analysis is consistent with the results of SNDs and indels: almost the entire IR regions were conserved, while the identified conserved regions within SCs were short and scattered (Table 4). Although the IR regions were rather conserved, the IR boundaries could vary greatly even within species [39], so in order to detect any possible IR border polymorphism, we compared the four IR-SC borders among the twelve assembled genomes, but no difference was found at the IRa-SSC border, while at the LSC-IRa, SSC-IRb and IRb-LSC borders, only small differences were discovered from A. ipaënsis, A. cardenasii and A. helodes (Fig. 3). The rps19 gene at the IRa-LSC boundary expanded 9 bp from the LSC region to the IRa side in A. cardenasii while it stops at the LSC-IRa junction in the rest of the species (Fig. 3). The length of the ycf1 gene (in SSC) at the SSC-IRb boundary was 4805 bp for A. ipaënsis/A. helodes and 4778 bp for A. cardenasii, which were shorter than those in the other species that were all 4811 bp (Fig. 3). On one side of the IRb-LSC boundary, the spacer between the rpl2 gene (in IRb) and the IRb-LSC junction was 69 bp long for A. cardenasii while this spacer for the rest of the species had a length of 60 bp. On the other side of the IRb-LSC boundary, the lengths of the spacers between the IRb-LSC junction and the trnH-GUG gene (in LSC) were, respectively, 61 bp and 71 bp for A. cardenasii and A. helodes while those of the rest species were all 64 bp (Fig. 3). With MISA analysis, 69 universal SSR loci were detected within the twelve assembled cp genomes, among which 59 were mononucleotide, nine were dinucleotide and one was tetranucleotide. The majority of the identified SSRs were composed of A or T, which was consistent with earlier observations about cp SSRs in other taxa [40,41]. 41 of the identified SSR loci showed variation among the twelve acquired Arachis cp genomes, and most of them (75.6%) were located in the LSC region, followed by SSC (24.4%), and none was found within IR (Additional file 3). PCR primers have been designed for each of the 41 variable SSR loci (Additional file 3). Phylogeny of the studied Arachis species Two different phylogenetic methods (Maximum Likelihood and Bayesian Inference) were used to infer the phylogenetic relationships between the analyzed species, and both methods generated nearly identical trees, we therefore only showed the Bayesian Inference phylogenetic tree ( Fig. 4) (the Maximum Likelihood trees are available on request). Considering the much slower substitution rate of the IR regions that may not be suitable for inferring the relationship between closely related Sect. Arachis species, we therefore constructed phylogeny both with and without them. In addition, phylogenetic analyses of species differing in ploidy levels might produce unusual results comparing to those only involving species with the same ploidy level [1,20], we thus tried to exclude the two tetraploid species (A. hypogeae and A. 
monticola) and only used the diploid species to infer phylogenetic trees with whole genome data. Furthermore, indels were not considered in the abovementioned phylogenetic analyses; however, information embedded within indels might help improve the resolution of phylogeny for closely related species [42]. We therefore performed Bayesian inferences of phylogeny that took indel information from the entire cp genome into consideration. However, among all the different situations we considered, the arrangement of major lineages and sublineages in the acquired phylogenetic trees always remained consistent (Figs. 4, 5, 6 and 7). Below, we only present the results from the Bayesian inference tree that was based on the whole cp genome data (excluding the indel information).
A total of 16 Arachis genomes (the twelve genomes assembled in this study plus four earlier published A. hypogaea genomes) were included in this part of the phylogenetic analyses, and they fell into two well-supported major lineages (I, II) in the inferred phylogenetic trees (bootstrap [BS] value: 100; Bayesian posterior probability [BPP]: 1.0) (Fig. 4). Lineage I was composed of one BB genome species (A. ipaënsis) and two AA genome species (A. cardenasii and A. helodes), while lineage II comprised the rest of the species (AA genome species: A. duranensis, A. hoehnei, A. chacoensis, A. villosa, A. stenosperma and A. correntina; AABB genome species: four A. hypogaea varieties and A. monticola; KK genome species: A. batizocoi; EE genome species: A. paraguariensis) (Fig. 4). Our molecular dating analysis implied that these two major lineages split 0.818 million years ago (Mya) (Fig. 8); this divergence time was shorter than the divergence time estimated between A. duranensis (in the present study belonging to lineage II) and A. ipaënsis (in the present study belonging to lineage I) (2.16 Mya) by Bertioli et al. [43], which was perhaps not surprising considering that the latter was based on data from the bi-parentally inherited nuclear genome while the present study used cp genome (uniparentally inherited) data. The two lineages did not differ much in their GC contents: 35.13% for lineage I and 35.09% for lineage II.
Within lineage I, the two AA genome species (A. cardenasii and A. helodes) grouped closely together (BS: 100; BPP: 1.0), and their most recent common ancestor was dated back to 0.3086 Mya (Figs. 4 and 8). In contrast, the split between these two AA species and the BB genome species (A. ipaënsis) was much earlier: 0.6718 Mya (Fig. 8). The time to the most recent common ancestor of lineage II was 0.2917 Mya, and the species within this lineage diverged rapidly (Fig. 8). Within lineage II, two distinct sublineages can be recognized: the first sublineage (BS: 100; BPP: 1.0) was composed of two AABB genome species (A. hypogaea and A. monticola) plus one AA genome species (A. chacoensis), and the most recent common ancestor of this sublineage was dated back to 0.1412 Mya (Figs. 4 and 8). Within this first sublineage, the two AABB genome species (the cultivated peanut and A. monticola) formed a highly supported clade (BS: 100; BPP: 1.0). The second sublineage comprised the remaining lineage II taxa, including the KK genome species (A. batizocoi) and the EE genome species (A. paraguariensis). The split between these two sublineages might have occurred around 0.2078 Mya (Fig. 8).
Fig. 2 The GC content, the SND density, the indel density, and the conserved regions are shown from the inner to outer rings. The outermost rectangles are cp genes belonging to different functional groups that are color-coded. The high-density areas are highlighted in orange for GC content (>53.06%), SNDs (>8.39%) and indels (>5.34%). Boxplots for the GC contents and the SND and indel densities are also shown, with bars representing the median, the bottom and top of the boxes representing, respectively, the 25 and 75% percentiles, and whiskers extending out to 1.5 times the interquartile range. Dots are outliers.
Table 4 The genome areas identified to be rich in indels among the twelve acquired chloroplast genomes (genome areas that have more than 5 indels per 500 bp are considered to be rich in indels).
Discussion
The close wild relatives have been contributing valuable genetic resources to many economically important crops [44-47]. This study successfully acquired the complete cp genomes of twelve Arachis species that are closely related to the cultivated peanut (A. hypogaea), which is one of the most important oilseed crops worldwide. The rich genetic resources associated with the twelve Arachis cp genome data sets may hold great potential for future peanut cultivar improvement. As in other land plants [48-50], the Arachis cp genome is a single circular molecule that displays a quadripartite structure (LSC, IRa, SSC, IRb). Moreover, the genome size, gene composition and order, GC content, and IR-SC boundaries are also rather conserved among the Arachis cp genomes acquired in this study (Additional file 1) and among the thirteen earlier published Arachis cp genomes [21,34-36]. Nevertheless, substantial genetic variation (1368 SNDs and 311 indels) has been identified among the twelve acquired cp genomes, which, together with the 69 SSR loci that have been detected in the present study, may serve as useful tools for future studies.
The highly conserved IR regions
Among the four cp genome regions, the two IR regions are highly conserved compared to the two SC regions, as reflected by the fact that less than 8% of the SNDs, indels and SSRs identified in this study are located within the IR regions, even though IRa and IRb constitute about one third of the genome (Fig. 1, Table 2). This low level of genetic variation in the IR regions (compared to the SCs) is very common among plant species [51-53]. One possible explanation is that, for the cp genome that exists as multiple copies within single plant cells, gene conversion with a slight bias against new mutations would much more efficiently reduce the mutation load at the two IR (inverted repeat) regions than at the SC (single copy) regions, due to the duplicative nature of the IRs [51,52,54-56]. Such conversion bias might arise from the base preference of the mismatch repair system during gene conversion [54,57,58].
Two major lineages within Sect. Arachis
There are two main genome types that have been identified within Sect. Arachis: AA (chromosome number: 2n = 20) and BB (2n = 20), which are also very important because the tetraploid cultivated peanut has a complex genome (AABB; 4n = 40) that is composed of both of them [2]. Apart from AA, BB and AABB, Sect. Arachis species may also have a genome type of DD (2n = 20), FF (2n = 20), KK (2n = 20) or aneuploidy (2n = 18). Our phylogenetic analyses have shown that the analyzed Sect. Arachis taxa fall into two major lineages: the first one (lineage I) includes A. ipaënsis, which is the only BB genome species that has been studied here, as well as
two AA genome species (Fig. 4). The second lineage (lineage II) comprises taxa that include two AABB genome species, six AA genome species, and one KK genome species. These two major lineages observed within Sect. Arachis are very well supported by bootstrap values (Fig. 4); without the tetraploid A. hypogaea and A. monticola unifying them, they might well be distinguished as separate taxonomic sections [1,20]. Our divergence time estimation shows that the most recent common ancestor of these two major lineages is dated back to 0.818 Mya, and from this common ancestor lineage I was derived first (at least 0.6718 Mya), while lineage II was derived rather recently (0.2917 Mya) and rapidly (Fig. 8). The cp genome size and structure, the gene content and order, and the GC content are well conserved between the two lineages. However, the IR-SC border regions of the lineage I species cp genomes were found to vary in length and to differ from those of the lineage II species, which are all identical (Fig. 3).
Fig. 3 Detailed view of the IR-SC border regions among the twelve studied Arachis species. Regions that differ from the majority are highlighted in grey boxes; according to our phylogenetic analyses, these all belong to species within lineage I.
The presence of two major Sect. Arachis lineages is confirmed by nuclear genome data generated by a genotyping-by-sequencing (GBS) approach (unpublished observations). Similar species groupings (into two major lineages) within Sect. Arachis have also been observed in a number of other studies (Additional file 4) [11, 14, 20-22, 24, 25, 59-62]. These studies may work on different Arachis species and use very different methods, genetic markers or sequence data, but lineage II or its equivalent, in almost all of these studies, is dominated by the AA genome species and tends to exclude genome types other than AA and AABB (Additional file 4). In contrast, lineage I or its equivalent is more diverse and can take on any genome type (AA, BB, AABB, DD, FF, KK or aneuploidy) that has been identified within Sect. Arachis, but slightly prefers the BB genome species (Additional file 4). Interestingly, the AA genome species within Sect. Arachis are all perennial (except A. duranensis, A. stenosperma and A. hoehnei), while all the other genome types (e.g. BB, DD, FF, KK and AABB) belong to annual/biennial species (Sect. Arachis is one of only two Arachis sections that have annual species; the other one is Sect. Heteranthae) [4]. The genome type compositions of annual/biennial and perennial species may be consistent with the earlier finding that annual species generally have a much higher molecular evolutionary rate than perennial species [63-65]. Moreover, in the place of origin of Arachis, South America, the distribution areas of annual/biennial species have relatively more diverse environmental conditions than those of perennial species. In South America, the overall geographic distribution of Sect. Arachis species has a peculiar shape that resembles a capital T [4]. The perennial Sect. Arachis species prevail along the vertical axis of this T-shaped range, which more or less coincides with the meridians 57° and 58° west and mainly includes the watersheds of the Paraguay and Uruguay rivers [4].
The annual/biennial species, in contrast, dominate the two arms of the "T"-shaped geographic distribution: the Tocantins river to the east, the Mamoré river and the Guaporé river to the west, as well as the dry "charco" region (up to the foothills of the Andes) to the southwest [4]. In these arm regions, the annual/biennial species are usually adapted to very stressful environmental conditions, such as prolonged inundation and periodic drought [4].
Although the presence of two major lineages within Sect. Arachis and the overall pattern of the species composition of these lineages are evident, to which lineage a given Sect. Arachis species should be assigned is not always clear. For example, according to the present study, the KK genome species A. batizocoi falls with most of the analyzed AA genome species into lineage II; however, A. batizocoi has been found to appear in the lineage I-equivalent clade in three earlier chloroplast phylogenetic studies [14,20,21]. Another example is that the AA genome species A. cardenasii and A. helodes, which fall into lineage I in the present study, have often been found in the lineage II-equivalent clade in earlier studies [11,20,21,60]. Moreover, A. paraguariensis (belonging to Sect. Erectoides), which is the only non-Sect. Arachis species acquired by the present study, was originally chosen as an outgroup but turned out to be closely related to A. batizocoi (KK), A. hoehnei (AA) and A. duranensis (AA) within lineage II (Fig. 4).
Fig. 4 Bayesian inference tree for Arachis based on the entire cp genome. The tree is rooted with Stylosanthes viscosa. Two major lineages (I and II) have been observed among the Sect. Arachis species, and within lineage II, two sublineages (1 and 2) can also be recognized. For each node, the bootstrap support value is shown on the left and the Bayesian posterior probability on the right. The genome type of each analyzed Arachis species is also given.
It is worth noting that similar species mixing between Arachis sections is not uncommon [11,20,21,24,66]. For example, Yin et al. [21] studied seven Arachis species, five of which belong to Sect. Arachis while the other two are members of Sect. Procumbentes; instead of grouping together, the two Sect. Procumbentes species were, respectively, nested within different Sect. Arachis species groups. These incongruences in species relationships may be the result of several different factors. First, a high level of genetic variation has been reported within different Arachis species (especially the wild ones) [4-10,67]. This high level of intraspecific genetic variation may be at least partly due to the autogamous reproductive system and the underground fruiting habit of the Arachis species, which can seriously restrain interspecific and intraspecific gene flow [4]. Therefore, different samples of the same species may have distinct genetic constitutions (such as for A. duranensis, see below), and phylogenetic inference based on these different samples is then likely to result in very different species relationships; for this reason, more representative samples from each species need to be considered in future phylogenetic studies. Second, Arachis species are not always completely incompatible with each other, and hybridization even between different genome types or sections has been well documented [2,14,15,59]; the interspecific hybrids that are possibly produced in nature may well blur the species boundaries.
Third, considering the rather recent divergence of the Sect. Arachis species (<1 Mya, Fig. 8), ancestral polymorphism is also likely to be maintained within extant species and consequently to complicate phylogenetic inference. In addition, differences in analysis method (such as UPGMA and maximum likelihood), in genetic data type (AFLP, RFLP, DNA sequence etc.), in the amount of genetic information that is considered (single genes or genome data), as well as in the inheritance mode (bi-parental or uni-parental) of the acquired data set will all have an impact on the inference of species relationships [29].
Fig. 5 Bayesian inference tree for Arachis based on the two single copy genome regions. The tree is rooted with Stylosanthes viscosa. For each node, the bootstrap support value is shown on the left and the Bayesian posterior probability on the right. The genome type of each analyzed Arachis species is also given. A. chacoensis is now known as A. diogoi.
The maternal origin of the cultivated peanut (A. hypogaea)
Between the two major Sect. Arachis lineages observed in the phylogenetic trees inferred by the present study, it is lineage II that the cultivated peanut falls into, and within this group, the four considered A. hypogaea varieties mix together with A. monticola. This result is not in conflict with previous views that A. monticola may be the direct wild ancestor of the cultivated peanut [11] or an introgressive derivative between the cultivated peanut and wild Arachis species [68,69]. These earlier views are based on a combination of different lines of evidence. For example, these two species are the only tetraploid species within Sect. Arachis and they usually group together in phylogenetic analyses [2,26,66]. Moreover, the geographic distributions of these two species are close to each other [70] and hybridization between them has been well documented [14,16]. From the discussion above, we already know that lineage II or its equivalent is dominated by AA genome species and particularly "disfavors" BB genome species, and this is especially true for the phylogenetic trees inferred from cp sequences, as reflected by the present and three earlier studies [14,20,21]. In all of these four studies, the cultivated peanut is nested within lineage II or its equivalent. Considering that the chloroplast genome is maternally inherited in Arachis, as shown by an earlier study in which the F1 hybrids between two Arachis species always grouped together with their maternal parents in a phylogenetic tree built from chloroplast DNA [14], our result suggests that the maternal genome donor to the tetraploid cultivated peanut (AABB) and the tetraploid wild peanut species (A. monticola; AABB) is an AA genome species [14,20,71]; in other words, the A genome of A. hypogaea and A. monticola is contributed by their maternal progenitor. Currently, the most popular view regarding exactly which species served as the genome donors to A. hypogaea and A. monticola is that A. duranensis (AA) and A. ipaënsis (BB) contributed, respectively, the A and B genomes. This generally accepted opinion is supported by evidence from genome type, geographic distribution, crossability, cytogenetic analysis, molecular analysis, phylogenetic analysis and genome sequence comparison [11,14,43,60,70-72]. However, in the present study, A. hypogaea and A. monticola group closely with neither A. duranensis nor A. ipaënsis in the inferred phylogenetic tree (Fig. 4);
instead, these two tetraploid species form a well-supported subgroup with A. chacoensis (AA). Our result alone, however, does not allow us to conclude that A. chacoensis, rather than A. duranensis, served as the A genome donor to A. hypogaea and A. monticola, for a combination of reasons. First of all, A. duranensis has a relatively wide geographical distribution and considerable intraspecific variation has been reported within this species [8,22,71,72]; if different samples with distinct genetic makeups are used for phylogenetic inference, the result may be very different. Next, A. duranensis has been shown to be able to hybridize with other Arachis species [14], leading to interspecific gene flow that may also blur species boundaries. Finally, although chloroplast genomes have many advantages in phylogenetic analysis, as mentioned in the Introduction, they are relatively vulnerable to problems such as introgression and the retention of ancestral polymorphism, which are frequently encountered when inferring the phylogenetic relationships between closely related species, due to their maternal inheritance mode [73,74]. Conclusions. The highly variable wild peanut species may serve as a rich source of useful alleles for the improvement of the cultivated peanut, which is one of the most important oilseed crops in the world. The present study has acquired the complete cp genome sequences of twelve Arachis species, eleven of which belong to Sect. Arachis; the cultivated peanut is also a member of Sect. Arachis. As in other land plant species [48-50], the cp genome size and structure, as well as gene content and order, are highly conserved among the twelve acquired cp genomes. Nevertheless, substantial SNDs, indels and SSRs have been identified from the acquired genomes, and most of these SNDs, indels and SSRs are distributed in the two single copy genome regions (LSC and SSC). The two inverted repeat genome regions (IRa and IRb) have a very low level of genetic variation, which may be due to biased gene conversion. Phylogenetic analyses of the acquired genomes have identified two major lineages (I and II) within Sect. Arachis. Our results, together with many earlier studies, show that lineage II is dominated by AA genome species that are mostly perennial, while the genome types of the lineage I species are rather diverse; Sect. Arachis species with genome types other than AA are all annual/biennial [4]. In addition, the tetraploid cultivated peanut falls within lineage II, which, together with the maternal inheritance mode of the chloroplast, suggests that an AA genome species served as the maternal donor of the cultivated peanut. In summary, the twelve cp genomes acquired in the present study have not only helped us better understand the genetic basis and phylogenetic relationships of the Arachis species, but have also provided substantial genetic resources that may be valuable for future peanut improvement. Plant material and genome sequencing. A total of twelve Arachis species, eleven belonging to Sect. Arachis and one to Sect. Erectoides (A. paraguariensis), were sampled and sequenced in the present study. Genome assembly and annotation. The Illumina sequencing generated > 1 Gb of raw paired-end reads for each sample, and these data were deposited into the NCBI Sequence Read Archive (SRA) (BioProject Accession No. PRJNA543570). These raw paired-end reads, plus one raw read dataset for A.
ipaënsis downloaded from the NCBI SRA database (accession number: SRX2701518) [37], were analyzed using the NGS QC ToolKit v2.3.3 to filter out low-quality data and remove adaptor sequences after quality checks [75]; the cut-off values for the percentage of read length and phred score were set to 80 and 30, respectively. In total, 305,336-3,503,151 high-quality reads were acquired per sample, which produced 293-3362 fold cp genome coverage when mapped onto a reference cp genome from A. hypogaea (GenBank [76] accession number KX257487, [35]) using bowtie [77] (Additional file 1). The high-quality reads (obtained in the previous step) that belong to the cp genome were extracted and assembled into contigs using the de novo assembler SPAdes v3.9.0 [78] (with several different k-mer sizes: 93, 105, 117 and 121). These assembled contigs were further assembled into complete cp genomes by NOVOPlasty v2.6.2, which has been designed specifically for assembling organelle genomes [79]. The complete cp genomes were annotated using the DOGMA tool with default parameters [80]. Genetic variation analysis. Both SNDs and indels were detected from the mapping results (against the A. hypogaea reference genome) of the bowtie analyses (see above) using GATK [83] (ploidy setting = 1). The VISTA server [84] was used to identify the conserved genome regions. The visualization of the densities of SNDs and indels (i.e. the number of SNDs or indels counted for every consecutive 500 bp block), and of the conserved regions over the entire cp genome, was performed using Circos [85]. Simple sequence repeats (SSRs) were predicted using MISA [86] with default parameters except for the minimum counts of the repeat unit within single SSR motifs: (10) mono-, (6) di-, (5) tri-, (4) tetra-, (3) penta-, and (3) hexanucleotide repeat units [21]. All identified SSRs were manually checked and redundant results were removed. Phylogenetic analysis and divergence time estimation. To better understand the species relationships within Arachis, especially within Sect. Arachis, phylogenetic analyses were performed on the complete cp genome data. Apart from the twelve cp genomes acquired in the present study, four earlier published cp genomes from different A. hypogaea botanical varieties (GenBank accession no. MG814006 for var. fastigiata Waldron, MG814007 for var. hirsuta Kohler, MG814008 for var. hypogaea L., MG814009 for var. vulgaris Harz) [36] were also included in the analyses, so in total 16 complete cp genomes from thirteen Arachis species were considered for the phylogenetic inference. In addition, a cp genome of Stylosanthes viscosa L. (GenBank accession no. MG735675) was chosen as an outgroup for the phylogenetic analysis; this cp genome showed the highest similarity to Arachis species [87] among the cp genomes available at the time the analysis was performed. Before being used for the phylogenetic analyses, the 17 cp genomes were aligned with the HomBlocks pipeline (with default settings unless specified) [88], which is fast and efficient especially for handling large amounts of divergent interspecies sequence data and was therefore suitable for overcoming the alignment difficulties that may be introduced by the relatively distantly related outgroup species.
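To make the SSR search criteria concrete, the following minimal Python sketch detects perfect SSRs using the same minimum repeat-unit counts described above (10 mono-, 6 di-, 5 tri-, 4 tetra-, 3 penta- and 3 hexanucleotide units). It is an illustration of the thresholds only, not the MISA tool itself, and the example sequence is purely hypothetical.

# Minimal sketch of MISA-style perfect SSR detection on a chloroplast genome sequence.
# Illustrates the repeat-unit thresholds used in the study; it is not the MISA tool.
import re

# Minimum number of repeat units per motif length (mono- to hexanucleotide),
# matching the settings described in the text.
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 4, 5: 3, 6: 3}

def find_ssrs(seq):
    """Return a list of (start, end, motif, n_repeats) for perfect SSRs in seq (1-based coordinates)."""
    ssrs = []
    for motif_len, min_n in MIN_REPEATS.items():
        # e.g. for motif_len=2, min_n=6: a 2-bp motif repeated at least 6 times in a row
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_n - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            # skip motifs that are themselves repeats of a shorter unit (e.g. "ATAT" or "AA")
            if any(motif == motif[:k] * (motif_len // k)
                   for k in range(1, motif_len) if motif_len % k == 0):
                continue
            n = len(m.group(0)) // motif_len
            ssrs.append((m.start() + 1, m.end(), motif, n))
    return ssrs

if __name__ == "__main__":
    # toy example; in practice the assembled cp genome sequence would be read from a FASTA file
    example = "G" * 3 + "A" * 12 + "CCT" + "AGAT" * 5 + "GCCTT"
    for start, end, motif, n in sorted(find_ssrs(example)):
        print(f"{motif} x {n} at {start}-{end}")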
Within the HomBlocks pipeline, progressiveMauve [89] was first used to identify conserved genome regions, based on which a preliminary alignment was constructed; the alignment was then trimmed by Gblocks [90] to remove poorly aligned and divergent regions. The phylogenetic analyses were first carried out using the maximum likelihood method as implemented in IQ-TREE [91]. Ten independent searches were performed, and the statistical confidence in each predicted node was evaluated with 10,000 non-parametric bootstrap replicates. MrBayes v3.2.5 [92] was then used to perform Bayesian inference of phylogeny via the Markov Chain Monte Carlo method [92]. We ran the inferences for 100,000 Markov Chain Monte Carlo generations, with a sampling frequency of 1000 generations. Samples from the first 25% of generations were discarded as burn-in and the rest were used to build a 50% majority-rule consensus tree. Land plant cp genomes are characterized by four typical regions: two IR regions and two SC regions (see Results) [38]. The IR regions have been shown to have a much lower nucleotide substitution rate compared with the SC regions [52,54,93], and so might not be suitable for inferring the phylogeny of the closely related Sect. Arachis species analyzed in the present study. We therefore also reconstructed the phylogeny using the same methods as above but excluding the IR regions. Phylogenetic analyses of species differing in ploidy level might produce unusual results compared with those involving only species of the same ploidy level [1,20]. To test whether this was the case in our study, we then excluded the two tetraploid species (A. hypogaea and A. monticola) and used only the diploid species for inferring phylogenetic trees with the whole-genome data. Moreover, indels were not considered in the above-mentioned phylogenetic analyses; however, information embedded within indels might help improve the resolution of the phylogeny for recently diverged species [42]. We therefore performed a final phylogenetic analysis that took the 311 indels observed in our Arachis whole-genome data into consideration. In this step, the indels were first converted into binary data with the simple indel coding method [94] using the SeqState software [95]. The acquired binary data, together with the ordinary nucleotide substitution information, were input into MrBayes as mixed data for Bayesian inference of the Arachis phylogeny. Finally, the software BEAST v1.7.2 [96] was used to estimate the divergence times among the different Arachis species. The estimated divergence time between the genera Stylosanthes and Arachis from Saslis-Lagoudakis et al. [97] was used as a calibration point.
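To illustrate the indel-coding step, the following simplified Python sketch shows the idea behind simple indel coding: each distinct gap run in the alignment becomes one binary presence/absence character, and gaps nested within larger gaps are scored as ambiguous. This is a schematic sketch under those simplifying assumptions, not the SeqState implementation, and the toy alignment and taxon names are hypothetical.

# Simplified sketch of simple indel coding (each shared gap run becomes one binary character:
# 1 = indel present, 0 = absent, ? = nested within a larger gap and therefore ambiguous).
# Illustrates the idea behind the SeqState step described in the text; terminal-gap handling
# and partially overlapping gaps are not treated as rigorously as in the full method.
import re

def gap_runs(seq):
    """Return the set of (start, end) positions of gap runs ('-') in an aligned sequence."""
    return {(m.start(), m.end()) for m in re.finditer(r"-+", seq)}

def simple_indel_coding(alignment):
    """alignment: dict taxon -> aligned sequence (equal lengths). Returns dict taxon -> binary string."""
    per_taxon = {taxon: gap_runs(seq) for taxon, seq in alignment.items()}
    # the characters are all distinct gap runs observed anywhere in the alignment
    characters = sorted(set().union(*per_taxon.values()))
    coded = {}
    for taxon, runs in per_taxon.items():
        row = []
        for start, end in characters:
            if (start, end) in runs:
                row.append("1")          # this taxon shares exactly this indel
            elif any(s <= start and end <= e and (s, e) != (start, end) for s, e in runs):
                row.append("?")          # indel lies inside a larger gap of this taxon: ambiguous
            else:
                row.append("0")          # region present in this taxon
        coded[taxon] = "".join(row)
    return coded

if __name__ == "__main__":
    # hypothetical toy alignment; the real input would be the aligned cp genome matrix
    aln = {
        "A_duranensis": "ATGC----ATTGCA",
        "A_ipaensis":   "ATGCATTA----CA",
        "A_hypogaea":   "ATGC----AT--CA",
    }
    for taxon, row in simple_indel_coding(aln).items():
        print(taxon, row)

The resulting 0/1/? matrix could then be appended to the nucleotide alignment as a separate data block for a mixed-data Bayesian analysis of the kind described above.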
2019-11-19T15:36:14.834Z
2019-11-19T00:00:00.000
{ "year": 2019, "sha1": "52c2810aa4e9d77eedd64fe20af59ebabfcd55c9", "oa_license": "CCBY", "oa_url": "https://bmcplantbiol.biomedcentral.com/track/pdf/10.1186/s12870-019-2121-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "52c2810aa4e9d77eedd64fe20af59ebabfcd55c9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
258444191
pes2o/s2orc
v3-fos-license
Voices are much more than a symptom The international Hearing Voices Movement, which emerged in the 1980s, understands the phenomenon of hearing voices not just as a symptom, but helps in the development of strategies to deal with these voices. The objective of this study was to understand how people who participated in groups of voice hearers in the Brazilian health system dealt with such experiences. This was a qualitative research study, carried out in 2020, with data collection from in-depth interviews and field diary, analyzed using content analysis. Adult participants (4) attended the group for more than a month. It was possible to explore the individual coping strategies developed from the experiences of each one with their voices. The group was also an instrument of socialization and, from the normalization of the experience, of greater self-acceptance and reduction of stigma. Introduction Hearing voices that other people do not hear is a permanent phenomenon in humanity. In the last century, however, this experience has been constantly associated with diagnoses of mental disorders 1,2 . Contradicting this comprehensive paradigm, the International Hearing Voices Movement (IHVM) arose in the 1980s, from the collaboration between people who hear voices and Dutch health professionals and researchers. Since then, the movement has gained strength and seeks autonomy, respect, and quality of life for people who experience this phenomenon. The movement also believes in the need to combat the prejudices that involve the phenomenon of hearing voices and encourages support for people who need it 1,3,4 . One important form of action of the movement is the creation of groups of voicehearers, where people who go through such situations can meet and share experiences. These groups are spaces to validate feelings and act in favor of the individuals' belonging 5,6 . By investing in the subject and fighting against silencing and isolation, IHVM can be seen as a deinstitutionalization strategy, with a strong aspect of fighting against stigma and prejudice 5 . Studies have already pointed out that the voice-listening groups are a resource to facilitate the recovery process, with improvement in the hearing of voices, as well as in social, emotional, and clinical aspects. Such results are evidenced both in groups that take place in the community, within mental health services, as well as virtually. Those who participated in these strategies began to better understand their own dynamics with the voices, as well as present a sense of belonging, identification of reasons for certain behaviors and exchanged information on how to deal with this experience with other participants [7][8][9] . In the latest World Health Organization 10 (WHO) Guide on Mental Health, actions that encourage recovery and community participation are indicated and valued, as well as the creation of person-centered strategies that respect human rights. Among them, hearers' groups are cited as models to be followed 10 . Therefore, it is of interest to the field of Collective Health and the Brazilian Psychiatric Reform to put in place strategies based on this paradigm 11 , because they see the individuals, their histories, and their contexts in their integrality. Recovery is a concept that emerged in the 1970s and was established as a movement and model to be followed, from experiences and exchanges among users of mental health services. 
It is related to a change in paradigm, in which the notion of cure and remission of symptoms is overcome, and the individual comes to be seen with more hope and with a focus on quality of life 11 . This paradigm asserts that the individual who suffers psychically can recover while remaining a citizen, seeking life in the community, regardless of whether or not the symptom has been removed. It also encompasses the fields of education, work, and leisure 12 . The IHVM, in line with the Recovery Movement, also advocates that people can develop skills to deal with voices, in order to maintain autonomy. These skills can be developed throughout life, and they are also techniques that can be shared among people who attend the groups 13 . Additionally, understanding the subjective meaning of the voices may be important to guide coping strategies, because an approach that proposes to better understand the subject will have greater meaning and acceptability, increasing the chances of these tools being used in everyday life 14 . Some studies in Brazil, based on voice-hearing groups, have already listed several coping strategies for this experience when it becomes a source of suffering. Attitudes such as ignoring the voices, talking to them, questioning what they say, being distracted by other activities, or talking to family members about them have proven effective 15,16 . They end up composing an important repertoire to foster a clinic centered on the citizens' experience and not only on their diagnoses. Therefore, in this paper, we also investigate the coping strategies of voice hearers. The present work is part of a larger research project and aims to understand how people who participated in voice-hearing groups in the Brazilian National Health System (SUS) dealt with the experience of hearing voices. Material and methods This is a qualitative study carried out in the municipality of Curitiba-PR that used in-depth interviews and field diary preparation via participant observation as the main methodological tools. This type of research takes into account the singularities of each individual and seeks to understand experiences and representations 14,17 of the lived experience. This methodology presupposes greater proximity between researcher and field, adapting well to the health context when there is a need to understand the processes that surround it 17 . The research was carried out in a Psychosocial Care Center (CAPS) in Curitiba, where the first author was also a worker. This specific CAPS is a service that assists people in emotional distress, with or without substance abuse 18 . At the time of the research, this healthcare center provided care to a population of 142,577 inhabitants, divided into 12 neighborhoods 19 . It was a CAPS that did not have the capacity to receive people at night, operating from 7 AM to 7 PM and offering multidisciplinary clinical care 18 . Angrosino 20 says that it is in the interest of the qualitative researcher to have access to the experiences as they occur in their natural context, also performing participant observation. As a form of data collection, the author also indicates the field diary, which was kept by the researchers. These data were collected throughout 10 meetings of a voice-hearing group, in which the researchers participated in the aforementioned CAPS, between January and March 2020. The group had been running before and independently of the research.
The researchers initially planned a longer period of participation in the group; however, due to the COVID-19 pandemic, participation was limited to the period mentioned above. The field diary entries were made the day after the group meetings. The general characteristics of the participants were recorded, as well as their speeches and their behaviors, in chronological order, as recommended by Angrosino 20 . In addition, in-depth interviews were carried out between September and October 2020 with the group participants. Recruitment privileged participants over 18 years old who were in treatment at the CAPS and who had regularly attended the aforementioned Voice Hearers group for at least 1 month. The Voice Hearers group had 10 fixed participants, who attended weekly and met to discuss the subject of voice hearing. The group took place once a week, lasting one hour. The participants had the leading role in the group; that is, its functioning privileged horizontality, according to which all those who participated had their history and voice respected. The group operates as an exchange of experiences, in which one participant helps another based on his or her own experience. Before any recruitment, the research was presented to the CAPS team and the users. Those who accepted received the Consent Form (CF), which was read, and their doubts were clarified. Afterwards, interviews were scheduled with each participant who accepted and signed the CF. All interviews were audio-recorded and transcribed in full. In the transcription, all names were replaced, and information that could lead to identification was suppressed or changed to preserve confidentiality. The guiding questions used in the interview aimed to understand the following aspects: the history of the subject in relation to the voices, including his/her interpretation and beliefs regarding the phenomenon; the ways he/she uses to deal with the voices, and what can help and hinder this process; and, finally, participation in the voice hearers group and how this experience had been for them. More questions were asked in the course of the interview, in order to deepen and better understand what the interviewees reported, bringing information from different angles directly involved with the lived experience 21 . This specific group of voice hearers was chosen for the research because it followed the recommendations of the IHVM regarding the principles of the experience approach: (1) hearing voices is a human experience that can be understood as natural, explained differently and understood in the context of life; (2) hearers are encouraged to take ownership of their experience, seeking acceptance and understanding; and, finally, (3) the support of people going through similar experiences can help in the process of recovery 22 . The data obtained from the interviews and the field diary were analyzed using content analysis. This methodology adapts well to the style of research carried out, because it recognizes that the subject's speech is embedded in a context, and that this subject is also active in the construction of knowledge 23 . We followed the analysis procedures recommended by Franco 23 and Campos 24 . It began with pre-analysis, the first contact with the transcripts, followed by multiple readings of the material, identifying key points of the text.
After an exhaustive reading, the units of analysis were selected according to relevant themes, also taking into account previous knowledge in the area and the research objectives. This was followed by categorization and subcategorization, grouping meanings closest to each other to create analysis categories. Finally, in the inference and interpretation phase, assumptions were made and meaning was sought in the results, in relation to the international literature. Results and discussion Interviews were conducted with 4 participants, and one of them was re-interviewed, because in his first interview some data were scarce due to his more introverted profile. This totaled 5 interviews (Frame 1). The interviews showed that the voices have an impact on the subjects' lives, and that the participants needed to establish ways to deal with them to be able to go on with their daily activities. The descriptions of how they deal with these situations, frequent in their daily lives, were varied. The actions that these people use to deal with the situations, based on their experience and experimentation, can be called coping mechanisms. A study that reviewed the state of knowledge of research focused on the diagnosis of schizophrenia and hallucinatory symptoms identified that the coping mechanisms used can be aimed at reducing the voices, or at improving the feelings associated with them 25 . Moreover, the interviewees said that their reactions varied according to the experience: some voices aroused one type of reaction, while others triggered other responses. When they saw or heard people with whom they had or have an affectionate relationship, they tended to react more receptively and to engage and interact with them more easily. On the other hand, when the voices were experiences with unknown people, perceived as invasive and intense, the reactions were more defensive and less interactive, aiming at self-protection. As Frida puts it in this excerpt: That it is a constant learning too, yes. It depends on the person that it is and what that person represents, what that vision represents, you will deal differently. It's my brother, for example, I'll talk about my mother, you know? (Frida) Marius Romme and Sandra Escher 1 surveyed data on people's relationship with voices, demonstrating the complexity and importance of life histories. The authors started several conversations, including at a congress, with people who hear voices, to better understand this phenomenon, its relationship with life history and how people outside the psychiatric circuit deal with this issue, giving for the first time a voice to those who hear voices and putting them at the center of the debate, overcoming the notion of "symptom". From this, the IHVM says a lot about the importance of investigating the tools for coping with voices, because they tend to be articulated with the subjectivity of the subject, as well as with his or her beliefs about life 1 . For these reasons, a strategy that works for one person may not work for another: strategies are unique 16,26-28 . The techniques used can be behavioral or cognitive actions. These strategies can be one-time, for short-term use, or more ingrained, for the long term. Some are for practical benefit, such as lowering the voices, and others are for emotional benefit, such as minimizing negative emotions 29 . Knudson, Coyle 16 also described the existence of social and sensory strategies.
In the research findings, it was possible to find all these examples of strategies. It is also valid to understand that the strategies employed in practice based on the personal experience of the situation have great relevance because they place the subjects in a leading role, valuing their own way of building resources to alleviate suffering 16 . In this research, for example, Virginia said she could deal with the voices based on the understanding that she didn't need to respond to their demands, understanding that she could make judgments and choices. According to her, the voices told her to do bad things, which she interpreted as demonic: I thought: I'm not going to do that, because it's the voices that are telling me to do it. It's a bad thing, so I didn't do it. (Virgínia) Similarly, another participant in the group reported that in order to get out of the persecutory state, he used logical reasoning. He was able to think that if he didn't do anything to be persecuted, then it must not really be happening, and these thoughts allowed him to reassure himself. These strategies are consistent with what has been described in other research, according to which regaining control over the situation, with the person putting himself in the center, can help in the recovery process 27 . With this, the individual leaves a passive posture and becomes active in his life. This discussion about rationality and decision making also appeared in the research results of other authors 25 . The author observed that in voice-listening groups there is a lot of discussion about the fact that, although the voices guide commands, it is the hearers who could actually carry them out -thus placing the possibility of the person issuing a value on the action and making the decision to carry it out or not. It is also worth noting that within this strategy there is the understanding that the subject and the voices are separate, valuing once again individuality 15 . This process, however, is not simple, and must be tied to individual beliefs, once more respecting the subject's understanding of themselves and their voices -because some subjects may perceive these voices as different from themselves, just as others may have the impression that they are part of themselves. Other participants said that talking or being close to someone can help. For Frida, being close to her mother was a source of relief. So, I would stay, I would run into my mother's room, get on my knees, hold the blanket, put it under my head and stay until dawn. So, there were these things like that, you know? I don't know how to explain it. There it was like a comfort, there near my mother, it seemed that nothing happened near my mother. (Frida) Likewise, other participants also cited that talking to someone, thus changing the focus of attention, was something beneficial. Such strategies were also found in other research on the same phenomenon, which indicate that individuals who hear voices seek other people to talk, feel safe or distract themselves 13,15,30 . Chun, Tsun 28 says that this strategy is related to shifting the focus from voices to social interaction, which is shown to be protective with respect to other negative feelings. Another strategy cited by Frida, when she is having unpleasant bodily sensations (bugs running under the skin), is to make use of the prescribed medications, which, she says, help her in her daily life. 
It is worth remembering that, to be in accordance with the paradigm of recovery, it is important that we increasingly value the subject's unique experience with medication, creating conditions for the user to have a voice and a place as a citizen 31 . The conscious use of medication can also be combined with other strategies, because one does not necessarily exclude the other 32 . Within the voice-hearing groups, there is even room to talk about medication, its therapeutic and adverse effects, as well as tips on how to deal with such effects 32 . Medication as a support possibility is also pointed out in the research of Kantorski et al. 25 and Corradi-Webster, Leão, Rufato 30 , who tell that the use of these medications may not eliminate the voices, but help keep calm to deal with the situation. Frida also mentioned that something that helped her was trusting the professionals' guidelines and putting them into practice. It can be affirmed that a good bond with the professionals tends to predispose to an improvement in relation to the feelings, and, within this perspective, it is important that the professionals value the experience of the living, perform the person-centered care, and share the decision making 16 . The interviewees also mentioned pleasurable activities that required concentration or creativity, such as manual or body practices. The performance of activities can help because the person who is listening to voices can divert their focus from them, and may feel less invaded 13,25,30 . We painted things, we painted dishcloths, we made vases, everything with the therapist down there. It helped me more, I didn't keep hearing voices, coming home more motivated, right? (Virgínia) I have to breathe, I learned that now, breathe in, breathe out, hold on, count to six and slowly let go. I learned this and it helped me a lot" (Frida) Knudson, Coyle 13 also found the use of relaxation and meditation as a coping strategy. In specific research on mindfulness as a support for those who hear voices, the participants had a reduction in negative sensations as a result of the voices 33 . Bispo said that he always goes out for a walk, stating that when he walks, he feels that "the despair" coming from the voices he hears passes. This strategy is also related to the findings of Knudson & Coyle 13 , who in their literature review article found researches that indicate physical activities as a tool to deal with the voices, because they increase mental and physical excitement, and may help in the process of minimizing distress. Participants also reported that interaction with the voices may often be necessary, either to understand what the voices want, what they are saying, or to try to somehow stop them, or even "beat them through exhaustion" (Salvador). Salvador described a very elaborate way of dealing with the phenomenon. He seems to use creative resources and a mode of interaction that is as if it takes place in another world, to which he has access, and other people do not. Unlike reports in which people describe what they experience as "being from another world", intruding into ours, for Salvador he actively accesses this other world. And in this world things happen that are beyond his control, but he is able to act in this world using the power of his imagination. Not only to respond to voices and establish a cordial relationship. Sometimes setting specific times to talk to them, but also attacking them and creating things in this world, being able to influence it. 
No such degree of interaction has been found in the specialized literature. I also imagined destruction from the west, like atomic bombs, acid rain, very cold, to see if I could kill these hallucinations. They reacted, with withdrawal and so on. It's just that this West is a different world. It's not here. It is in the hallucinatory world. [...] Well, you have to try not to pay attention, if it is impossible, pay attention and try to pass intelligence to them. There are some personalities, some hallucinations that have intelligence, so you can talk to them. Ah, there is one that came during a storm and settled there in the West, and he wanted to talk a lot; then in the morning I would take two hours to talk to him, where I would say what he thought, then I would make him talk in the hallucinatory world, and everybody listened. I didn't use a real voice; it was a hallucinatory voice... I don't know whose it was. (Salvador) Other researchers corroborate these findings: in a group in Italy, they observed an orientation towards changing the relationship with the voices, talking to them, trying to understand and control them, starting from the assumption of overcoming fear and taking a leading role in one's own life 5 . Some authors encourage contact with the voices in a systematic way to establish a cordial relationship with them. Talking to the voices and trying to understand them helps the person have more control of the situation, because the listener can decide what he really wants to pay attention to and choose what he considers beneficial in his life. This interaction with the voices can also contribute to a greater understanding of these experiences, articulated with the person's history 1,3,27 . Additionally, in the work of Romme & Escher 1 , the authors wrote about the possibility of some strategies involving structured contact with the voices, not only through conversations, but also through performing bodily or mental actions, as well as performing rituals or rites. These authors realized that it was more beneficial for people to understand the phenomenon of voices and visions, accepting them and investigating the relationship of the voices with their life history, than to interpret them only as a pathology or something to be suppressed. Regarding the techniques that involve direct contact with the voices, listening to them and selecting parts of the dialogue, Mcnally, Goldberg 34 investigated, through qualitative research with people diagnosed with schizophrenia, possible cognitive strategies to deal with the voices, and found the following: creating internal dialogues with one's own voice, seeking rationalization and negotiation; trying to apply one's will over that of the voices; being self-consoling and self-assertive; and, finally, using humor, that is, laughing at oneself. Finally, another strategy mentioned was religiosity. It was described as a point of support and understanding, either to ask for help at the necessary moment or to offer an explanation, besides being described as an important space to frequent and socialize. From the field diary records, Evangelicals, Catholics, and Umbandists were identified. It was also noticed that spirituality becomes a possible explanation for extrasensory phenomena, and can help in the process of self-understanding.
A young man in the group, for example, saw some boys running around in his backyard and identified them as spirits; he understood that he was having a mystical experience, and this gave greater meaning to his experience, because it refers to something explainable and collective. Other participants also believed that they were in contact with the world of the dead, having experiences with dead people, or, as Frida explained, that "people in white" visit her and are probably beings from other worlds. In the studies of Romme, Escher 1 , mystical and religious explanations are usually a possible way to understand these phenomena. Mccarthy-Jones, Waegeli, Watkins 35 indicate that religiosity may not only offer a path of response to the phenomena, but also community-wide support. However, these same authors also discuss that some people may feel oppressed and pressured by religion due to explanations that may make the person feel bad for hearing voices 35 . Religion can also reinforce bad feelings and stigma, particularly when it attaches a negative causal relationship to the experiences of the voice hearers, e.g., possession, deviance, or lack of faith, blaming people rather than helping them. Finally, voice-hearer groups were described as spaces of safety and belonging, and as a time to gain insight into the experience of hearing voices. Participants described that the group helps them come out of isolation, because, customarily, the experience of hearing voices is very lonely. The group also emerged as a place of hope, as people with more experience and more coping strategies can share their knowledge with people who are still at the beginning of their journey. We can state, then, that voice-hearer groups can also be regarded as a support strategy. This finding is consistent with the literature, according to which participation in a group can also generate new strategies, through the exchange of knowledge and experiences 30 . The groups are also spaces where, in addition to new strategies, there may be sharing that helps in the re-signification of the experience, as well as in possibilities of recovery 7 . This research has the limitation of having been conducted with a very specific group, which was in treatment due to psychological suffering in a specialized service of the public health network that usually serves people with severe, complex and/or chronic conditions. It is possible that, because of this, most of the experiences involving hearing voices were described as negative. Another limitation was the Covid-19 pandemic itself. The group had to be suspended in 2020, as Brazil was experiencing the beginning of the rise in cases, which postponed the interviews; they were only conducted 6 months after the closure of the group. Thus, it was not possible to conclude whether the coping strategies used to deal with the voices came directly from participation in the group. The pandemic itself reduced the number of participants at the end of the study. Furthermore, the scope of this research does not allow us to fully cover the existing strategies, nor to extrapolate them to all people who hear voices. Concluding remarks With this research, we were able to access several strategies to deal with voices, visions and bodily sensations considered strange or bizarre. People who experience them use such strategies in their daily lives, including complex modes of interaction that tend to be left aside or little explored in health care.
These ways of dealing with such phenomena go far beyond the classic treatment approaches, and, as they become known, they can be stimulated, developed over time, and shared. They are: reflecting and making one's own choices; using rationality; communicating with someone you trust; prescribed medication; relying on professionals; activities that involve concentration and creativity; interacting with the voices; and religiosity. At the same time, participation in voice-hearing groups ensures social interaction and reinforces the perception of not being alone. The encouragement and development of voice-hearing groups can allow people to hold what they experience in a protected way, suspending, even if temporarily, social stigma within these spaces, so that they function as places of learning and exchange of experiences, respecting the singularities of each individual while helping to maintain hope, protagonism, mutual support and collective construction. The diversity of people participating, with their varied experiences, coping strategies, life stories, and re-significations, can provide examples and bring hope for a better life for those who are facing such phenomena. Such an approach broadens the classic explanatory paradigm of psychiatry itself, which sees the phenomenon as just a symptom to be controlled. This experience-based perspective, on the contrary, allows the structuring of a solidarity-based, community network that does not depend on health services in order to happen.
2023-05-03T15:05:30.091Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "48ff9b0f5f5785e154329ba6ae3e61326813756d", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/icse/a/36Njdr5VnRfB7Vs86YCptcK/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "97dd7971efdb2ed7288c0e0cb572f5eba5e1d71b", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
4552203
pes2o/s2orc
v3-fos-license
Roles played by community cadres to support retention in PMTCT Option B+ in four African countries: a qualitative rapid appraisal. Objectives To explore the roles of community cadres in improving access to and retention in care for PMTCT (prevent mother-to-child transmission of HIV) services in the context of PMTCT Option B+ treatment scale-up in high-burden low-income and lower-middle income countries. Design/Methods Qualitative rapid appraisal study design using semistructured in-depth interviews and focus group discussions (FGDs) between 8 June and 31 July 2015. Setting and participants Interviews were conducted in the offices of Ministry of Health staff, implementing partners, district offices and health facility sites across four low-income and lower-middle income countries: Cote D'Ivoire, Democratic Republic of Congo (DRC), Malawi and Uganda. A range of individual interviews and FGDs were held with key stakeholders including Ministry of Health employees, implementing partners, district management teams, facility-based health workers and community cadres. A total of 18, 28, 31 and 83 individual interviews were conducted in Malawi, Cote d'Ivoire, DRC and Uganda, respectively. A total of 15, 9, 10 and 16 mixed-gender FGDs were undertaken in Malawi, Cote d'Ivoire, DRC and Uganda, respectively. Results Community cadres either operated solely in the community, worked from health centres or in combination, and their mandates were PMTCT-specific or included general HIV support and other health issues. Community cadres included volunteers, those supported by implementing partners or those employed directly by the Ministry of Health. Their complementary roles along the continuum of HIV care and treatment include demand creation, household mapping of pregnant and lactating women, linkage to care, infant follow-up and adherence and retention support. Conclusions Community cadres provide an integral link between communities and health facilities, supporting overstretched health workers in HIV client support and follow-up. However, their role in health systems is neither standardised nor systematic and there is an urgent need to invest in the standardisation of and support to community cadres to maximise potential health impacts.
Strengths and limitations of the study: ► Inclusion of four diverse countries in Southern, Central and West Africa, at different stages of implementation of PMTCT (prevent mother-to-child transmission of HIV) Option B+. The extent of involvement of community cadres in PMTCT in each country reflects this, with Malawi and Uganda having more integrated and institutionalised approaches compared with the Democratic Republic of Congo and Cote d'Ivoire, which are at an earlier stage of implementation. ► Qualitative data collection was undertaken with a wide range of stakeholders in four diverse countries to capture implementation experiences and key roles and innovations introduced by community cadres operating within a complex health programme. ► A limitation is the field research by rapid appraisal during short country visits. Thus, the impressions presented must be regarded as a snapshot, raising questions for further exploration, particularly regarding the impact of the identified strategies on increasing retention and their potential for scale-up. ► This study could not explore the perceptions of women living with HIV and their families regarding the role of community cadres. These would be important to address in future research, as the perspectives of patients and their families could differ from those of healthcare workers and managers. Introduction. In April 2012, the WHO recommended the use of lifelong triple antiretroviral treatment (ART) for all pregnant and lactating women living with HIV, regardless of CD4 cell count and/or clinical staging (PMTCT Option B+), to prevent mother-to-child transmission of HIV (PMTCT) and to keep mothers healthy. 1 Lifelong treatment for pregnant and breastfeeding women living with HIV has also been advocated as a strategy to reduce transmission to HIV-negative partners. 2 The WHO further states that this approach would strengthen the effectiveness of the PMTCT programme, through improved linkages with ART programmes. 1 3-5 These global recommendations have prompted rapid adoption of Option B+ guidelines across high-burden countries. The WHO identified 22 priority countries encompassing 90% of the world's population living with HIV and comprising 75% of women in need of PMTCT globally. In those predominantly low-income and middle-income African countries, the proportion of women receiving treatment more than doubled between 2009 and 2015. 6 These increases have been largely attributed to the adoption of Option B+, with all priority countries having implemented this approach by 2015. 7
Nonetheless, countries face challenges in reaching scale, while health systems and health workers face ever-increasing, complex demands. Therefore, as more countries endorse lifelong treatment for all individuals living with HIV, health services should implement strategies to ensure good retention in care. Research in Malawi, the first country to implement Option B+, found lower retention in care for pregnant women living with HIV initiated on lifelong ART compared with other adults. 8 Uganda, however, reported similar retention rates at 6 months for pregnant women (88%) and other non-pregnant adults (87%). 9 A recent review of Option B+ roll-out in Malawi 10 demonstrated that while women receiving lifelong ART had a higher risk of dropout during the first 2 years following initiation than other adult cohorts, retention rates were similar as the programme matured. This emphasises the need to focus efforts in the first years of implementation, when women are most likely to be lost to follow-up. While the literature around factors contributing to poor adherence and retention in HIV care is well established, 11-13 evidence around strategies to improve retention is limited. One of the serious constraints to scaling up HIV treatment and care is the critical shortage of health workers. With 3% of the global health workforce 14 and a disproportionate share of people living with HIV, the sub-Saharan African region is increasingly focused on the potential for different community cadres to fill the gap. 15-17 This article presents qualitative findings from a rapid appraisal with the objective of exploring the roles of community cadres in improving access to and retention in care for PMTCT in the context of treatment scale-up. This paper aims to highlight the different cadres and the wide range of activities they perform. Methods. Study design. The research was part of an evaluation of the Optimizing HIV Treatment Access (OHTA) initiative for pregnant and breastfeeding women. The initiative, funded by the governments of Sweden and Norway through Unicef, was undertaken in four countries (Malawi, Uganda, the Democratic Republic of Congo (DRC) and Côte d'Ivoire) between 2013 and 2017 in partnership with several international and local implementing partners (IPs). 18 The OHTA initiative aimed to support the transition to Option B+ for PMTCT in the DRC and Cote d'Ivoire and to optimise delivery and increase demand in Uganda and Malawi. 19 To achieve its aims, OHTA focused, among other objectives, on strengthening community-facility linkages through establishing or strengthening community-based lay health worker cadres. We defined community cadres as any lay health workers (paid or voluntary) who provide care and support for pregnant and breastfeeding women living with HIV and are trained on PMTCT but have received no formal professional or paraprofessional certificate or tertiary education degree (adapted from Lewin et al, 2010). 20 This descriptive qualitative study 21 used rapid appraisal methods 22 to explore the roles of community cadres in improving access to and retention in care for PMTCT services. Rapid appraisal is an approach that draws on multiple data collection methods and techniques to quickly, yet systematically, collect data when time in the field is limited and research findings are needed in a timely manner for decision-makers.
Qualitative methodology was chosen as it allows for direct engagement with participants within their social context; this approach is flexible and adaptive, allowing for probing of key aspects and multi-level factors experienced by the range of stakeholders involved in PMTCT service delivery. 23 Settings and participants. Qualitative data were collected through desk review, individual interviews and focus group discussions (FGDs) during country fieldwork of 12 days per country in the DRC, Cote d'Ivoire, Malawi and Uganda between June and July 2015 (table 1). Sampling and recruitment. In advance of the country visits, potential organisations and individuals for key informant interviews and FGDs were identified through a desk review process and were shared with and amended in collaboration with Unicef headquarters and the Unicef country offices. In compiling the list of potential participants, we gave consideration to gaining as wide a range of opinion as possible, so as to ensure a fair representation of how the implementation of PMTCT Option B+, and particularly community involvement, was experienced in the four settings. The Unicef country teams assisted with prescheduling appointments. Before engaging with participants, we explained in detail who we were, why we were visiting and why we wanted to speak with them. When necessary in Uganda and Malawi, especially with community cadres and their supervisors, we used the services of a translator to explain our research aim and the consenting process, while in the DRC and Cote d'Ivoire, all interviews were conducted in French through a translator. One of the research team members was a French national. Research team. Eight researchers (all women) participated in the study, in teams of 3-4 for each country visit. DB, TD, AG, NR, ED and SR had experience undertaking multicountry evaluations and had worked in the area of PMTCT, but had no prior relationship with any of the participants. Data collection. Semistructured interview guides were developed for each category of respondent (Ministry of Health, IPs, district management teams, facility-based health workers and community cadres). The terms of reference excluded beneficiaries. Each semistructured interview and FGD was conducted by one or more researchers at the interviewees' workplaces and lasted an average of 45 min, with the support of translators. Interviews were audio-recorded where permission was granted, and researchers took notes. Signed informed consent from literate participants and recorded verbal consent from illiterate participants were obtained by the interviewer. Table 1 shows the numbers of interviews undertaken in each country. The total number of interviews undertaken per country was determined by several considerations, including the geographic scope of the OHTA support in each country, regional variations in health services and cultural diversity, and ensuring fair representation of all categories of participants. The number of interviews was largest in Uganda, as the OHTA programme supported all four regions of the country. Data analysis. Audio-recorded interviews and FGDs were translated and transcribed into English, and field notes were summarised. We conducted a simple manifest analysis of the qualitative material 21 24 and analysed the data both deductively and inductively. 25 Deductively, we sought to find answers to predefined questions (eg, what role do community cadres play in delivery of PMTCT Option B+?).
Inductively, we tried to understand what new insights could be gleaned from the interviews and our experiences in the field. The analysis was based on the typed interviews, field notes and desk review material (programme reports, policy documents and country plans). Country teams came together to discuss, compare and critique emerging themes and categories. Data were then grouped (via word processor) into final categories, whose results are reported in narrative form in this paper. Results. Types of community cadres. Interviews identified different community cadres, newly created or strengthened, to support PMTCT services. Table 2 summarises the community cadres involved in the PMTCT response. These community cadres operated either solely in the community, worked from health centres or in combination. Their mandates were PMTCT-specific or ranged across general HIV support and broader health issues. Community cadres included volunteers, ranging from the 'Relais communautaires' of the DRC to the Village Health Teams (VHTs) of Uganda. Others were supported by IPs, such as the mentor mothers in Malawi, Uganda and the DRC, while some were employed directly by the Ministry of Health, such as the Health Surveillance Assistants (HSAs) of Malawi. Some of these cadres, including the mentor mothers in Uganda or expert client/peer supporters in all four countries, were themselves living with HIV and trained to provide counselling, psychosocial support and peer support to their peers. Activities performed by community cadres. Acting as the interface between communities and health services, community cadres created awareness, generated demand for PMTCT services (raising awareness about service availability and the importance of seeking care), and referred and followed up pregnant and lactating women living with HIV in the community, to ensure they received appropriate services and remained in care. Figure 1 illustrates the roles of community cadres across the PMTCT care continuum. Community engagement and awareness raising. 'I am not paid anything. I joined this because I felt that the life of other people was very important to me. When I moved to the villages, my first role was to mobilize (provide information and encouragement) women. When I identified any pregnant women, I mobilized them to come to the hospital so that they test. When I send them, I follow them (make a follow-up home visit) to make sure that they have reached the unit.' (Expert client, Uganda) Cadres including the Agents de Santé Communautaires (ASCs) of Cote d'Ivoire, Relais communautaires of the DRC, and the Expert clients and VHTs and committees of Uganda and Malawi, participated in community dialogues to increase service uptake and retention. 'They (community cadres) have contributed a lot to the health centre…because we as health workers don't have time to go into the community to sensitize them (inform them of HIV treatment and prevention available).' (Facility-based health worker, Cote d'Ivoire) Health workers reported that participating in an open dialogue with community members and getting the buy-in of leaders helped dispel myths and fears around HIV and addressed challenges with stigma. Cadres also encouraged male partner participation in reproductive health and addressed interpersonal barriers to retention, including partner disclosure and domestic violence. 'Male motivators and male study circles conduct door-to-door peer education to encourage fellow men to accompany their wives to ANC, couple HTC, delivery and post-natal checks.
But during meetings organised by chiefs, they also take advantage to provide education on a topic.' (IP, Malawi) Client follow-up and retention in care. 'Many people still don't believe in the HIV/AIDS. They still don't think they need to live, so you find many families are breaking because of HIV/AIDS and so these high levels of stigma is still causing treatment interruptions (because women drop out of care).' (MOH, Uganda) Once clients were initiated into care, community cadres focused on counselling and psychosocial support, including the formation of support groups and key activities to promote positive living and self-efficacy in HIV management. Community cadres who undertook home visits and followed up patients were perceived to play an integral role in this domain. 'There is also a challenge to work with VHTs (Village health teams) with regards to HIV-positive mothers. They don't want VHTs to know their status, especially with retention. Mothers get very angry when VHTs go to do home visits (likely due to fear of HIV stigma).' (Facility interview, Uganda) Communities were often more accepting of these generalist community cadres for broad health promotion activities (such as ANC care and follow-up of mothers postpartum and their children), while HIV-specific follow-up was preferred from peer supporters and lay counsellors. As many of these HIV-specific community cadres were living with HIV, they could share personal coping strategies and demonstrate the positive impact of treatment adherence through their own experiences. 'There is very good retention for Option B+ and also good coverage for HIV testing and that in a way is attributed to the Mentor Mothers. Because these are the people who (have) gone through the experience of PMTCT or Option B+… and are able to share with other women, to help them provide some of the counselling, so that they can get the intended care.' (Malawi, IP) [Figure 1 caption: Conceptual framework of community- and facility-based activities for increased service uptake and improved retention in PMTCT care. The roles of community cadres across the PMTCT care continuum include community engagement activities to sensitise the community around the need to test for HIV and access care; linkage to care, in which community cadres inform the community about where to access services and refer them to care; and adherence strategies to ensure those living with HIV are retained in care. The figure further illustrates the role of community cadres who operate partly out of the health facilities. ANC, Antenatal Care; ART, antiretroviral treatment; EID, Early Infant Diagnosis; HTC, HIV Counselling and Testing; PHC, Primary Health Care; PMTCT, prevent mother-to-child transmission of HIV.] Peer support was a commonly used role for community cadres in all four countries. Through support networks (treatment buddies, peer supporters, mentor mothers, expert clients, support groups), mothers had access to emotional support and motivation and were provided with a platform to share knowledge and experiences. 'So even the peer clients, the peer mothers work with the VHT members, so they can follow-up their colleagues and bring them back. Then healthcare workers… can do physical follow-up but they also have a bit of issues around, you know, going through the community. And the community knows that, oh, they recognise that house, and there is something wrong with that woman there, you know, that kind of thing, yeah.
But mostly the peer, that is where the peer mothers become very successful in following up (to address problems with retention).' (IP, Uganda) Such strategies to improve patient retention recognised the time and cost burdens for patients travelling monthly to facilities for ART refills. In Malawi, HSAs were trained to provide ART refills at rural health posts. In this model, clients obtained refills every 3 months, only visiting the clinic for screening every 6 months. Similarly, community ART distribution points in the DRC were run by People Living with HIV. One IP in the DRC piloted the use of an adherence group for HIV-positive women, with one patient responsible each month for picking up ART refills. 'We are piloting the GAAC model (groups to support community accession), which is an adherence group in the community. One person in the group goes every month to pick up drugs for the group. This is working well in certain areas. The group needs to know each other well for it to work.' (IP, DRC) health facility-based activities Some community cadres, including linkage facilitators in Uganda and community counsellors in Cote d'Ivoire, were based in facilities full-time or divided their time between community and facility, to support staff with patient triage, educational talks, pretest counselling and referrals to facility staff for HIV-testing. In Malawi, the HSAs performed HIV-testing and counselling after 28 days of formal training and in Uganda lay counsellors conduct HIV testing. One advantage of this system was that, by performing regular educational, counselling and administrative duties, these paid community cadres focused on guiding patients through the continuum of care and eased the non-clinical workload of midwifes and nurses: 'They have lay counsellors at health centres permanently, who fill registers and records of pregnant women, and make appointments for treatment, and follow-up women who miss her appointment… The ASCs also have a referral form. In addition to this, the NGO has designed some materials like diaries to monitor the appointments of pregnant women. And when they are completed at field level, they summarize this at the health facility, and then at the health facility they can know how many have been referred.' (Health worker, Cote d'Ivoire) Uganda established Family Support Groups to encourage family participation in follow-up ART visits, to improve patient retention. These support groups were often facilitated by nurses, in conjunction with community cadres and mentor mothers. Encouraging women to bring their children ensured exposed children were also monitored, until their 18 months status was ascertained. Furthermore, these groups encouraged women to disclose their status to partners and included them as active participants in family health decisions. Group sessions included a health education talk, scheduled on the same day as ART drug pick-up, to encourage adherence. Support groups on the same day as monthly ART pick-up dates were also occurring in the DRC, Cote d'Ivoire and Malawi. 'So they support retention in that way, they support the health workers to coordinate family support groups, […] that have been institutionalised by Minister of Health. So in these family support groups, these HIV-positive mums come with their babies. 
We always insist that facilitators come with the baby and… in m2m we also do what we call a needs assessment, to ensure that, in addition to just getting the education and the testimonies and trying to make each other strong, we ensure that that's an opportunity to catch up with services that are due, like Polymerase Chain Reaction (PCR).' (IP, Uganda)

Patient tracking

In all countries, community cadres supported health workers with tracing women and children who missed appointments. Tools used for longitudinal follow-up varied across settings, generally including client appointment books, agendas to identify those missing appointments and longitudinal facility registers. A combination of phone calls and home visits was used to track patients and reconnect them to services.

'So when she's compiling her report, she has a paper-based report that shows loss of month one, loss of month two… and then missed appointments for that month. So as she sending the report to the central level, she's also thinking of what actions. I was expecting thirty mothers and got fifteen. So, she has to put down actions for the fifteen lost mothers. And then either use of community people or whatever, she has to make sure that she tracks them.' (National MOH, Uganda)

Limited access to accurate patient information was a major barrier to finding patients lost to follow-up. In 'Mon Bip Mon Sauveur' (My Beep My Saviour), a Cote d'Ivoire initiative, facility staff or ASCs gave women a missed call immediately after they provided phone numbers, to ensure the number was correct. Since a large proportion of the population did not have access to mobile phones or formal addresses, another strategy ('Cahier de Localisation' or Location Book) described the area in which patients lived according to landmarks and mapped them to allow easier tracking.

Challenges to the sustainability of community cadres

Concerns were expressed around community cadre remuneration, which is mainly dependent on external support, and around variability in payment schedules.

'She is saying that these peers, they are widows, and they spend a lot of time here when there is no-one to do any other activities in their homes. And then, on top of that, they have their children who are at school. So they are worried where to get funds for their children. So they are saying, even their funds don't come in time, because she is saying like after 3 months, so they find that they are really broke.' (Health worker, Uganda)

Despite some financial support for community cadres undertaking HIV-related activities, these incentives did not amount to a living wage, and retention of these cadres was described as a challenge.

'For the village health teams, I will say allowance. When we work with them often, we give them some refreshment and some transport. Otherwise, paying them a stipend, like which is regular, no.' (IP, Uganda)

'I am not married, so even though the money is little, it still helps me because I have children and it helps me to help them […]. I don't worry about anything, because I consider myself to have a job.' (Expert Client, Malawi)

The mothers2mothers model in Uganda and Malawi, mentor mothers in the DRC and the HSAs in Malawi were among the only cadres receiving a regular salary:

'Mothers to Mothers model, […], is not voluntarily at all, in all in the countries. So we don't believe in voluntarism.
We have a component, one of objectives is empowering women who are living with HIV, and we realise that when you get the stipend and give it to them at the end of the month, it makes more meaning to them. They can be able to invest it, they can be able to do things with it.' (IP, Uganda)

'I think that professionalisation of these mums makes them feel maybe valued, and so it really makes a difference.' (IP, Uganda)

Discussion

This paper highlights the range and characteristics of community cadres engaged to support PMTCT programmes across the four countries. The findings of the paper provide important insights into the unique roles of community cadres and innovative strategies employed by them to support PMTCT. These include family support groups, community adherence groups and active follow-up, which can have a significant influence on uptake of and retention in HIV care in these low-resourced contexts. The scale-up of lifelong treatment and investments in newly created cadres or capacity-building of existing cadres have facilitated their engagement in promoting and supporting lifelong HIV treatment at community and facility level. While this paper reflects a synthesis of a mid-term programmatic evaluation and therefore does not make linkages between activity data and PMTCT-related health outcomes, the synthesis of qualitative investigations from key informant interviews demonstrates the interplay of these community cadres with facility-based interventions in supporting PMTCT scale-up.

Investments in increasing community awareness around the benefits of HIV testing and treatment adherence, while addressing stigma and discrimination in the community through positive messaging and the use of peer supporters who openly disclose their status, have been shown in other studies to improve patient retention. 26 Once clients are linked to services, the HIV-specific community cadres, largely facility-based, support uptake of and retention in HIV services, through counselling, HIV-testing, home-based care, patient education, adherence counselling, patient SMS reminders and defaulter tracing. Integrating peers into the healthcare team has resulted in positive patient outcomes, where peers motivate behavioural change in people living with HIV, to improve patient retention. 26 Furthermore, investments in facility-based lay cadres have eased the clinical workload of health workers, resembling task-shifting in other programmes. 27

Operationalising community cadres for the HIV response has to take into consideration interactions between established generalist community cadres covering a range of healthcare activities and cadres created specifically for the HIV response. The mothers2mothers programme is a successful example of peer support in PMTCT services across several countries. It uses external funding to hire, remunerate, supervise and support women living with HIV to serve as peers in PMTCT programmes. A recent evaluation in Uganda found improved outcomes across a range of health-related indicators, including significantly higher rates of 12-month ART retention (91% vs 64%), uptake of EID at 6-8 weeks (72% vs 46%), ART initiation in infants (61% vs 28%) and partner disclosure (82% vs 70%) in M2M supported sites. 28

Ongoing challenges with stigma, geographical access to health services, high levels of poverty and low male partner involvement in maternal and reproductive services make women less likely to remain on treatment.
Furthermore, high HIV-prevalence rates, coupled with high fertility rates in these countries, place an increasing burden on health systems for follow-up and support of a growing number of women on ART. It is therefore critical that PMTCT programmes make concerted efforts towards scaling up effective strategies to optimise retention. With the rapidly increasing HIV care and treatment needs and the accelerating human resource crisis in many African countries, community-based cadres will remain a core feature of health systems. Effective inclusion of these cadres in the health team requires political and financial commitments, regulatory frameworks and mechanisms for supervision and mentoring. 17 29 30 In the absence of formal recognition, these cadres will continue to be inadequately resourced and undervalued, undercutting their potential health impacts. 31

Community cadres interviewed highlighted a range of challenges for patient follow-up, including lack of transport, phones or airtime and insufficient money for transport. Furthermore, remuneration of community cadres is currently not standardised within countries and has the potential to create tensions between cadres and to reduce motivation. These cadres are largely donor-supported, and high turnover rates, inadequate job security, and a lack of formal recognition or harmonisation threaten the sustainability of achievements. These well-established challenges affecting community cadre programmes have been reported for several decades. 32 The recent interest in and use of community cadres in response to large-scale ART roll-out appear to pay insufficient attention to these major determinants of success. 29

Conclusion

Community cadres can provide an integral link between communities and health facilities, using innovative strategies to support overstretched health workers in HIV client support and follow-up. However, challenges remain, including the need to invest in country-specific standardisation of roles, responsibilities and remuneration for the range of community cadres in order to promote sustainability and maximise the potential effectiveness of their activities. Further research is needed to understand which services, strategies and approaches are most effective in improving outcomes along the continuum of care, including the perspectives of women living with HIV and their families.
On Optimal Control of Discounted Cost Infinite-Horizon Markov Decision Processes Under Local State Information Structures This paper investigates a class of optimal control problems associated with Markov processes with local state information. The decision-maker has only local access to a subset of a state vector information as often encountered in decentralized control problems in multi-agent systems. Under this information structure, part of the state vector cannot be observed. We leverage ab initio principles and find a new form of Bellman equations to characterize the optimal policies of the control problem under local information structures. The dynamic programming solutions feature a mixture of dynamics associated unobservable state components and the local state-feedback policy based on the observable local information. We further characterize the optimal local-state feedback policy using linear programming methods. To reduce the computational complexity of the optimal policy, we propose an approximate algorithm based on virtual beliefs to find a sub-optimal policy. We show the performance bounds on the sub-optimal solution and corroborate the results with numerical case studies. INTRODUCTION The subject of Markov decision process (MDP) has been broadly explored in the area of robotics, wireless communication, and economics. In MDPs, the decision-maker is assumed to have complete state information. Notwithstanding, in many real world application, the direct observation of the state is either impossible or difficult to acquire (See [11], [9], [4]). Therefore, partially observable Markov decision process (POMDP) becomes a standard framework where the decision-maker does not have direct access to the state information but indirect observations that are correlated with the true state. A substantial literature has been established over the past few decades, including [10,1,6,13]. In standard POMDPs, the state information as a whole is taken as incompletely observable and the observations are statistically dependent on the state. In this work, we consider a class of problems where the state takes the form of a vector and its information can be partitioned into two components. One component contains a subset of states that are completely observable while the other component consists of a subset of states that are completely unobservable. This class of problem often arises from distributed multi-agent control systems, where one agent can only observe his own state while the state information of the others are not observable. We refer this class of problems as MDP under Local State Information or LSI-MDP, in short. One difference between this class of problems and the classical POMDPs is that decision-maker of LSI-MDP has no information of a subset of states. As a result, the optimal control policy of the decision-maker takes the form of localstate feedback, which depends solely on the observable components of the state vector. We use two examples to motivate the LSI-MDP model as follows. Team Optimization Problem and Multiagent System Problems In both team optimization problem and multiagent systems, multiple agents make decisions based on their observations to optimize their objective functions, and the decisions can impact the state of the system, which is the aggregation of states of all the agents (See [12], [3]). If their objective functions are fully aligned, the problem becomes a team problem. 
If their objective functions are partially aligned with each other, the problem becomes a nonzero-sum game problem. Our work studies this problem from the perspective of a single agent in which the agent knows his own state but has no access to the states of other agents. Optimal Planning in Robotics The robots plan the route or actions based on the observations or information it acquires (See [5,8]). Nevertheless, due to the physical limitation of the sensors, there is no guarantee that the robots are capable of obtaining the complete observation of the state (See [11]). Hence, the state can be divided into two parts: one part is observable and the other part is unobservable. As the unobservable part of the state is also influenced by the actions, this scenario coincides with our model. Specifically, our contributions can be summarized as follows: -We formulate an LSI-MDP problem and characterize the local-state feedback policy using the principle of optimality. We identify the connections with MDPs and POMDPs. -We show that the local-state feedback policy is characterized by a mixture of open-loop deterministic nonlinear system dynamics and a feedback solution arising from dynamic programming. -We develop a method termed as Virtual Belief Method to provide a suboptimal stationary local feedback policy. We can show that the worst-case performance degradation is bounded. This paper is organized as the following, In Section II, we present the problem formulation and identify the relations of our framework with MDPs and POMDPs. In Section III, we use the principle of optimality to establish the associated Bellman-like equation. In Section IV, we propose a method to find suboptimal solutions. In Section V, we study several special cases regarding the structures of the system dynamics, cost function, and transition probabilities. It is shown that under some of the special cases, the method proposed in Section IV can yield optimal solution. In Section VI, we conclude this work and give possible directions of the future work. PROBLEM FORMULATION In this section, we present the problem formulation of the infinite-horizon discounted cost optimal control problem under local-state information. Let A be the finite action space and X be the finite state space. The state of the dynamical system considered in this work, is assumed to be a joint process of two substates, one of which is called observable substate. The observable substate, which is denoted by x o , can be observed to the agent directly and utilized for the decision making. The other substate is called unobservable substate, which is denoted by x u , cannot be obtained as observations by the agent. Thus, the state space is the Cartesian product of two state subspaces as follows: where X o contains all the possible observable substates x o and X u contains all the possible unobservable substates x u . The stage cost function is assumed to be a nonnegative and bounded stationary function c(x, a) : The transition probability is given by a stationary function More specifically, it can be written as p(x ′ o , x ′ u |x o , x u , a). In this work, we study the following criteria: where x o,t , x u,t , and a t are the observable substate, unobservable substate, and action at time t, respectively. We aim to determine the policy which minimizes (1). Here, β is the discount factor and 0 ≤ β < 1. 
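For concreteness, and as a reconstruction that should be read as an assumption rather than a quotation of the paper, criterion (1) referenced above presumably takes the standard form of an expected infinite-horizon discounted cost,

V^{\pi}_{\alpha_u}(x_{o,0}) \;=\; \mathbb{E}^{\pi}\Big[\sum_{t=0}^{\infty} \beta^{t}\, c(x_{o,t}, x_{u,t}, a_t)\Big],

where the expectation is taken over the trajectories of both substates under policy π, with x_{u,0} distributed according to α_u; this is consistent with the notation V^π_{α_u}(x_o) that appears later in the paper.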
The distribution of the initial state x u,0 is given by α u . A policy π is a collection of decision rules, and each decision rule is a mapping from the space of histories of states and actions to the action space. The agent only has access to the observable substate x o at each time instant. Therefore, his decision can only be dependent on the observation history formed by x o . Formally, denote the state-action history of the original system at time t as h t . Let H t be the space of h t . By definition, π = {d t } t and d t : H t → A. Here, x u can be regarded as unobservable uncertainty in the dynamic system. Thus, to cope with this uncertainty, we have the expectation in (1) that averages out the randomness induced by x u . It is clear that our framework differs from the classical MDPs and POMDPs, and there exist close connections between the LSI-MDP framework and these two models. To see this, we let the dynamical system evolve according to the following rule. Assumption 1 There exists a deterministic function g(·, ·) such that at each time instant, X u,t = g(X o,t , X̃ u,t ), where X o,t and X̃ u,t are conditionally independent conditioned on a given state-action history. This assumption means that we can decompose the unobservable substate into two parts: the first part is correlated with the observable substate and the second part is independent of the observable substate. This assumption implies that, given a pair (x o , x u ), we can identify the value of x̃ u uniquely. Next, we use the following theorem to construct a controlled Markov process in which {x̃ u,t } t is conditionally independent of {x o,t } t . Theorem 1. Under Assumptions 1 and 2, for any given fixed policy, there exists a random process {X̃ u,t } t which satisfies the following: a) {X̃ u,t } t and {X o,t } t are conditionally independent, conditioned on the sequence of decisions; and b) one can represent the state of the system as (X o,t , X̃ u,t ) with the same amount of information; at time t + 1, there exists a deterministic function f̃ such that (X o,t+1 , X̃ u,t+1 ) = f̃(X o,t , X̃ u,t , a, Γ ). Proof. See Appendix A. ⊓ ⊔ In view of the above theorem, in some of the future sections, we focus on the class of MDPs with 'conditionally independent' transition probabilities as on the right hand side of equation (1). To complete the earlier argument, here we discuss how our model is related to POMDPs and MDPs. In POMDPs, the observation and the state are assumed to be statistically correlated. Our system formulation includes the case where x u,t and x o,t are conditionally independent and x o,t provides no information about x u,t at all (once actions are observed). Hence it is a generalization of POMDPs. When the relation between x o and x u can be described by a deterministic function ḡ, such that x u = ḡ(x o ), then our framework reduces to a classical MDP, as x o can represent the system state. DYNAMIC PROGRAMMING WITH BELIEFS As the substate x u cannot be observed, the agent can form a belief over the unobservable state. Denote the belief at time t by b(x u,t ), which evolves depending upon the observation and action history. With a slight abuse of notation, let b t be the belief vector at time t. Thus, with the state denoted by (x o , b), the system is Markovian. We define a transition function for the belief state accordingly; we would like to point out that the belief state acts as a deterministic nonlinear subsystem. Define c̃ as the stage cost averaged under the belief: since the x u 's are not observable, we can only form a belief over x u , and after taking the expectation using the belief of x u , we define the new objective function on the state (x o , b). As we have mentioned above, the new system, whose state is (x o , b), is Markovian. Denote the set of Markovian deterministic policies in this new system by Π MD .
That is, the decision at time t is only dependent on the current state (x o,t , b t ). For the system whose state is (x o , b), the state-action history at time t is given byh It is worth noting thath t provides the same information as h t , as b t evolves according to the rule (3). Lemma 1. If a given pair of policies at every time instant, they generate the same action sequence {a t } t . To keep it simple, we provide the proof for deterministic policies, and the proof goes through in a similar way for randomized policies. Equation (1) can be written as where the inner conditional expectation of the RHS term in (6) is ). An important property of this conditional expectation is that it captures the information structure of the agent. For example, at time t = 1, after observing x o,1 , the agent aims to choose an action to minimize an objective function, . The expectation is taken over is induced by the unobservable substate x u,1 . This expectation requires us to have the knowledge of the distribution of x u,1 . And we use the following as the distribution of x u,1 , , as x u,0 is also unknown and is averaged out using b 0 . The outer expectation is taken with respect to x o,1 conditioned on x o,0 using the marginal distribution xu,1 p(x o,1 , x u,1 |x o,0 , x u,0 , a 0 ) by averaging x u,0 out with b 0 . Progressing this way for any n < ∞ we have the following: By boundedness of c(x o , x u , a) (and because β < 1), the result follows by dominated convergence theorem by letting n → ∞. ⊓ ⊔ Proof. The first equality directly follows from Lemma 1 : given the same policy, V π αu (x o ) andV π αu (x o,0 , b 0 ) yield the same value. Thus, they are minimized simultaneously. To prove second equality, we analyze the new system whose state is (x o , b) beforehand. This new system can be viewed as a special case of MDPs. In MDPs, the current state and the next state are correlated through some transition kernel. Define, In our case, we have , where δ(·, ·) is a Dirac delta function, as the belief evolves like a deterministic system. The rest of the proof is by [10]. ⊓ ⊔ Using standard arguments of dynamic programming, we can write down the following Bellman equation: We note that the system can be regarded as a mixture of two subsystems: one is MDP and the other is a nonlinear deterministic system. And the states of these two subsystems are intertwined through the transition probability. Let α o (x o ) be the distribution of the initial observable substate x o . Moreover, let B be the reachable set of the belief, which contains all the possible belief. If B is infinite, then the number of constraints of linear programming formed by (7) are also infinite, even for finite state and action spaces. Therefore, solving this optimization problem is challenging using the classical linear programming method. VIRTUAL BELIEF METHOD In this section, we propose a method called virtual belief method, which aims to approximate the system with an MDP and provide a suboptimal solution. We show that this proposed method reduces the complexity and yet guarantees the performance by a bounded term. In the virtual belief method, the agent is assumed to believe that at each time instant, the system is at x u with probability b 0 (x u ), which is equal to the distribution of the initial substate x u,0 . And this belief stays unaltered throughout the whole process. To proceed, let us formally define the virtual system constructed by this method. 
The transition probability is in this system given bỹ The new cost function now becomes The objective function in the new system is given bỹ It is straightforward to check that the system considered in the virtual belief method is a classical MDP. Following the procedures in [10], the Bellman equation associated with the MDP is To solve this MDP, we revisit the method of linear programming. Likewise, we begin with the primal linear programming. The corresponding dual LP is given by the following. Both linear programmings above are solvable as they have finite constraints (with finite state and action spaces). To see how the disregard of the evolution of the belief process {b t } t can deteriorate the optimization performance, we first define the operator acting onũ(x o ) as follows: To make comparisons, we consider the bellman equation in a full-information setting. In this setting, both x o and x u are available for decision making, resulting in a classical MDP. The objective funtion in this setting is given by Let the value function in the full-information setting be v(x o , x u ), which satisfies the following fixed-point equation Define the operator acting on {v( Also, the fixed-point equation (10) can be transformed to the following linear programming problems. Primal LP ′′ (Full information case) And the corresponding dual LP is given by the following. Before we give the main theorem of this section, we present the following two propositions. [10]. (9) is a contraction mapping and it has the following properties: Theorem 3. (Comparison between Full information and Virtual Belief models) If the transition probability can be decomposed as is bounded by Proof. Letũ * (x o ) and v * (x o , x u ) be the optimal solution to the virtual belief model and the full information setting, respectively. Then, the corresponding value functions of these models are given by xoũ , respectively. Letã * be the optimal action (which depends upon x o ) that achieves the minimum in (8) and a * be the optimal action (which depends upon x o , x u ) that achieves the minimum in (11). With an abuse of notation, we define for any x u : . By definition and the given hypothesis, for any Further from (9) (for any a * (x o , x u )) and under hypothesis (12) p Thus we have: On the other hand, now usingã * we havẽ First consider the case whereũ Since L is a contraction mapping . By the same arguments, whenũ * (x o ) ≤ Lũ * (x o ), and we acquire the similar inequality with bound −C/(1−β). Hence the proof of this theorem is completed. ⊓ ⊔ Remark 1. The theorem above states that, even though the direct observation of x u cannot be obtained, we can still guarantee that the performance is deteriorated at most by a bounded term. We note that the bound on the difference is dependent on the structure of the cost function with respect to x u . More explicitly, the bounds depend on the sensitivity of c(x o , x u , a) with respect to the change in x u . Also, we can compare the value function in virtual belief method and the value function define in (5). The comparison results are stated in the following theorem. Theorem 4. (Comparison between POMDP and Virtual Belief Models) If the transition probability can be decomposed as is bounded by and Proof. The proof of Theorem 6 largely relies on the proof of Theorem 5. ⊓ ⊔ The results in Theorem 5 & 6 hold under the assumption that the transition probability can be decomposed. 
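Before turning to the case where the decomposition assumption fails, the virtual-belief construction can be made concrete with a short sketch. The code below is only an illustration, not the paper's implementation: the array layout (p[xo, xu, a, xo2, xu2], c[xo, xu, a]), the function names, and the use of value iteration instead of the linear programs above are assumptions made for the example; for a finite MDP with β < 1, value iteration and the LP formulation yield the same optimal value function.

import numpy as np

def virtual_belief_mdp(p, c, b0):
    # p[xo, xu, a, xo2, xu2]: joint transition probabilities of the original system.
    # c[xo, xu, a]: stage costs; b0[xu]: the fixed "virtual" belief over the unobservable substate.
    # Average the unobservable substate out with b0, as in the virtual belief method.
    p_tilde = np.einsum('iuajv,u->iaj', p, b0)   # p~(xo2 | xo, a)
    c_tilde = np.einsum('iua,u->ia', c, b0)      # c~(xo, a)
    return p_tilde, c_tilde

def value_iteration(p_tilde, c_tilde, beta, tol=1e-10):
    # Solve the approximate MDP of equation (8) by repeated application of its Bellman operator.
    n_xo, n_a = c_tilde.shape
    u = np.zeros(n_xo)
    while True:
        q = c_tilde + beta * np.einsum('iaj,j->ia', p_tilde, u)
        u_new = q.min(axis=1)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, q.argmin(axis=1)       # approximate value function and a greedy stationary policy
        u = u_new

The greedy policy returned by this sketch depends only on x o, which is exactly the local-state feedback structure sought in this setting.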
Generally, when this assumption does not hold, even if the cost function does not change significantly with respect to x u , the results may not hold. In such cases, the update of the belief is required for estimating the evolution of the observable part {x o,t } t . SPECIAL CASES In this section, we discuss several special cases concerning the structure of the system dynamics, cost function, and transition probabilities. In the first case, the unobservable substate can be fully determined from the observable substate. That is, we can infer the true value of the unobservable state from the observation. Then, the overall state can be fully characterized by x o . Therefore, x o is sufficient to represent the overall state of the system. As we mentioned earlier, in this case, the system reduces to an MDP. It is straightforward to see that (8) and (10) coincide and thus they yield the same value function. Similar arguments hold for the case where X u = ∅. X o = ∅ In this case, the system is a deterministic system in which the state can be fully characterized by the belief b, and the approximated optimization faced here can be written in terms of the belief dynamics. Here, with a slight abuse of notation, c(a) = {c(:, a)} and P u (a) is the transition matrix of x u for a given action a. This optimization is a classical nonlinear optimal control problem. The case where the stage cost function is no longer a function of the unobservable substate is trivial. Thus, C̃ = C. Here, C̃ and C are defined in the proof of Theorem 2. The two value functions are equal and the virtual belief method loses no performance. In the final case we consider, the random process {x u,t } t is not controllable and thus evolves independently of the actions. By assuming that the transition kernel p(x ′ u |x u ) is ergodic, we denote the stationary measure of x u by b s (x u ) and the corresponding belief vector by b s . In such a setting, as there exist stationary measures over x o and x u jointly, we can reduce (7) accordingly, which leads to tractable linear programs as the state space and the number of constraints are finite. As for the virtual belief method, if we replace the initial belief vector b 0 with b s , then it will yield the optimal solution. NUMERICAL EXAMPLE In this section, we use numerical experiments to demonstrate our results. Consider the following dynamical system. Let P o (a) and P u (a) be the transition matrices of the observable substate and unobservable substate, respectively. The probabilities of x u,0 = 0 and x o,0 = 0 are both assumed to be 1/2. The discount factor is set as β = 1/2. First consider the system of full information, i.e., the agent has access to both x o and x u . The optimal value found by solving the LP is approximately 2.0524. If x u cannot be observed, using the virtual belief method we have that the policy is d(x o ) = 1, ∀ x o , and we obtain the value 2.3714, which is bounded by C. Another way to find the stationary policy is by solving a constrained linear program. Here, y(x o , x u , a) is the measure function measuring the frequency of the system visiting the state-action pair (x o , x u , a). The constraint arises from the fact that x u cannot be observed and used in the policy in the system whose state is solely x o . The corresponding primal LP is given in a similar fashion. Note that there exists a one-to-one correspondence between the solutions to Primal LP* and Dual LP*. There is no dynamic programming equation associated with Primal LP*, yet it provides a numerical method to compute the stationary policy.
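Since the extracted text does not reproduce the constrained linear program itself, the following sketch shows only the standard occupancy-measure ("dual") linear program for a finite discounted MDP, which is the building block on which Primal LP* and Dual LP* stack additional constraints; the array shapes, the function name and the use of scipy are illustrative assumptions, not the paper's code.

import numpy as np
from scipy.optimize import linprog

def occupancy_lp(p, c, beta, alpha):
    # p[s, a, s2]: transition probabilities, c[s, a]: stage costs, alpha[s]: initial distribution.
    # Decision variables y(s, a) >= 0 represent the expected discounted frequency of visiting (s, a).
    n_s, n_a, _ = p.shape
    A_eq = np.zeros((n_s, n_s * n_a))
    for s in range(n_s):
        for a in range(n_a):
            col = s * n_a + a
            A_eq[s, col] += 1.0                 # sum_a y(s, a) term
            A_eq[:, col] -= beta * p[s, a, :]   # -beta * sum_{s,a} p(s2|s,a) y(s,a) term
    res = linprog(c.reshape(-1), A_eq=A_eq, b_eq=alpha, bounds=(0, None), method='highs')
    y = res.x.reshape(n_s, n_a)
    policy = y.argmax(axis=1)   # at an optimal basic solution, y is typically supported on one action per state
    return y, policy, res.fun   # occupancy measure, a stationary policy, and the optimal discounted cost

For the local-information setting of this section, further constraints encoding that x u cannot be used in the policy would be added on top of this program, as described in the text; their exact form is not reproduced here.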
The optimal deterministic stationary policy found using the constrained LPs is given by d MD (x o ) = 1, ∀ x o , which yields value 1.8706. Conclusions In this paper, we have studied LSI-MDP, which is a Markov decision process with incomplete state information. In this model, the state can be divided into two parts, one of which is observable and the other is unobservable. Systems of this kind are closely related to MDPs and POMDPs, and we have pointed out their relations. We have shown that directly solving the optimal control problem in such systems is challenging using the classical linear programming approach, as the number of decision variables (or constraints) is possibly infinite. We have proposed a new method to tackle this challenge, which provides a suboptimal solution. We have provided bounds on the difference between the optimal and suboptimal solutions, under certain separability conditions. Future Works When coping with an optimization problem with uncertainty, the agent can either average the randomness out or be robust to the uncertainty. As a consequence, if we consider the unobservable substate as the source of uncertainty, we can formulate a robust optimal control problem, where the objective is given by the following: The agent aims to find an optimal policy while being robust to all the possible trajectories of the unobservable substate, x u . It is worth noting that the transition probabilities and the cost function share the same uncertainty induced by x u , which makes the robust problem NP-hard as shown in [7,2]. Appendix A. Proof of Theorem 1 Proof. a) The assumptions imply that, given x o and x u , we can uniquely determine x̃ u as g(x o , ·) is injective with respect to x̃ u . Thus, we obtain the following decomposition of the transition probability, as x̃ u and x o are independent. Therefore, we can transform the joint random process (x o , x u ) into two conditionally independent random processes (conditioned on the sequence of decisions). b) We prove the second part of the theorem by induction. At the initial time, x o,0 is given and there exists a distribution of x u,0 , which is denoted by α u (x u ). We can express the unobservable state as a function g(·, ·) such that X u,0 = g(x o,0 , X̃ u,0 ), where X̃ u,0 is a random variable that is conditionally independent of X o,0 . At time t, we assume that there exists X̃ u,t which is independent of X o,t such that X u,t = g(X o,t , X̃ u,t ). Then, given the realizations of X o,t and X̃ u,t , we can determine the values of x o,t and x u,t . At time t + 1, there exists a function f̃ such that (X o,t+1 , X u,t+1 ) = f (X o,t , X u,t , a, Γ ) = f (X o,t , g(X o,t , X̃ u,t ), a, Γ ) = f̃(X o,t , X̃ u,t , a, Γ ). Hence the second part of the theorem follows. ⊓ ⊔
Farmer perspectives on carbon markets incentivizing agricultural soil carbon sequestration Climate change mitigation efforts to achieve net-zero emissions require not only decreasing current greenhouse gas emissions, but also the deployment of negative emissions technologies. Soil organic carbon sequestration in agricultural lands is one such negative emissions strategy, currently being incentivized predominantly through voluntary carbon offset markets. Through semi-structured interviews, we assess both conventional and organic farmer perspectives on soil carbon offset programs that have been created in the United States since 2017. The perspectives of farmers both participating and not participating in agricultural soil carbon markets were similar and consistent. Farmers in both groups expressed concerns about the convoluted, burdensome and unpredictable nature of receiving offset credits and emphasized that they were implementing practices for their own business interests and sustainability concerns, not the financial incentive of the generation of carbon credits. Based on our research, carbon offset credit payments for agricultural soil carbon sequestration are largely reaching farmers who were already implementing these beneficial practices or were already strongly interested in implementing these practices, and the payments for the offset credits are seen as a ‘gravy on top’, suggesting that these offset markets face strong challenges of ensuring true additionality essential to effective climate mitigation. INTRODUCTION Humanity continues to face the impacts of the climate crisis: extreme heat, increased rainfall, increased severity of tropical storms, increased prevalence of wildfire, sea level rise, and increased severity of droughts 1 .While it is fundamentally necessary to drastically reduce current and future anthropogenic greenhouse gas emissions to mitigate the climate crisis, pathways to avoid catastrophic climate change and hold warming below 2 °C also necessitate the implementation of negative emissions technologies (NET) that remove greenhouse gases from the atmosphere 2,3 .NETs include afforestation, reforestation, forest management, coastal and marine carbon sequestration, and soil carbon sequestration in agricultural lands 4 . Even though agricultural lands generally hold less soil organic carbon than wild lands 5 , agriculture has the capacity to be a major source of negative emissions because of the sheer size it covers: almost 50% of potentially vegetated land surface has been converted to crop and pasture land 6 .Soil carbon sequestration refers to the accumulation of soil organic carbon (SOC) in terrestrial soils.Soil organic carbon accumulates as a result of the balance between carbon inputs to the soillike biomassand pathways for losses of carbon from the soil, such as respiration, decomposition and erosion 7 .When carbon inputs to soils exceed losses from decomposition and erosion, SOC accumulates and soils are a net carbon sink.Soil carbon sequestration on agricultural lands requires increasing carbon inputs and/or decreasing carbon losses, which can be accomplished through a variety of activities, including conservation tillage, mulching, cover-cropping and integrated nutrient management 7 , as shown in Fig. 
1.Conservation tillage reduces soil disturbance and the soil organic matter decomposition rate 8 , while cover crops provide additional biomass inputs 9 .These soil management strategies seek to increase the concentration of SOC and can be accompanied by co-benefits in overall soil quality and disease resistance, decreased erosion, and increased productivity.Because of the substantial SOC storage potential of agricultural soils as a key component of natural and working lands-based negative emissions strategies, there has been increasing attention, by both governments and the private sector, toward incentivizing farmers to adopt beneficial cultivation practices to enhance carbon sequestration as part of climate change mitigation strategies. Globally, national-level and subnational climate policies have increasingly included a variety of incentive programs to encourage farmers to undertake on-farm activities that sequester soil carbon 10 .For such incentives to be effective in inducing farmers to implement on-farm activities, they need to be functional for farmers who are implementing them.Previous survey and interview-based research has examined the facilitators of farmers and other landowners adopting these activities.These studies identified local environmental co-benefits and soil health benefits, rather than climate mitigation services, as primary drivers for practice adoption [11][12][13][14] .Taken as a whole, there is consistent socialscientific evidence that farmer motivations for adoption of carbon sequestering activities on their land are driven by perceptions about the co-benefits of such activities, rather than by financial returns or the idea of sequestering carbon. Previous work has also identified numerous barriers to farmer adoption of soil carbon sequestering activities, despite financial incentives in place.These barriers include unfamiliarity with and lack of information about practices, and concerns about the costs of implementing new activities, despite financial incentives [13][14][15][16] . Much of the previous research has been conducted not with row-crop farmers, but with rangeland and other landowners who have participated in biodiversity conservation programs that have additional climate benefits. Since 2017, voluntary carbon offset developers have emerged that seek to incentivize U.S. 
farmers to change on-farm practices to sequester soil carbon through payments for the farmers' generation of voluntary carbon credits that can subsequently be used by emitters to offset existing sources of anthropogenic emissions, as actors seek to reduce the climate impacts of their operations. Two corporations, Nori and Indigo, are the primary offset project producers for agricultural soil carbon offsets for the voluntary market currently in operation in the United States. Indigo began its carbon market program in 2019 and Nori in 2017. Building on previous research, which largely presented work from other countries, this study seeks to understand the motivations and concerns of both participating and nonparticipating farmers about agricultural soil carbon markets that have developed in the United States since 2017. Understanding their perspectives and lived experiences can help further inform the literature on farmer perceptions of market-based mechanisms for soil carbon sequestration and can also help evaluate these emergent markets to assess their effectiveness for achieving climate mitigation. Based on the core assumption of additionality (that activities that enhance carbon sequestration among conventional farmers should be credited because the financial incentive of the offset credit induces greater participation and increases total carbon sequestration beyond what would have happened in the absence of the offset credit) and on the existing literature about farmer adoption of soil carbon sequestering practices, we hypothesize that the farmers participating in carbon markets had adopted the practices for both perceived environmental co-benefits and for financial reasons associated with credit payments, while organic farmers who are not participating in these markets would adopt these practices solely due to the perceived environmental co-benefits. Voluntary offset project developers face several challenges: one is to ensure that the amount of carbon estimated to be sequestered through the implementation of on-farm activities is real and accurate. Voluntary carbon credit markets for agricultural soils require exact measurements or estimation of the quantity of carbon sequestered in order for offset buyers to subtract these credits from their emissions. To facilitate verification and estimation of the amount of carbon sequestered, both Nori and Indigo use carbon offset 'protocols' that were developed by nonprofit organizations known as registries. The protocols in most common use are those developed by Verified Carbon Standard, by the Climate Action Reserve and by American Carbon Registry 17. They also restrict eligibility only to conventional row-crop agriculture in the Midwest and South, where measurements of soil carbon changes in response to beneficial on-farm activities have been demonstrated.
The other challenge faced by these markets is to ensure that the carbon sequestered is 'additional' to what would have otherwise occurred. This need for additionality arises because the carbon credits will be sold to 'offset' existing sources of emissions. Additionality refers to a sequestration project being caused by the credit incentive: that it would not have gone forward without the incentive's support 18. Both Nori and Indigo's certification protocols include eligibility requirements to ensure additionality. Indigo uses the Climate Action Reserve (CAR) Soil Enrichment Protocol, which 'strives to register only projects that yield surplus GHG reductions that are additional to what would have occurred in the absence of a carbon offset market' 19. The CAR protocol requires that farmers change their soil management practices relative to an established baseline, regardless of motivation, and it allows for the stacking of incentives, including NRCS subsidies, without disqualifying farmers from eligibility. Zelikova et al. (2021) found that CAR's protocol creates the false appearance of an additionality standard while actually treating all practices as additional 17. Nori has developed its own protocol, which is not directly available to the public. Nori's website stipulates that carbon credits are only issued for 'a discrete and verifiable activity or practice change that is reasonably expected (given the scientific evidence available at the time) to result in a new net CO2 removal and C retention' 20. Both Nori and Indigo's additionality standards will allow practice changes that were initiated for reasons other than the market's payment.

Fig. 1 Activities that enhance carbon sequestration in agricultural soils. This figure shows on-farm activities that can enhance soil carbon sequestration. In some circumstances, these types of activities can be implemented to generate carbon credits under voluntary market carbon offset protocols.

Study populations and interview solicitation

To assess farmer perspectives on the current state of markets for agricultural soil carbon offsets in the United States now that voluntary offset markets have become established, we examined the perspectives of two groups of farmers: conventional row-crop farmers who are eligible and actively participating or seeking to participate in voluntary carbon markets, and organically certified row-crop farmers who are engaging in soil carbon sequestration practices but are not participating, nor eligible to participate, in existing carbon markets. The reason for the selection of these two groups is to assess whether perspectives on these offset programs are shared between groups of farmers who are all engaged in carbon sequestering activities, but only some of whom are
The participating farmers interviewed for this study came from a pool of carbon credit sellers listed on Indigo and Nori's websites.Participants were solicited by email or by phone using the contact information listed either directly on carbon market websites or on other farm websites.Because Nori publishes each participating farm as its own carbon sequestration project, we were able to view on its website every farm that has ever sold credits through the marketplace.This allowed us to solicit interviews from nearly every participating Nori farmer (except 3 that we could not find contact information for).As Indigo does not release the same data broken down by individual farm project, we do not know how many farmers are participating in their carbon sequestration program, and consequently what percentage of them we reached. The non-participating farmers interviewed for this study came from a pool of certified organic vegetable producers in New York State.Because they are organic famers operating in a region for which soil carbon credits are ineligible, we could ensure that they were both adoptive of beneficial practices and not participating in carbon markets.Specifically, farmers were selected from a publicly available registry of farmers certified through the Northeast Organic Farming Association of New York (NOFA-NY), one of the largest organic certifiers in New York State.Participants were solicited by email at the email address listed either directly on the registry or on farm websites. Semi-structured interviews and coding After receiving ethics approval from the Hamilton College Institutional Review Board, author CTB conducted seventeen individual, semi-structured interviews between January 2021 and February 2022, for a total of seventeen semi-structured interviews.All interviews were conducted with informed consent for participants.Confidentiality was maintained for all research participants.Semi-structured interviews lasted between 30 min and 1.5 h in length and conducted on Zoom or over the phone.All interviews were recorded and then electronically transcribed using automated voice-to-text software, after which transcripts were manually corrected for minimal transcription errors.Semistructured interview questions centered on farming practices and farmer perspectives about carbon markets and other incentives to undertake activities that enhance soil carbon sequestration.The interview guide for semi-structured interviews is shown in Table 1 below. Interview transcripts were qualitatively analyzed using a modified grounded-theory approach 21 an inductive method to assign codes to recurring themes across interviews.In this method, we first established two goals: (1) developing a typology of categories and subcategories of that led to farmer adoption of on-farm carbon sequestering activities and (2) developing a typology of categories and subcategories of concerns about currently functioning offset markets.We then sought to define whether these categories and subcategories were unique to one of our two study populations or were shared across them. 
To develop the codebook of categories and subcategories, we first conducted iterative readings of the transcripts by both authors CTB and ALS to develop an initial list of categories and subcategories of responses. We then refined this grounded, inductive list of categories by comparing our categories and subcategories with identified factors that constituted barriers and facilitators from the literature, in particular from Buck and Palumbo-Compton's 2022 review paper 14. Using this information, we refined our coded categories to develop the two-level code book of categories shown in Fig. 2 below. Readings of transcripts began after twelve interviews had been completed, and we conducted five further interviews from the same previously identified subject pool before determining that no new codes were being developed and we had achieved saturation within the structure of our code book. After saturation and with our developed codebook, all coding was redone using a combination of NVivo software (NVivo Version 12 for Mac) and manual coding of anonymized transcripts. Coding was done first by author CTB, then transcripts were re-coded by ALS to ensure intercoder reliability. Re-coded transcripts had a percentage agreement of 91%.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Table 1. Semi-structured interview guide.

Response rates

Out of thirteen solicitations for interviews for farmers from both Nori and Indigo, we received nine positive responses, a response rate of 69%. These farmers were all farming in the United States: six in the Midwest and three in the Southeast. Of the farmers we interviewed, four were growing only field crops (including corn, soybeans, cotton and winter wheat), and four were growing field crops and raising livestock in mixed operations. Six of the individuals we interviewed had successfully received payments for carbon credits from Nori or Indigo, while three were farmers working with Indigo and Nori but had not directly received payments yet. Two farmers identified as female and seven as male. The potential population of eligible farmers to interview was limited by the low number of total farmers actively participating in these programs. While exact figures have not been published, 23 farmers were listed on the two companies' websites and in other available documentation; however, we were not able to find contact information for all of them. Thus, our thirteen solicitations represented 57% of the population of farmers whom we could identify. For the organic farmer pool, out of twenty requests for interviews sent, we received nine positive responses, a response rate of 45%. Of the farmers we interviewed, three were growing only vegetables and six were growing vegetables and raising livestock in mixed operations. These farmers were all farming in New York State: three in Western New York, one in Central New York, three in the North Country, and two in the Hudson Valley. None of them were participating in active carbon markets. Four farmers identified as female and five identified as male.
Motivations for practice adoption Across both groups of farmersthose participating in carbon markets and those notthe primary motivations to adopt beneficial farming practices were the same: (1) overall economic profitability and (2) intergenerational resilience due to maintaining healthy soils.As a whole, all farmers were motivated to adopt practices that sequestered carbon because of interests in longterm sustainability, crop health, and farm profitability. Farmers who were actively participating in carbon markets for soil carbon sequestration or who had attempted to utilize such carbon markets adopted practices for a number of reasons, but the ability to participate in a carbon market was not the primary reason.As one farmer said of the practice changes encouraged by carbon markets, which he had implemented years before being approached by carbon markets: We made all those changes to cover crops, no till, and all the things that they want you to do.We did that just for our own profitability and survival, you know?It's a better way to farm. Roughly a third of large-scale commodity crop farmers interviewed expressed that more conventional practices had brought them into situations of economic hardship that made alternative practices that enhance soil health more appealing, because soil health practices made farmers more resilient.While participating farmers were aware that these practices made them eligible to receive carbon credit payments, participating farmers' decision to adopt these practices were universally driven by onfarm concerns.Most (seven of nine) were multi-generation farmers who expressed explicit concerns with maintaining longterm profitability for many years into the future.As one participating farmer said: Our goal is that we're constantly looking at the future of our operation, and how we can make sure that we're maintaining the soils and the land that we have, so that they're in very high fertility rates, as well as we're building the organic matter on the farm.So as we start to continue to see, and we have in our area, our weather patterns change, that we can combat by, with hopefully, you know, really, really fertile, healthy soil.Organic farmers also discussed soil and crop health, economic productivity, and adapting to extreme weather events for future generations.While some of these practices were required for organic certification, most organic farmers had gone beyond the bare minimum requirements for cover cropping and crop rotation because of some other motivator.In general, they were concerned with maintaining the health of microbiotic communities and the whole farm ecosystem, and the corresponding impact on plant health and productivity.One farmer noted the benefits of these practices: Obviously sequestration is great for [climate mitigation],[but it's] even better for the soil. 
A few farmers were also concerned about improving their resiliency in the face of increasingly severe flooding and drought, and thus were interested in building their soil's erosion resistance and water-holding capacity. The majority of organic farmers believed that these practices were chosen out of enlightened self-interest, meaning that they were better for plant health and overall system sustainability, as well as farm income (self-interest). Farmers participating in carbon markets shared this perspective. As one carbon markets farmer described: Well, you're looking at the wrong way, you should put your cover crop out, because it was the right thing for you to do for your operation…Stack the carbon program on top of that.
Fig. 2 Code book. This figure depicts the two-level code book used in analyzing qualitative semi-structured interview transcripts.
Concerns with Soil Carbon Offsets
Non-participating and participating farmers shared very similar views on carbon market payments for beneficial activities. Collectively they viewed the payments as helpful, especially for those who were already doing the on-farm activities. In terms of their perspectives on receiving payments for 'what they were going to do anyway', farmers participating in carbon markets expressed positive sentiment about being paid for their practices. The view was summed up by one farmer: Why not take the money while it's there?
Another reiterated that carbon payments were a clear benefit to their operation: If I've gotten an installment today, that's something I can make a difference to my family and my business today.
Non-participating farmers also acknowledged the benefits of providing farmers with another source of income. For a group of farmers who were already farming in a way that sequestered carbon, it seemed like a clear benefit to receive payment without having to put much money or time into changing their methods or doing anything new. They also seemed optimistic about the long-term success of a program that compensated farmers, over one that relied simply on education or shifting ideologies.
Additionally, a few organic farmers reported that a financial incentive might be a good way to convince conventional farmers, who they saw as less ideologically driven than they are, to adopt beneficial practices. Farmers from both groups agreed that farmers tend to be underpaid and overworked, and were appreciative of any incentive that gave them additional income, especially one that supported soil management practices that provide other co-benefits to their operations. One organic farmer spoke positively of carbon markets: [A]nything that has any sort of monetary value attached to it, I think is gonna be what works in the long run.
All farmers who were participating saw carbon market programs as an avenue to get compensated for practices they were already looking to adopt, and which they saw as benefits to their farm operations. One farmer reflected: The carbon credit thing is just sort of gravy on top of what we already do, and what we think is the right thing to do.
One farmer reflected on the carbon credit payment: I think it's sort of an added bonus.
And another farmer encapsulated this same view: Like, we don't (want to) farm differently to sequester carbon. We are farming differently because it's the better thing to do. So whatever system whatever carbon credit market or system lets us, y'know, pays us to do what we were already going to do anyway.
Of course, when farmers are paid for doing what they are already doing, low compensation may still be viewed generally positively.One farmer said: I didn't change anything to do it, you know what I mean?… You're doing this, you're doing a really good job.Here's almost a half a million dollars.Is that okay?I go, Yeah, that's fine. Four participating farmers were also open about the fact that they were choosing carbon markets based on which quantification systems would compensate them for practices they were already using. Concerns about soil carbon markets Farmers from both subject pools expressed numerous concerns about and frustration with existing carbon markets.These concerns included (a) compensation being too low, (b) substantial burdens of paperwork, (c) a lack of predictability and the 'black box' of credit calculations, (d) frustration that the markets were skewed to benefit larger-scale agriculture, and (e) concerns about both greenwashing and additionality.We present evidence for each of these concerns below. The most consistent complaint that participating farmers raised about carbon markets was that the payment was simply too low.While the universal view was that money for 'doing nothing new' was great to receive, farmers viewed the payments as too low to incentivize new activities that a farmer was otherwise not inclined to adopt.There was complete agreement from every participating farmer we interviewed that the carbon credit payments available currently are too low to drive substantial practice changes on their own farms that they were not planning to adopt for other reasons, and are too low to drive practice changes for non-participating farmers.They expect that carbon credit programs will not be appealing enough to farmers who are not already interested in practice changes until the value of credits increases substantially.One farmer said of the roughly $15 per acre payment that most farmers receive: No, that's not going to change anybody, nobody's gonna quit doing the way they've always done it, try over something new for that. Non-participating farmers felt that carbon markets were built for large, conventional farmers who were not using many beneficial practices already, and they felt that carbon markets would result in little profit for them.One interviewee said: From what I have seen in carbon markets that have been established, um, cap and trade has not paid enough to fund those so that a smaller scale would get enough to even pay for the time they have to spend applying. 
Despite the ease of getting paid for doing 'nothing new', both groups of farmers saw the paperwork associated with tracking on-farm activities as a frustrating component of carbon markets. Participating farmers expressed concern that carbon market payments were made more difficult to access because of the hardship of gathering all of the required records and inputting their data into the carbon market system. They complained of the complexity of digitizing older paper-based records, getting records to mesh over the years as fields changed, and converting their data into a precisely specified format. One farmer said of this process: The data, the data part was, is ridiculous. I mean, everything that you do on every acre for the last 10 years, is what you have to do…you get back to 2010 - one, I wasn't even here, and records were mostly like little scribbly notes on notebook paper. So…maybe you weren't even farming the same fields then or you called them something different, or it used to be six fields, and now you made it one.
Farmers who were participating in carbon credit markets generally felt that their own records were better than most, and that this was part of what allowed them to succeed, but they were concerned that the older or smaller-scale farmers would not be prepared for the level of detail that was required to participate in these programs. They expressed worry that this would limit the adoption of carbon credit programs. One farmer suggested that other farmers may be unprepared for the paperwork burden: It is a ton of paperwork and a ton of like, proving what you had to do. Yeah, it's pretty extensive…I think the farmers that are looking into these programs are prepared. I think the farmers that maybe are a few steps behind the curve are not prepared.
The main things that participating farmers said helped them overcome the record-keeping and data entry hurdle were having spare time to keep records or having someone on staff whose job was predominantly to keep records; using a digital record-keeping system that was compatible with their carbon market's software; and having someone at their carbon market helping them through the data entry process to clarify what was required. The farmers who had the easiest time submitting records to carbon markets were those who were using digital record-keeping systems administered by Indigo or by Truterra, which has partnered with Nori.
Farmers participating in carbon markets also expressed frustration that the eventual payouts from these markets were difficult to predict. The uncertainty of the payment made the work of changing practices and inputting data seem far less worth it. One farmer suggested that other farmers would be unwilling to take the step to participate in the market (and the associated paperwork burden) just for an uncertain payment: So nobody's gonna do that - they're not going to do all that work on the chance they might not get paid.
Farmers felt that it was risky to put effort into substantially changing practices solely to participate in a carbon credit program because of the chance that they would not receive any money. The cost of changing practices, they felt, should be compensated no matter what, or else farmers would be hesitant to take a 'leap of faith' on a carbon market. I mean, it's gotta be something that's a for sure thing if they make the changes. That's the other thing that's always frustrated me. You might make all the changes and then not get paid? That's crazy.
Farmers' perception that carbon markets were unreliable was heightened by the fact that payouts were calculated differently from program to program, generally totally out of the view of the farmer.This uncertainty put farmers in the situation of inputting data into a 'black box' and hoping that it would result in a payout, a risk that they recognize others might not be willing to take, hindering wider adoption of carbon market programs. One farmer summarized other farmers' worry about how opaque carbon markets are: It makes people a little nervous, because it's not a tangible thing.And then, like the model that they're using is very complicated.It's kind of a black box.So you don't know what's actually happening.So there's some distrust going on. In order to avoid the distrust and confusion bred by the unpredictability of payouts, the majority of farmers participating in carbon markets voiced support for a more standardized system, with clearer and more consistent rules for setting the value of a credit.One farmer said simply: I would like to see one standardized set of rules.So it wasn't such a wild wild west. Beyond the barriers of predictability and paperwork, all farmers expressed some concerns that existing voluntary carbon markets contain biases and are poorly structured to try to incentivize nonoptimal activities or for the benefit of other actors.They viewed market operators skeptically and thus these concerns about bias can represent a form of barrier to participation.Some of these concerns focused on concerns that markets would incentivize activities that required heavy chemical inputs, which a farmer would have to purchase from a chemical company.Chemical companies tend to emphasize the role of no-till in sequestering carbon above other practices like cover cropping and nutrient management, because no-till often requires heavy pesticide and herbicide inputs to replace the disruption of weed root systems and pest life cycles that normally occurs through tillage 22 .Participating farmers expressed concern that these companies could be involved in setting national government standards for carbon markets, which would then skew all carbon markets toward a specific style of farming and ignore other beneficial practices for carbon sequestration. 
One farmer spoke negatively about other programs that were closely associated with chemical companies: If a large chemical dealer wants to sell you a chemical that if you use, they promise you'll sequester more carbon, and then they're going to pay you for that carbon, but you can only get that payment if you buy their chemical…like it's pretty obvious what's happening there.And you know, it's just another way for farmers to be taken advantage of by input dealer you're basically sequestering carbon with the intent that this company is going to buy your credit to offset the cost of producing the chemical that they sold you to sequester the carbon that's dumb.I'm not interested in that at all.Organic farmers were especially concerned about carbon markets privileging a specific style of large-scale monocrop farming.They worry that many currently active carbon markets are rooted in models based on pilot phase testing on large-scale commodity crop farms.An industrial-scale model would put smallscale, diversified organic farms at the disadvantage of entering an incentive structure that was not built to adequately capture or account for the way their farms operate and the practices that they're using.Organic farmers were even more concerned than participating farmers that carbon markets would narrowly support only a subset of valuable farming practices, but both groups of farmers frequently raised concerns that carbon markets would inadequately support a full range of beneficial soil management practices.Farmers from both groups expressed that it was a priority that carbon markets be protected from unfair industry bias.I can see already that…there's already the major ag players that are kind of trying to write the rules for the programs and design the standards…around…no till and that approach to farming is where it will get tilted towards… because their incentive is to sell…seed and chemicals and fertilizers. 
Non-participating farmers also felt it was unfair that carbon markets were built on an industrial monocrop model that would not be easily applied to their small, diversified farms.One participant said of carbon markets: [S]o maybe eventually they do approach, you know, a 30 acre diversified vegetable operation, but if their data and their models are based on a corn and soy operation in Iowa, is that going to make sense?Farmers were left with the perception that some carbon markets were set up just for the purpose of enriching the companies that run them.This led to a distrust of carbon markets in general, and participating farmers worry that this distrust will hinder wider adoption of these programs by other farmers.Farmers were also concerned about the involvement of large chemical companies when they look forward toward more potential government regulation of carbon markets.One farmer expressed this anxiety: A few farmers in both groups raised concerns that companies using carbon credits from the voluntary market would use them for marketing and mislead consumers about their practices.They worried that carbon credits would be used in greenwashing campaigns by industries seeking to paint themselves as more environmentally sustainable than they are, producing more revenue for these companies without producing substantive change to address greenhouse gas emissions.One farmer participating in carbon markets offered this detailed critique of the problem of greenwashing that carbon credits facilitate: [T]he general public, I mean, they, they see these companies buying carbon credits, and they think it's great…but I think they also don't fully understand the whole scope of everything.Because…well take like a Delta Airlines…you buy a flight with Delta, they say we can fly, you know, carbon neutral for an additional $40, you know, and, I mean, I've seen it, there's people getting out their phones, and they're paying those 40 bucks.And they're like, Wow, this is great, you know, I flew carbon neutral.Okay, but you really didn't.Because, you know, Delta Airlines still burned the same amount of fuel, they still put the same amount of emissions out into the air…these big companies…they're using it to their advantage for marketing. Organic farmers were concerned more broadly with the way that companies, particularly food producers, greenwash themselves as 'sustainable,' 'climate friendly,' or 'carbon neutral' and avoid accountability for their harmful practices.One interviewee worried that companies would simply use carbon credits and other market-based climate change approaches as a cover to dodge deeper changes to their practices.We're not going to shop our way out of industrial agriculture being bad for the climate, because these companies are uniquely gifted at greenwashing themselves. Both groups of farmers raised concerns about the extent to which market-based solutions can truly and transparently drive climate change mitigation efforts.At the same time that they want recognition of their own climate beneficial practices, farmers worry that the flip-side of that recognition in carbon markets is the obscuring of continuing harmful practices in the industries that purchase their carbon credits.Looking at the fuller picture of carbon markets, farmers seemed concerned that their own positive practice changes might be misappropriated, making those practices a less effective climate solution. 
Overall, concerns about additionality are central to an evaluation of any functioning voluntary offset program.Yet, for the participants in the program, concerns about additionality requirements centered not on concerns that sequestered carbon was non-additional and that such credits would be used to allow sources of emissions to continue.Rather, for both groups of farmers, concerns about additionality centered on the perverse incentive these requirements created to reward those who more recently adopted beneficial practices or, in some cases, to incentivize farmers to switch back to conventional tillage practices in order to enhance their eligibility for payments in the future. Farmers participating in carbon markets generally had negative views about existing additionality requirements.They saw them as an unfair burden which prevented farmers using beneficial practices from consistently being compensated and which penalized early adopters of these practices.They generally felt that they should be paid for their beneficial practices, regardless of when they started or whether the practice was additional.One farmer said of their beneficial practices: I'm still doing it.So if you're gonna pay people for doing that, what difference does it make when they started? A few participating farmers were only able to enroll a portion of their acres in carbon market programs, because fields they had been farming for a long time did not meet additionality requirements.Others were entirely excluded from carbon markets whose additionality protocols would allow them to look back only a few years.Participating farmers voiced concerns that prioritizing recent practice conversion created a perverse incentive against maintaining beneficial practices over the long term, which would be most beneficial from a climate perspective.One participating farmer said of the additionality requirement: It kind of is a disincentive.To me, I could see people hopping out of some of these good practices for a year or two just so they can get re-enrolled in them in the future. Another farmer recalled a conversation with a carbon market representative in which they realized farmers could see the most money by pausing their beneficial practices and then starting over again: But one of our initial conversations we were kind of joking with him was like, okay, so you're telling me, we'd be better off to go back to tilling for two years?And then go back to how we were doing things?He's like well be better if you didn't.Well I know, but like this is the way this works?Like, that's kind of how it's set up. Farmers participating in carbon markets felt that additionality requirements "punish the early adopters" and prevent them from seeing as much money as farmers who adopt practices later.Being paid less than farmers who had implemented the same practices that they were using later, most farmers felt that additionality requirements set up an unfair penalty for farmers who had been innovative and forward-thinking enough to adopt beneficial practices years before. 
Organic farmers were similarly concerned about additionality, especially because they were almost always early adopters who would be ineligible for payments because they had been using beneficial practices for so long. Some expressed the perspective that a carbon market would function more as an incentive to conversion to beneficial practices for conventional farmers, rather than providing a continuation incentive for farmers already using beneficial carbon sequestration practices. Organic farmers felt that farmers should be supported for using beneficial practices regardless of when they began, in order to incentivize long-term use of good soil health practices and climate change mitigation.
DISCUSSION
Our results show that both groups of farmers largely shared the same motivations to undertake beneficial activities: the benefits to soil and crop health and the long-term economic sustainability garnered from those benefits. This finding matches the results of previous studies demonstrating that farmers' adoption of environmentally beneficial practices is primarily motivated by the perceived conservation and environmental benefits of doing so 14,23. Based on generalized assumptions about additionality requirements, however, we expected that farmers participating in carbon markets would also perceive the financial benefits of payments as at least one of their primary motivations for implementing carbon-sequestering practices, but this was very clearly not the case.
Our typology of concerns with carbon markets that emerged from coded interview transcripts also demonstrates that both participating and non-participating farmers largely share the same concerns. They all are concerned that markets are structured unfairly to benefit large agricultural corporations and/or offset developers rather than farmers, and that markets are largely a wild west of unpredictable benefits and burdensome paperwork. These results confirm findings from Kragt et al. (2017) that administrative burdens can pose real barriers to participation in carbon sequestration programs 13.
Our results also suggest that providing more information about, or experience with, programs as a way to address identified barriers around familiarity and information may not be enough to alleviate farmer concerns that are fundamentally rooted in issues of trust 14. Even farmers participating in the carbon markets do not trust the markets nor view them as beneficial, but rather view them as a means to earn an extra buck for what they are already doing. This confirms the idea identified by Feliciano et al. (2014) that those who are most likely to participate in markets are those who are already doing on-farm activities that make them eligible, because the uncertainties and costs associated with participation are seen as lower barriers 15.
Finally, our results present evidence for how the issue of additionality is perceived by farmers participating in voluntary offset markets, with strong implications for broader concerns about the environmental integrity of voluntary soil carbon offset credits.
Farmers' perspectives, both that the carbon they were sequestering was not additional to what would have occurred without the carbon credits and that additionality restrictions are unnecessary and convoluted, imply that there is a fundamental disjuncture between how carbon markets define themselves, primarily as a climate change mitigation tool built on rigorous, permanent and additional offsets, and the work that farmers want offset markets to be doing, that is, providing one more source of monetary support for soil management practices. Thus, this disjuncture may lead markets to function improperly to address climate change as they are forced by farmer (and buyer) demand to adopt less stringent additionality requirements, or to fail to catch on at all. Farmer desires for practice support also point to the unmet need for educational and monetary support for practice changes, which is now being imperfectly fulfilled by carbon markets.
Based on our research, carbon market payments for soil carbon sequestration through existing markets such as Nori and Indigo are largely reaching farmers who were already implementing these beneficial practices or were already strongly interested in implementing them, and the payments for the offset credits are seen as 'gravy on top', payments earned for what they were already doing. Given that farmers also perceived the payments as generally too low to incentivize new adoption of the practices, this has the effect of largely generating credits from farmers who were already and separately motivated to adopt beneficial practices.
Under the protocols used to approve offset credits, the additionality standard in practice does not assess or require an assessment of farmer motivations for implementing practices. Rather, the additionality requirement is simply that activities are newly additional relative to a pre-established baseline period; this is why some farmers joked that they should stop cover cropping for a few years so they could re-start and earn more credits. Both Nori's and Indigo's additionality protocols fundamentally require only that practice changes be reasonably expected to sequester additional carbon relative to an established baseline of original practices. While this means that farmers' primary motivations for practice adoption do not in and of themselves violate the protocol requirements, the fact that farmers perceive that they are being paid for what they were already doing, or would otherwise do even if they were not being paid, and that farmers themselves perceive that their credits are not additional, demonstrates further disconnection about what additionality means in practice. Arcusa et al. (2022) recently summed up these issues, highlighting that there are numerous working definitions of additionality that are not shared uniformly across different stakeholders participating in carbon markets, and we see evidence for that in our results 24.
These concerns about additionality are not unique. Recent analyses of soil carbon farming under the Australian Carbon Farming Initiative have found significant concerns for ensuring additionality under a voluntary carbon offset protocol in that system as well 25. As the voluntary market for agricultural soil carbon offsets expands, it is increasingly important to ensure that market programs for agricultural soil carbon sequestration are effective at sequestering additional carbon and appealing enough to incentivize farmers to adopt soil carbon sequestration practices.
Table 1. Semi-structured interview guide.
1. Tell me a little bit about your farm and what you do.
2. What do you think about your farming as it relates to climate change? In what ways does climate change affect you?
3. What practices are you using to sequester carbon in your soils? What factors drove your decision to implement these practices?
4. Hypothetically, if you were not already implementing these practices, what regulations, incentives or programs do you think would allow you or encourage you to implement soil carbon sequestration practices?
5a. (if not participating) How would you feel about a carbon farming add-on to the organic certification label? What about a program that paid you for the practices themselves?
5c1. (if participating) How were you farming before you got involved with selling carbon credits and how do you reflect on these practices now?
5c2. (if participating) What has been your experience with the program thus far?
6. Overall, what do you think would make carbon credit programs work better for farmers?
7. How do you think agriculture needs to adapt to address climate change?
8. Where do you see carbon markets for soil carbon sequestration in five years from now? Where would you like to see markets in five years?
2023-09-25T13:44:27.273Z
2023-09-25T00:00:00.000
{ "year": 2023, "sha1": "2d955ad1e0ea4ae434441fb21cc40a583ca8fad9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s44168-023-00055-4.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "456f74d3ec049a02fa94788e108c5a2eda18a4f4", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
253509421
pes2o/s2orc
v3-fos-license
Fostemsavir and ethinyl estradiol drug interaction: Clinical recommendations for co‐administration Fostemsavir, a prodrug of temsavir, is indicated for heavily treatment‐experienced adults with multidrug‐resistant HIV‐1 infection, antiretroviral (ARV) intolerance, or safety considerations. Understanding drug–drug interactions (DDIs) is important in individuals taking fostemsavir with hormonal contraceptives or menopausal or gender‐affirming hormonal therapies. INTRODUCTION There is a continued need for antiretroviral (ARV) agents with new mechanisms of action to address the needs of heavily treatment-experienced individuals with HIV. Fostemsavir (Rukobia™; ViiV Healthcare, Research Triangle Park, NC, USA) is an oral prodrug of temsavir, a first-in-class attachment inhibitor that specifically binds to HIV-1 gp120, blocking viral attachment and entry into host CD4 T cells [1,2]. It is available as an extendedrelease, film-coated 600 mg tablet and is indicated, in combination with other ARV agents, for treatment of HIV-1 infection in adults with multidrug-resistant HIV-1 for whom it is otherwise not possible to construct a suppressive ARV regimen due to resistance, ARV intolerance or safety considerations [3]. Fostemsavir has a favourable safety and tolerability profile and, importantly, no clinically meaningful effects were seen in individuals with mild-to-severe renal or hepatic impairment on temsavir pharmacokinetics (PK), although renal clearance decreased with increasing renal impairment from moderate to severe and exposure tended to increase with severity of hepatic impairment [3,4]. Fostemsavir is likely to be co-administered with combined oral contraceptives (COCs), menopausal hormonal therapy (MHT) and gender-affirming hormonal therapy (GAHT) in some individuals living with HIV. Therefore, for people taking fostemsavir and these hormonal therapies, it is important to understand potential drug-drug interactions (DDIs) that may be of clinical significance. Temsavir, the active moiety of fostemsavir, is a substrate of cytochrome P450 (CYP), and glucuronidation by UDP-glucuronosyltransferases (UGTs) is not a major pathway for temsavir metabolism. Temsavir and two metabolites (hydrolysis metabolite, BMS-646915, and N-dealkylated metabolite, BMS-930644) were evaluated in vitro as inhibitors of various CYPs, UGTs and transporters. Temsavir is an inhibitor of organic anion transporter (OAT) P1B1, OATP1B3 and breast cancer resistance protein (BCRP). The two metabolites are inhibitors of BCRP, and the N-dealkylated metabolite BMS-930644 inhibited CYP2C8 and CYP3A4. No induction by temsavir of CYP1A2, CYP2B6 or CYP3A4 enzymes was observed in human hepatocytes [5]. Ethinyl estradiol (EE) and norethindrone (norethisterone) acetate (NETA), synthetic analogues of oestrogen and progesterone, respectively, are components of commonly prescribed COCs [6]. EE is extensively metabolised and is subject to pre-systemic (gut) and hepatic first-pass metabolism. It is a substrate for multiple drugmetabolising enzymes (e.g. CYP and UGT) among others [7,8]. NETA is completely and rapidly deacetylated to NET after oral administration and is also metabolised by CYPs, UGTs and other enzymes [9,10]. Based on in vitro physiological PK modelling and clinical drug interaction results, significant interactions are therefore not expected when fostemsavir is co-administered with substrates of CYPs or UGTs, such as COCs. 
The lack of CYP or UGT induction by temsavir suggests no impact on EE and NET PK and pharmacodynamics (PD). From the in vitro studies described earlier, the temsavir metabolite, BMS-930644, has the potential to inhibit CYP3A4 and CYP2C8; however, circulating concentrations of BMS-930644 are low compared with temsavir concentrations and the clinical relevance is uncertain. Given the importance of COC use and the potential for one metabolite to inhibit CYP3A4 and CYP2C8, it was important to evaluate the impact of co-administering fostemsavir and COCs on EE and NET PK, PD, safety and tolerability in a DDI study.
METHODS
Healthy, non-smoking women of childbearing potential, age 18-40 years, who were not pregnant or breastfeeding were eligible to participate in Study 206279. Women were required to have evidence of intact ovarian function by medical history and history of regular menstrual cycles, and to be on a stable EE and progestin-containing COC regimen without breakthrough bleeding or spotting for at least two consecutive months prior to day -1 (day 1 of study = day 21 of COC regimen). Individuals with HIV, hepatitis B and hepatitis C virus infections were excluded.
Study design
Study 206279 was an open-label, single-sequence, four-cycle, four-treatment study (Figure 1). The primary objective was to determine the impact of fostemsavir on EE and NET. Participants took their currently prescribed COC during cycles 1 and 2 and were switched over to Loestrin® 1.5/30 (NETA 1.5 mg and EE 30 μg) for cycles 3 and 4 for the primary DDI evaluation. Fostemsavir 600 mg tablets were administered twice daily on days 12-21 in cycle 4. Screening evaluations to determine eligibility occurred 28 days prior to dosing on day 1 of cycle 1. In cycles 1 and 2, all participants continued taking their existing COC tablet daily as prescribed. Day 1 of the study was the 21st day of their existing COC cycle. This was followed by 7 days of no pill-taking in accordance with standard pill-taking rules for a 21-day regimen. A serum progesterone (marker of ovulation suppression) sample was also collected on day 1 of cycle 1 and day 21 of cycle 2. A progesterone concentration of 300 ng/dL was used as a marker to identify potential ovulation in study participants. In cycle 3, all participants were switched from their current COC to Loestrin® (NETA 1.5 mg/EE 30 μg) for 21 days. In cycle 4, all participants continued taking NETA 1.5 mg/EE 30 μg once daily alone for 11 days and were then co-administered fostemsavir 600 mg twice daily for 10 days (days 12-21). Serial 24-hour plasma samples were collected on days 10 (cycle 3) and 21 (cycles 3 and 4) to determine EE and NET concentrations. Serum progesterone samples were also collected on days 11, 15, and 21 of cycles 3 and 4. Plasma samples for EE and NET were determined by validated liquid chromatography/tandem mass spectrometry methods using appropriate calibration curves and quality control samples that met pre-established acceptance criteria.
FIGURE 1 Study design: all cycles are 28 days and followed sequentially, except for cycle 1, which is only the last 8 days of the cycle (no dosing). On days 22-28 of each cycle, no combined oral contraceptive (COC) was taken. BID, twice daily; QD, once daily.
Pharmacokinetic analysis
Noncompartmental PK analyses were conducted with Phoenix WinNonlin version 6.3 (PharSight Corp, St Louis, MI, USA) to estimate maximum observed concentration (Cmax), time of Cmax (tmax), and area under the plasma concentration-time curve for each dosing interval from time 0 to 24 h post-dose (AUCtau) for EE and NET.
Statistical methods for pharmacokinetic analysis
Absence of an effect of fostemsavir on the PK of EE and NET would be concluded if the 90% confidence intervals (CIs) for the ratios of geometric means (GMRs), with and without fostemsavir, were contained within 0.80 to 1.25 for both Cmax and AUCtau of EE and NET. A sample size of 20 participants was projected to provide at least 88.1% and at least 99.0% power to conclude that fostemsavir has no effect on the Cmax and AUCtau of EE, respectively. Similarly, a sample size of 20 participants was projected to provide at least 94.0% and at least 99.0% power to conclude that fostemsavir has no effect on the Cmax and AUCtau of NET, respectively. To allow for possible dropouts and to ensure that at least 20 participants completed both cycle 3 (NETA/EE alone) and cycle 4 (NETA/EE + fostemsavir), 26 participants were dosed with study drug. The comparisons between test (NETA/EE + fostemsavir; day 21 of cycle 4) and reference (NETA/EE alone; day 21 of cycle 3) treatments were analysed using a general linear mixed-effects model with SAS v.9.3 (SAS Institute, Cary, NC, USA). Relevant ARV-contraceptive interaction studies and guideline recommendations for ARV co-administration with COCs, MHT and feminizing GAHT were reviewed. Study results were used to predict the impact of fostemsavir co-administration with other contraceptive methods and hormone therapies. Recommendations were based on minimising the risk of thromboembolic events associated with higher oestrogen exposure, ensuring adequate hormonal concentrations to maintain a targeted effect, and relevant treatment guidelines.
Ethics
The protocol and amendment were approved by institutional review boards (IntegReview, Austin, TX, and Aspire IRB, Santee, CA, USA) of the two study sites used (ICON Early Phase Services, LLC, San Antonio, TX, and QPS-MRS, LLC, South Miami, FL, USA), and all participants provided written informed consent prior to the initiation of any protocol-required procedures. This study was conducted in accordance with the International Council for Harmonization Good Clinical Practice Guidelines and all applicable regulatory requirements and guiding principles of the Declaration of Helsinki. The trial was registered at ClinicalTrials.gov (NCT02480881).
TABLE 2 Summary statistics of ethinyl estradiol (EE) and norethindrone (NET) pharmacokinetic parameters comparing day 10 of cycle 3 with day 21 of cycle 3 (NET/EE alone), and day 21 of cycle 4 (NET/EE + FTR) with day 21 of cycle 3 (NET/EE alone).
TABLE 3 Antiretroviral (ARV)-contraceptive hormone drug-drug interaction (DDI) data and guideline recommendations for fostemsavir (FTR) co-administration.
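To make the noncompartmental endpoints and the no-effect criterion described above concrete, the sketch below illustrates a trapezoidal-rule estimate of Cmax, tmax and AUCtau and a simplified paired log-scale calculation of a geometric mean ratio with its 90% CI. The study itself used Phoenix WinNonlin and a SAS general linear mixed-effects model for these steps; the concentration-time values and per-participant AUC values below are invented purely for illustration.

```python
# Illustrative noncompartmental PK quantities and a simplified GMR/90% CI
# calculation. Not the study's analysis: the real work used Phoenix WinNonlin
# and a SAS mixed-effects model, and all numbers below are made up.
import numpy as np
from scipy import stats

def nca(times_h, conc):
    """Cmax, tmax and AUC over the sampling interval (linear trapezoidal rule)."""
    conc = np.asarray(conc, dtype=float)
    i = int(np.argmax(conc))
    return conc[i], times_h[i], np.trapz(conc, times_h)

def gmr_90ci(test, ref):
    """Geometric mean ratio (test/reference) with 90% CI from paired log data."""
    d = np.log(np.asarray(test, float)) - np.log(np.asarray(ref, float))
    se = d.std(ddof=1) / np.sqrt(d.size)
    t90 = stats.t.ppf(0.95, d.size - 1)
    return tuple(np.exp([d.mean(), d.mean() - t90 * se, d.mean() + t90 * se]))

# Hypothetical single-subject EE profile (pg/mL) over one 24-h dosing interval.
cmax, tmax, auc_tau = nca([0, 1, 2, 4, 8, 12, 24], [0, 80, 120, 95, 60, 40, 15])

# Hypothetical per-participant AUCtau values with (test) and without (reference) fostemsavir.
auc_ref = [900, 1100, 1000, 950, 1050]
auc_test = [1250, 1500, 1450, 1300, 1480]
gmr, lo, hi = gmr_90ci(auc_test, auc_ref)
print(f"GMR {gmr:.2f} (90% CI {lo:.2f}-{hi:.2f})")
# A no-effect conclusion would require the 90% CI to lie entirely within 0.80-1.25.
```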
A total of 20 of the 26 participants received all doses of study drug per protocol. The six participants who were terminated early discontinued the study for the following reasons: serious adverse event (n = 1), lost to follow-up (n = 1), participant no longer met study criteria (n = 1), participant withdrew consent (n = 2), and other (investigator decision, n = 1).
TABLE 4 Antiretroviral (ARV)-menopausal and gender-affirming hormone therapy drug-drug interaction (DDI) data and guideline recommendations for fostemsavir (FTR) co-administration.
Figure 2a shows that the mean concentrations of EE during cycle 3 (NETA/EE alone) were similar on both days 10 and 21. Adjusted GMRs for EE Cmax and AUCtau showed that EE exposure was comparable on days 21 and 10 of cycle 3 (NETA/EE alone), with no difference in tmax. Mean concentrations of EE were higher after co-administration of fostemsavir with NETA/EE (day 21/cycle 4) compared with administration of NETA/EE alone (day 21/cycle 3). At steady state, co-administration of fostemsavir 600 mg with NETA/EE resulted in a 39% and 40% increase in EE exposure as demonstrated by Cmax and AUCtau, respectively (Table 2). The median tmax of EE was comparable (1.5 vs. 2.0 h) when fostemsavir was co-administered with NETA/EE. Mean concentrations of NET after NETA/EE alone during cycle 3 were similar on days 10 and 21 and also after co-administration of NETA/EE with fostemsavir 600 mg (day 21/cycle 4; Figure 2b). Co-administration of fostemsavir with NETA/EE at steady state did not have a meaningful impact on NET PK, with both 90% CIs for Cmax and AUCtau falling within the 0.80-1.25 limits (Table 2). However, the adjusted geometric mean (GM) for Cmax and AUCtau of NET increased modestly (15-16%) on day 21 compared with day 10 of cycle 3, and the upper bounds for NET for both Cmax and AUCtau were 1.28 and 1.27, respectively, just above 1.25. There was no apparent change in the median tmax of NET (1.00 hour) when NETA/EE was co-administered with fostemsavir at steady state. The rate of participants with a single progesterone concentration > 300 ng/dL was similar for all cycles: specifically, COC alone (range 3.8-8.3%) and COC administered with fostemsavir (5%). Consistent with no reduction in EE and NET exposures, the addition of fostemsavir did not affect the ability of the COC to maintain low progesterone concentrations, suggesting no impact on ovulation suppression. Based on these data, DDIs between ARVs and hormonal contraception, MHT and GAHT, as well as recommendations for fostemsavir co-administration, are shown in Tables 3 and 4.
DISCUSSION
Fostemsavir had no effect on NET but increased EE Cmax and AUC by approximately 40%; therefore, co-administration with hormone therapy is not expected to impact hormone treatment efficacy. However, the increase in EE concentration could potentially increase the risk of oestrogen-related toxicities, in particular, venous thromboembolism. CYP3A4 inhibition of EE by the temsavir metabolite BMS-930644 probably contributed to this interaction, and temsavir BCRP inhibition may also have contributed, as there is conflicting in vitro evidence as to whether EE is a substrate of BCRP [7,12].
Clinically, co-administration of ledipasvir, a BCRP and P-glycoprotein (P-gp) inhibitor, without any known effects on drug-metabolising enzymes, increased plasma EE C max by 40% and AUC tau by 20% [13]. This clinical interaction suggests that EE may be a substrate for drug transporters. It is therefore recommended that the total EE dose should not exceed 30 μg when given with fostemsavir. Fostemsavir did not impact the progestin NET; therefore, no impact on efficacy of progestin-only hormonal contraceptives is anticipated. Although non-oral contraceptives have not been studied with fostemsavir, using oral contraceptive DDI data, one may predict that interactions with other oestrogencontaining combined hormonal contraceptives such as the contraceptive patch and vaginal ring are likely to be similar. Oestrogen-containing MHT and GAHT may be coadministered with fostemsavir, with monitoring of oestrogen concentrations and appropriate dose adjustment as needed. MHT should utilise individualised risk-benefit assessment using the lowest effective dose of systemic oestrogen consistent with treatment goals, with or without vaginal oestrogen. When co-administering fostemsavir and MHT, the oestrogen dose should start low and be titrated according to clinical effect. It is recommended that feminizing GAHT regimens target serum oestradiol concentrations in the physiological cis-gender female range of 100-200 pg/mL. Routine monitoring of oestradiol concentrations will allow dose adjustments to achieve goal concentrations. The PK of hormonal contraception with fostemsavir and boosted protease inhibitors has not been studied; therefore, alternative/additional contraceptive methods, guided by protease inhibitor-prescribing recommendations, should be considered given the variable increase in temsavir concentrations via CYP3A4 inhibition from protease inhibitor regimens. Because co-administration of boosted protease inhibitors with transdermal contraceptives has been shown to affect EE concentrations, a fostemsavir-EE drug interaction may be expected regardless of delivery mechanism, and caution is advised. CONCLUSIONS Fostemsavir co-administration with hormone therapy is not expected to impact efficacy. When fostemsavir is coadministered with oral oestrogen-based therapies, the EE dose should be ≤ 30 μg/day to minimise risk. Oestrogencontaining MHT and GAHT can be co-administered with fostemsavir, with monitoring of oestrogen concentrations and dose adjustments as needed. AUTHOR CONTRIBUTIONS MM, PA, AC, and KM contributed to the conception and design of the study. AC and KM contributed to the acquisition of data. EP, ASM, MM, FM, PA, AC and KM contributed to the analysis of data. NN, EP, ASM, RS, MM, PA, AC and KM contributed to the interpretation of data. NN, EP, ASM, RS, MM, AC and KM contributed to the drafting of the manuscript. All authors contributed to critically revising the manuscript for important intellectual content and approved the manuscript for publication.
2022-11-15T06:17:32.956Z
2022-11-13T00:00:00.000
{ "year": 2022, "sha1": "ef52fbf6f1f674f7f846b0c4650faf32fccb8ba3", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1111/hiv.13442", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "e80755fece6086b8459bd2dcdd3c31a919729063", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
13881579
pes2o/s2orc
v3-fos-license
Incarcerated fracture fragments of Longevity polyethylene liners after total hip arthroplasty
Highly cross-linked polyethylene liners are widely used in total hip arthroplasty because they experience lower wear rates than conventional polyethylene liners. However, the cross-linking process does decrease the resistance of polyethylene to fatigue failure and fracture. This report describes 2 cases of highly cross-linked polyethylene liner fracture occurring in association with hip dislocation and unsuccessful closed reduction consequent to blockage by an incarcerated liner fragment. These cases highlight the known polyethylene fracture risk factors of thin and unsupported polyethylene and large bearing sizes. They also reinforce the importance of a careful evaluation of postreduction radiographs for the presence of a concentric reduction and provide a possible explanation for postoperative hip instability, multiple dislocations, and incomplete seating of the femoral head on attempts at closed reduction.
Introduction
Since the late 1990s, highly cross-linked polyethylene (HXLPE) liners have been used in total hip arthroplasty (THA) to reduce wear rates and the incidence of associated osteolysis. Both in vitro and in vivo studies have confirmed that the cross-linking process successfully changes polyethylene (PE) characteristics to reduce wear [1][2][3][4]. Despite the advantage of HXLPE in terms of wear, it has been noted that cross-linking reduces fracture resistance. Several case reports have documented fractures about the rim of HXLPE liners [5][6][7][8][9][10][11][12]. Although most researchers agree that risk factors exposing implants to these types of fractures include acetabular component malposition [11], excessive femoral neck impingement on the PE liner [6], and the use of thin PE liners with large femoral heads [12,13], other researchers have concluded that cracks may occur in the absence of predisposing factors [8]. This case report describes 2 cases of HXLPE liner fractures occurring in association with hip dislocation with subsequent incomplete, nonconcentric reductions. These unsuccessful reductions were attributed to blockage by incarcerated liner rim fracture fragments.
Case 1
An 80-year-old man underwent primary right THA with placement of a 54-mm trabecular metal modular acetabular shell (Zimmer, Warsaw, IN), a 3.5-mm lateralized Longevity cross-linked PE liner (Zimmer, Warsaw, IN) with 36-mm inside diameter, a Fitmore press fit stem (Zimmer, Warsaw, IN), B11 12/14 taper with +3.5-mm neck length, and a 36-mm cobalt-chrome femoral head (Fig. 1). The patient's recovery was uncomplicated and he returned to a high level of function until 5 months postoperatively, when he dislocated his right hip after a slip and fall from a standing position (Fig. 2a). There had been no prior sensation of instability until this traumatic event. The hip was reduced in the emergency department, but the patient continued to have episodes of instability and experienced multiple subsequent dislocations. Postreduction radiographs consistently demonstrated a nonconcentric, incomplete reduction, which was not recognized until the patient was evaluated by members of the orthopaedic team (Fig. 2b). On exploration of the hip joint, a fragment of the posterior lip of the HXLPE liner was found incarcerated between the femoral head and the acetabular cup liner, which had been blocking successful reduction (Fig. 3). Cultures showed no sign of infection. The liner and femoral head were revised with a PE cobalt-chrome 32-mm bearing and 10°-augmented, oblique-face constrained liner (Zimmer, Warsaw, IN). The acetabular shell and femoral stem remained in good position and were not exchanged (Fig. 4). After revision, the patient experienced an uneventful recovery and has had excellent function to date. He is currently two and a half years status after revision surgery, and his last follow-up examination was 1 year postoperatively. The patient presented in this case was informed that details of his operative and postoperative course would be submitted for publication, and he provided verbal consent.
Case 2
A 26-year-old woman with rheumatoid arthritis underwent primary right THA with placement of a 50-mm trabecular metal modular acetabular shell (Zimmer, Warsaw, IN), a 3.5-mm lateralized Longevity cross-linked PE liner (Zimmer, Warsaw, IN) with 36-mm inside diameter, a size 12 Trabecular Metal press fit stem (Zimmer, Warsaw, IN) with a +3.5-mm neck length, and a 36-mm cobalt-chrome femoral head (Fig. 5). Acetabular version at the time of surgery was felt to be within acceptable limits, but subsequent radiographs demonstrated a vertical inclination of 55°. The patient had a fall from standing onto the right hip 8 months after surgery. There was no fracture or dislocation at that time, but she had feelings of instability in the months following the fall. At 11 months, the patient suffered a spontaneous dislocation with minimal internal rotation, flexion, and adduction while sitting on a stool (Fig. 6a). Her hip was reduced in the emergency department, but radiograph imaging showed that the femoral head was not completely seated within the acetabular liner (Fig. 6b). On exploration, the patient was found to have a fragment of an unsupported section of the PE liner incarcerated between the liner and the femoral head. The fragment included that segment of the liner from the posterosuperior rim between 2 successive indentations in the PE rim that facilitate the achievement of proper liner rotation before final seating. The fracture plane occurred at the contact point between the liner and the edge of the rim of the metal shell (Fig. 7). The entire component was revised to a more desirable abduction angle of 45° and 20° of anteversion and secured with 3 dome screws (Fig. 8). The PE liner and femoral head selections were 36 mm in diameter. After revision, the patient experienced an uneventful recovery and has had excellent function to date. She is currently 2 years status after revision surgery, and her last follow-up examination was 17 months postoperatively. The patient presented in this case was informed that details of her operative and postoperative course would be submitted for publication, and she provided verbal consent.
Discussion
Osteolysis secondary to debris from PE wear has traditionally been a common cause of loosening and failure in THA. Both in vitro and in vivo studies have shown that HXLPE liners have significantly decreased wear rates compared to conventional PE liners [1][2][3][4].
The enhanced resistance to wear exhibited by HXLPE liners is attributed to their resistance to plastic deformation. However, improvement in wear rate is associated with a reduction in tensile strength, ductility, and toughness of HXLPE liners, ultimately lessening the liner's fracture resistance [14]. An analysis of voluntarily reported Longevity (Zimmer, Warsaw, IN) liner fractures to the U.S. Food and Drug Administration through April 2013 shows that 74 events had been reported since the liner was approved in 1999 [13]. Of these cases, most occurred in patients with small acetabular shells (<54 mm) and large-diameter femoral heads (>36 mm) [13]. This combination of a small shell and a large head requires the use of a thin liner, and liners with <7-mm thickness at the weight-bearing surface and <4.8-mm thickness at the rim are more likely to fracture than thicker liners. There are several case reports of fractured HXLPE liners in the literature. The earliest report included 4 Longevity (Zimmer, Warsaw, IN) liners from 2 patients that demonstrated cracking and rim failure at the grooves that articulate with the shell locking mechanism [11]. The authors concluded that the use of a thin acetabular liner, relative vertical cup alignment, large femoral head implantation, and the inherent properties of the HXLPE predispose the liner to failure [11]. Other case reports documenting fractured HXLPE liners provide similar conclusions in regard to predisposing factors to such events [9,12]. Hara et al. [10] noted that fractures likely initiate at the rim-dome junction and propagate superficially to the articular surface. Duffy et al. [6] reported a case of rim failure due to excessive femoral neck-liner impingement secondary to the use of a Marathon (DePuy Synthes, Warsaw, IN) extended lip liner. In this case, impingement was not demonstrated on revision if a neutral liner was used [6]. Similarly, Furmanski et al. [7] studied 4 different HXLPE extended lip liners and found that all liners had fractures despite being well positioned. Thus, the impingement stress experienced by an extended lip liner during normal mechanical loading events appears to be sufficient to fracture the HXLPE liner. This is especially evident at the thin areas occurring at the shell-liner locking grooves. Another report of 9 retrieved HXLPE liners found that 6 had shallow initiated cracks, even in 3 liners that were retrieved after only 1 month in vivo [8]. Therefore, these small cracks may have occurred during manufacturing, surgical implantation, or initial postoperative loading. Despite the acknowledgment of liner fractures in other case reports, our report is the first documentation of fractures with associated incarcerated fragments blocking successful concentric reductions of the femoral head. In the only other case report that describes liner fractures after a specific dislocation event, the dislocation was anterior [5]. The fragment was from the anterosuperior rim, and the hip was successfully reduced but unstable [5]. In our case 2, a significant traumatic event was followed by the development of a sensation of instability culminating in a dislocation which was spontaneous. It is likely that in this case, a fracture occurred with the initial event that ultimately leads to further fatigue and crack propagation with enlargement of the cleavage plane between the segmental indentations of the liner. 
We believe that it is likely that, in this instance, dislocation occurred with the fragment remaining tethered on one end and that, with relocation, the fragment was pulled into the acetabular component by the returning femoral head where complete dissociation occurred. The liners used in these cases accommodated the largest femoral head diameter for the shell selected. Consequently, the liners, each with a dome thickness of 6.8 mm and rim thickness of 3.2 mm, were the thinnest liners available for the implant. The liner thickness was insufficient to prevent PE failure in our patients which further supports the suggestion made by previous researchers that hips with large femoral heads and thin liners are predisposed to such events. Acetabular liner fracture is a known complication that may occur at higher rates with the use of large femoral heads and thin HXLPE liners with unsupported rims especially in the circumstance of vertical acetabular positioning. Since the time of revision in case 2, the frequency and widespread nature of rim fracture associated with this liner design has been generally recognized. The authors have since abandoned the use of components whose design includes an unsupported PE liner. These cases highlight the importance of a careful review of postreduction films for complete and concentric seating of the femoral head within the acetabular cup. It is important to consider the possibility of an incarcerated liner fracture fragment in poorly reduced prosthetic hips and hips that experience multiple dislocations and subjective instability, especially in those with large femoral head components and thin acetabular liners. Summary Fracture of highly cross-linked PE acetabular liners is a known but rare complication associated with THA. In this case report, the authors present an exceptionally uncommon clinical entity in which the liner fracture fragment becomes incarcerated between the femoral head and the remaining liner after attempts at closed reduction. This report supports the conclusions of the authors of previous reports that suggest that the use of large femoral heads and thin acetabular liners increases the risk of liner fracture. However, this is the first report to document the incarceration of a liner fracture fragment and an incomplete seating of the femoral head after attempted closed reduction resulting in hip instability and recurrent dislocation. Importantly, clinicians must closely review postreduction radiographs to assess for concentric seating of the femoral head within the acetabular component, as nonconcentric reductions may indicate liner fragment incarceration and the need for revision arthroplasty.
2018-04-03T03:58:38.985Z
2016-01-13T00:00:00.000
{ "year": 2016, "sha1": "2ebb9dc2ecf144d445e26cd228059f61f1f4f2fa", "oa_license": "CCBYNCND", "oa_url": "http://www.arthroplastytoday.org/article/S2352344115001132/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2ebb9dc2ecf144d445e26cd228059f61f1f4f2fa", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
261545368
pes2o/s2orc
v3-fos-license
Segmentation for Athlete's Ankle Injury Image Using Residual Double Attention U-Net Model HIGHLIGHTS We propose a segmentation using the Residual Double Attention U-Net model. Adjusting the gradient propagation of the segmentation framework using the residual structure. Solved the problem of low Correspondence Ratio and F1 values in traditional algorithms. Using multiple data sets to test the application effect of proposed algorithm. Abstract The image of an athlete's ankle joint injury can help to check whether the athlete's ankle joint is damaged, and plays a very important role in clinical diagnosis. To address the problem of poor segmentation effect of traditional athletes' ankle injury image segmentation algorithm, an ankle injury image segmentation algorithm based on residual double attention U-Net model is proposed. First, the region of interest is extracted from the original ankle injury image. After translation, rotation and turnover, the image data is expanded. Second, the residual structure is used to adjust the gradient propagation and residual feedback of the segmentation framework, extract the attribute information in the region of interest, and combine the two to retain more image features. Finally, combined with the double attention module to improve the weight ratio of image features, the athlete ankle injury image segmentation is realized in the image segmentation framework based on residual double attention U-Net model. The results demonstrate that the maximum values of DSC, ASSD, PM, and CR for the proposed algorithm are 0.93, 0.1, 0.96, and 0.95, respectively, and the F1 score is 95.7%, indicating that the segmentation effect of this algorithm is closer to the theoretical segmentation effect, and higher precision in segmentation, and the segmented image has a high degree of similarity to the original image, resulting in excellent segmentation performance. 
INTRODUCTION Professional athletes should receive corresponding training during their adolescence and youth.However, long-term overload training causes the ankle joint to be affected by different forces in different directions and of different sizes, accompanied by certain wear and impact, resulting in serious ankle injury.It has been reported that ankle joint is one of the most vulnerable parts of professional athletes.The bones of athletes in adolescence and youth have not yet fully developed.Thus, long-term training will have a certain impact on the development and structure of ankle joints.Medical images can help to check whether the ankle joint structure is damaged, and thus they play a very important role in clinical diagnosis.In the field of image segmentation, there are several technical challenges that need to be addressed.Firstly, there is the issue of accurately extracting relevant features for segmentation, especially when dealing with complex images.Secondly, there is the problem of handling image noise and variability.Thirdly, there is a need to improve the speed and efficiency of segmentation algorithms.Fourthly, there is a need to develop more robust and accurate evaluation metrics to assess the segmentation performance of various algorithms.In recent years, with the rapid development of medical image technology, more and more experts and scholars have turned their research focus to the field of medical image segmentation whose main purpose of medical image segmentation is to select the region of interest in the medical image with the help of automatic or semiautomatic segmentation algorithm and segment the image completely [1].The segmented images can help doctors quickly diagnose the loss of ankle joints of athletes and formulate corresponding treatment plans, which is of great significance for guiding athletes, coaches and team doctors to carry out follow-up rehabilitation treatment and recovery training [2]. 
Aiming at the important research topic of image segmentation of athletes' ankle joint injury, Huang W and coauthors [3] realized image segmentation by training neural networks.The algorithm determined the prior knowledge of the topological structure of the segmented object, which was then introduced into the training network, and the differentiable property was analyzed through the topological data; the required number of topologies was determined according to the Betti number of divided objects; finally, the segmentation included the features of topological structure to realize image segmentation.After testing, it was found that the value of the algorithm is relatively low, indicating that there is a problem of missegmentation in the image segmentation process of ankle joint injury of athletes using the algorithm, and the actual application effect is not good.Chen Z and coauthors [4] used task-driven generation of confrontation networks to realize image segmentation of retinal blood vessels.In the generation model, the U network was used to segment retinal blood vessels.In the discrimination model, multi-scale discriminators with different receptive fields were used to help generate more segmentation details; The task-driven model based on perceptual loss completed feature matching and finally realized image segmentation.However, it was found in practical application that the overlap between the theoretical segmentation effect and the actual segmentation effect of the algorithm is relatively low, indicating that the segmentation effect of the algorithm is poor and difficult to be applied in practice.The difference between the theoretical segmentation result and the actual segmentation result of the algorithm is relatively large.Wang B and coauthors [5] realized automatic segmentation of complex lung tumour images with an effective deep network.The encoderdecoder model was used to connect the global attention units in the image, and the region of interest was extracted by multi-scale semantic information.Finally, the segmentation ability of the algorithm was improved by Tversky loss and boundary loss.However, after testing, it was found that the performance of the algorithm is relatively low, indicating that the degree of under-segmentation in ankle joint injury images by the algorithm is relatively high, and the segmentation effect is poor, resulting in low applicability.Karani N and coauthors [6] proposed an image segmentation method based on an adaptive neural network.The segmented convolutional neural networks (CNN) was designed as a series of two subnetworks: a relatively shallow image normalized CNN, and then a deep CNN, which was used to segment the normalized image.In this process, an independently trained de-noising automatic encoder was used to de-noising the data, and the adaptive neural network was used to realize image segmentation.However, the difference between the theoretical segmentation result of the algorithm and the actual segmentation result of the algorithm is large.Hua L and coauthors [7] proposed a medical image segmentation algorithm based on local edge regions.The local ACM of gradient information was constructed based on the probability score of fuzzy k-nearest neighbour classifier, and the gradient information was then detected.The local feature function was introduced, and the edge information based on the probability score was used to construct the energy of the local region, so that the evolution curve stopped at the precise boundary of the region of 
interest, and the image segmentation was realized by combining the boundary localization results. However, this method has the problem of missing segmentation, and the actual application effect is poor.

It was found that traditional segmentation algorithms for athlete ankle injury images have lower values of Dice Similarity Coefficient (DSC), Average Symmetric Surface Distance (ASSD), Percent Match (PM), Correspondence Ratio (CR), and F1, resulting in lower segmentation accuracy and poorer segmentation performance. To address the issues in traditional segmentation algorithms, a new algorithm is proposed which utilizes a residual double attention U-Net model for athlete ankle injury image segmentation. The main contributions of this paper are as follows:
(1) The traditional algorithm does not translate, rotate and flip the original image, resulting in the inability to accurately segment small cavities and capillaries. To address this problem, this paper expands the image data through translation, rotation and flipping as part of image preprocessing, laying a solid foundation for subsequent accurate segmentation.
(2) After pre-processing the images, the residual double attention U-Net model is utilized to highlight the features of the regions of interest through the use of loop residual units and double attention modules. This, coupled with the feature-based segmentation method, is used to segment athlete ankle injury images, solving the issue of lower CR and F1 values present in traditional algorithms and thereby improving the overall segmentation quality.
(3) The application effectiveness of the proposed algorithm was tested using multiple data sets, with DSC, ASSD, PM, CR, and F1 values used as evaluation criteria. Through experimentation, it was demonstrated that the proposed algorithm has an outstanding effect on the segmentation of athlete ankle injury images, effectively preserving important information with ideal edge processing. The segmented images have no jagged edges, and the overall segmentation performance is good.

METHODOLOGY

Building on the original U-Net model, a residual structure that can extract attribute information in the region of interest is established, which effectively addresses issues such as gradient disappearance and explosion and improves the convergence speed and accuracy of the model. To address the one-way transmission of attention information in the traditional U-Net model, the residual double attention U-Net model designs a bidirectional attention module, which can better capture local and global features, increase the weight of image features, and cope with complex backgrounds and uneven distribution of objects, thereby improving the segmentation accuracy of athlete ankle injury images. The athlete ankle injury image segmentation algorithm framework using the residual double attention U-Net model is shown in Figure 1.
According to Figure 1, the region of interest is extracted from the input athlete ankle injury image, and the image data is expanded by performing translations, rotations, and flips. The residual structure is utilized to adjust the gradient propagation and residual feedback of the segmentation framework, extract attribute information from the region of interest, and combine the two to preserve more image features. The double attention module is applied to enhance the weight proportion of the region-of-interest image and retain more original features, ultimately achieving athlete ankle injury image segmentation based on the combined image features.

Data preprocessing

After the Computed Tomography (CT) image samples of ankle joint injury were acquired, the region of interest needs to be extracted first [8] and then preprocessed. Because a CT image may contain regions unrelated to diagnosis, this part of the image was removed and only the region of interest was kept, as shown in Figure 2. The algorithm calculates the mean and standard deviation of the sample image; after subtracting the mean and dividing by the standard deviation, the grey-scale regularization of the region of interest is realized, which allows more accurate segmentation [9].

Data expansion

Based on data preprocessing, the data was expanded by translation, rotation and flipping. The number of ankle injury images available for use is limited, and it is difficult to train a complete U-Net model with them. Therefore, this paper uses a series of operations such as flipping, translation, rotation and image deformation to expand the existing data. Flipping, translation and rotation are only simple deformations of the image, and there is no significant difference between them [10][11]; image deformation can generate image data of various shapes for U-Net model training. The process of data expansion in this paper is as follows: first, the original image was deformed twice, and then it was translated, rotated and flipped to complete the expansion of the original CT images.

Image segmentation using the residual double attention U-Net model

Based on data preprocessing and data expansion, the residual double attention U-Net model is used to segment the ankle injury image to ensure segmentation quality and speed. It is composed of a loop residual unit and a double attention module. The in-depth diagram of image segmentation is shown in Figure 3. Figure 3 is an end-to-end deep network model: the original CT image is input into the model, and the output is a binary segmentation map in which the white area is the segmentation target. The whole framework completes feature extraction through the training of the convolution layers [12][13]. Each training pass can generate high-level context information, and to obtain more accurate edge information an integration process is added to the segmentation framework [14]: the residual double attention U-Net model applies a 1 × 1 convolutional enhancement to the low-level feature depth before cascading. At the end of the framework, the training result is converted into a binary classification problem using the Softmax layer, and the soft-max value of each pixel in the image is calculated through the energy function

m_k(x) = exp(a_k(x)) / Σ_{k'} exp(a_{k'}(x)),

where a_k(x) refers to the activation value [15] of the pixel feature channel at point x, k is the image category, m_k(x) refers to the approximate maximum function, T is the linear conversion parameter, and i refers to the energy function coefficient. In this paper, the binary cross entropy of the pixels was used as the target to train the U-Net model [16][17].
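As an illustration of the per-pixel classification step just described (a Softmax layer over the class dimension followed by a pixel-wise cross-entropy training target), a minimal PyTorch sketch is given below; the tensor shapes and the two-class setup are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

# Assumed shapes: a batch of single-channel CT crops and binary ground-truth masks.
logits = torch.randn(4, 2, 128, 128)          # (batch, classes, H, W) raw network output
target = torch.randint(0, 2, (4, 128, 128))   # 0 = background, 1 = injury region

# Softmax over the class dimension turns activations into per-pixel class probabilities,
# mirroring the energy-function / Softmax step described in the text.
probs = torch.softmax(logits, dim=1)

# Pixel-wise cross-entropy (the binary cross-entropy training target mentioned above);
# CrossEntropyLoss combines log-softmax and per-pixel negative log-likelihood.
criterion = nn.CrossEntropyLoss()
loss = criterion(logits, target)

print(probs.shape, float(loss))
```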
A Gaussian distribution Q(0, 0.4) was used to initialize the convolution kernels. The gradient descent method was used to drive the loss function to its lowest value, with parameter updates of the standard form

θ ← θ − η ∇_θ L(θ),

where η refers to the learning rate and L represents the cost function; j and J denote the loss function coefficient and the constraint coefficient of the gradient descent method, respectively.

Loop residual unit

Ankle injury image segmentation is a very difficult task: only when the depth of the training model reaches a certain level can accurate feature extraction be achieved. Therefore, the residual structure was used to achieve ideal gradient propagation in the segmentation framework [18]. The definition formula of the residual structure is

u = F(w) + w,

where w refers to the input content of the framework, u is the output result, and F(w) + w means that after w passes through the two convolution layers, the obtained F(w) is integrated with w through the jump connection. The residual structure combines the pre-convolution content and the post-convolution content using a jump connection [19], so that the error information in the image is transmitted directly to the bottom layer of the framework, effectively avoiding the disappearance of the gradient in the calculation process. In addition, the residual structure is similar to the recall mechanism of the human brain: when people come into contact with new content, they will probably forget the content they came into contact with before, and the recall mechanism is needed to help them remember these fuzzy memories. The residual structure strengthens the original feature information in the output result using the jump connection, thus effectively avoiding the problem of network degradation. The residual structure in this paper consists of two convolution layers and one jump connection.

The residual feedback [20] can automatically extract the attribute information in the image. Compared with the recall mechanism of the residual structure, the residual feedback is similar to the consolidation mechanism of the human brain, which deepens the impression of known things through consolidation and review. The extracted features of the region of interest are taken as the input content, and feature extraction is performed again; the feature information is enhanced to strengthen the impression of the known content.

In this paper, the residual structure and residual feedback are combined to obtain the loop residual unit. The definition formula is

u_f = F(w) + w,    u_s = G(u_f) + u_f,

where u_f refers to the output vector of the first residual propagation, u_s refers to the enhancement vector of the second residual feedback, and G(·) represents the residual feedback function. In the process of image segmentation of ankle joint injury, the loop residual unit completely preserves the injury characteristics of the ankle joint using the jump connection; with the loopback connection, the overall feature extraction ability of the algorithm is improved, preparing the ground for accurate segmentation.

The double attention module

The double attention module is composed of two parts [21]: a trunk branch and a soft mask branch. The role of the trunk branch is to preserve the original features of the CT image, and the role of the soft mask branch is to preserve the features of the region of interest and enhance the weight ratio of the region of interest within the trunk branch. The definition formula of the double attention module is

q_att^l = ψ^T σ_1(W_w^T w_i^l + W_g^T g_i + b_g) + b_ψ,

where w_i^l is the content input to the attention module and g_i represents the gate signal provided by higher-level contextual information. The formula for calculating the batch-normalized attention parameters is

α_i^l = σ_2(q_att^l(w_i^l, g_i; σ_att)),

where q_att^l represents the normalization parameter and σ_att is the dual attention parameter set. The calculation formula of the activation function is

σ_2(x) = 1 / (1 + exp(−x)),

and the output of the module is y_{i,c}^l = α_i^l · w_{i,c}^l, where σ_1 and σ_2 are the activation functions, W_w^T and W_g^T represent different linear transformation coefficients, ψ^T represents the linear conversion parameter, b_g and b_ψ represent different bias parameters, and y_{i,c}^l is the output result of the attention module. In summary, the double attention module convolves high-level features and low-level features to reduce the number of channels in the segmentation framework; the high-level and low-level features are then integrated, and the weighted vector is obtained after a series of operations such as convolution layers, batch normalization processing and up-sampling.
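A minimal PyTorch sketch of the two building blocks just described (the loop residual unit and an additive attention gate of the kind the double attention module builds on) is shown below. The channel sizes, the ReLU/sigmoid choices for σ_1/σ_2 and the 1 × 1 convolutions are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

def conv_stack(channels: int) -> nn.Sequential:
    """Two 3x3 convolutions, used for both F(.) and the feedback function G(.)."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    )

class LoopResidualUnit(nn.Module):
    """Residual structure followed by residual feedback: u_f = F(w) + w, u_s = G(u_f) + u_f."""
    def __init__(self, channels: int):
        super().__init__()
        self.F = conv_stack(channels)   # first residual propagation ("recall")
        self.G = conv_stack(channels)   # residual feedback ("consolidation")

    def forward(self, w):
        u_f = self.F(w) + w             # jump connection preserves the original features
        u_s = self.G(u_f) + u_f         # loopback connection re-extracts and enhances them
        return u_s

class AttentionGate(nn.Module):
    """Additive attention gate: low-level features w are re-weighted by a gating signal g."""
    def __init__(self, channels: int, inter_channels: int):
        super().__init__()
        self.W_w = nn.Conv2d(channels, inter_channels, kernel_size=1)
        self.W_g = nn.Conv2d(channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.bn = nn.BatchNorm2d(1)
        self.sigma1 = nn.ReLU(inplace=True)   # sigma_1
        self.sigma2 = nn.Sigmoid()            # sigma_2

    def forward(self, w, g):
        # q_att = psi(sigma1(W_w w + W_g g)); alpha = sigma2(BN(q_att)); output y = alpha * w
        q_att = self.psi(self.sigma1(self.W_w(w) + self.W_g(g)))
        alpha = self.sigma2(self.bn(q_att))
        return alpha * w

x = torch.randn(1, 64, 128, 128)   # skip-connection features from the encoder
g = torch.randn(1, 64, 128, 128)   # gating signal from the deeper decoder level
print(LoopResidualUnit(64)(x).shape, AttentionGate(64, 32)(x, g).shape)
```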
The proposed algorithm

To improve the segmentation effect on athletes' ankle injury images, the segmentation algorithm based on the residual double attention U-Net model proceeds as follows. Input: athlete ankle joint injury image. Output: segmentation result of the athlete's ankle joint injury image. The region of interest is extracted from the original ankle injury image. After translation, rotation and flipping, the image data is expanded. The residual structure is used to adjust the gradient propagation and residual feedback of the segmentation framework and to extract the attribute information in the region of interest, and the two are combined to retain more image features. This is then combined with the double attention module to improve the weight ratio of the image features, and the image segmentation of the athlete's ankle joint injury is realized in the image segmentation framework based on the residual double attention U-Net model. The process is shown in Figure 4.

Experimental environment and data sets

The experimental environment of this paper is based on a Windows 10 64-bit system with 16 GB of memory, and the GPU is an NVIDIA GeForce Titan X. The deep learning PyTorch framework is used to train the model. Two data sets were selected for the experiments:
(1) The MPII human pose data set (http://human-pose.mpi-inf.mpg.de/) includes 165 images of the wrist, elbow, knee and ankle, including ankle injury images of 775 × 522 pixels; 85 of them are randomly selected as training images and the remaining 80 as test images.
(2) The UCI machine learning data set (http://archive.ics.uci.edu/ml/datasets.php). The data used in this experiment come from the Localization Data for Person Activity data set in the life science category of the UCI machine learning repository. There are 164,860 samples and 8 features in this data set (number of samples × number of features > 500,000), including the position coordinates of the left and right ankles, waist and chest of five people at different time points, and 58 ankle joint images of 896 × 768 pixels, of which 26 are ankle injury images and 32 are normal ankle images, all in 24-bit RGB format.

The experimental steps are as follows:
(1) Because the two data sets selected in the experiment are small, there are only 223 images in total.
(2) After translation, rotation and flipping using the data expansion method, 26,000 images are finally obtained as experimental data. Of these, 20,000 ankle injury images are used as the training set and 6,000 ankle injury images as the test set.
(3) After the above operations are completed, this part of the data is taken as the data set of the experiment, and the experimental procedure is carried out on the two data sets.
(4) Training on the NVIDIA GeForce Titan X GPU includes 75 stages, and each stage trains 20 images. The learning rate is set to η = 0.01, and after every 1,000 training iterations η is multiplied by 0.1.
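The training schedule described above (learning rate η = 0.01 reduced by a factor of 10 every 1,000 iterations, 20 images per stage) can be expressed with standard PyTorch utilities as in the sketch below; the placeholder model and the shortened loop are illustrative only.

```python
import torch
import torch.nn as nn

# Placeholder standing in for the residual double attention U-Net (two-class output).
model = nn.Conv2d(1, 2, kernel_size=3, padding=1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)                            # eta = 0.01
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.1)   # eta *= 0.1 every 1,000 iterations
criterion = nn.CrossEntropyLoss()

for step in range(20):                               # shortened stand-in for the full 75-stage schedule
    images = torch.randn(20, 1, 128, 128)            # 20 images per stage, as described in the text
    masks = torch.randint(0, 2, (20, 128, 128))      # binary ground-truth masks
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    scheduler.step()                                  # advances the learning-rate schedule per iteration
```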
Evaluation criteria

The algorithm in MDAN [3], the algorithm in RVSTGAN [4], the algorithm in DNAS [5], the algorithm in TANN [6] and the algorithm in LEAC [7] are compared experimentally with the algorithm in this paper, and five performance evaluation indexes are selected to verify the segmentation performance of the different methods.

DSC: DSC is used to evaluate the degree of coincidence between the actual segmentation effect and the theoretical segmentation effect of the algorithm. The larger the value of DSC, the better the segmentation effect:

DSC = 2 V(M ∩ N) / (V(M) + V(N)),

where V(·) refers to the size of a segmentation region in the image, and M and N refer to the theoretical segmentation effect and the actual segmentation effect of the algorithm, respectively.

ASSD: ASSD is used to calculate the difference between the theoretical segmentation result and the actual segmentation result of the algorithm. The smaller the value of ASSD, the closer the segmentation result of the algorithm is to the theoretical segmentation result:

ASSD = ( Σ_{m∈M} d(m, N) + Σ_{n∈N} d(n, M) ) / ( |M| + |N| ),

where d(m, N) is the distance between a pixel point m on one segmentation boundary and the closest pixel point n on the other.

PM: PM is used to measure the degree of missing segmentation of the ankle joint injury image by the algorithm. The larger the value of PM, the less the algorithm misses segmentation:

PM = TPs / GT × 100%,

where TPs refers to the size of the region correctly segmented by the algorithm, and GT refers to the theoretical segmentation region.

CR: CR is used to measure the degree of erroneous segmentation of the ankle injury image by the algorithm. The larger the value of CR, the less false segmentation in the actual algorithm:

CR = (TPs − 0.5 × FPs) / GT × 100%,

where FPs represents the size of the algorithm's erroneously segmented region.

F1 value: the F1 value is used to judge the segmentation accuracy of the algorithm. The higher the value, the higher the segmentation accuracy:

F1 = 2 × Precision × Recall / (Precision + Recall),

where Precision represents the segmentation precision of the algorithm and Recall represents the recall rate of the algorithm.
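The overlap-based indexes defined above can be computed from binary masks as in the NumPy sketch below; the 0.5 weighting of false positives in CR follows the common correspondence-ratio definition and, like the helper name, is an assumption where the original formula was not fully legible.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Overlap metrics between a predicted binary mask and the ground-truth mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tps = np.logical_and(pred, gt).sum()        # correctly segmented pixels
    fps = np.logical_and(pred, ~gt).sum()       # erroneously segmented pixels
    gt_size = gt.sum()

    dsc = 2.0 * tps / (pred.sum() + gt_size)          # Dice similarity coefficient
    pm = 100.0 * tps / gt_size                        # percent match
    cr = 100.0 * (tps - 0.5 * fps) / gt_size          # correspondence ratio (assumed 0.5 weighting)
    precision = tps / max(pred.sum(), 1)
    recall = tps / max(gt_size, 1)
    f1 = 2.0 * precision * recall / (precision + recall)
    return {"DSC": dsc, "PM": pm, "CR": cr, "F1": f1}

pred = np.zeros((64, 64), int); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64), int);   gt[12:42, 12:42] = 1
print(segmentation_metrics(pred, gt))
```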
RESULTS AND DISCUSSION

The comparison results of the DSC values of the different algorithms are shown in Figure 5. According to the data in Figure 5, the maximum DSC value of the proposed algorithm is 0.93, which is 0.01, 0.13, 0.13, 0.1 and 0.06 higher than the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7], respectively; the minimum DSC value of the proposed algorithm is 0.93, which is 0.14, 0.19, 0.18, 0.19 and 0.18 higher than the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7], respectively. This shows that the DSC value of the proposed algorithm is higher than those of the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7], which indicates that the actual segmentation effect of the proposed algorithm is closer to the theoretical segmentation effect and that its practical application effect is better.

The comparison results of the ASSD values of the different algorithms are shown in Figure 6. According to the data in Figure 6, the maximum ASSD of the proposed algorithm is 0.1, which is lower than the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7] by 0.33, 0.38, 0.5, 0.49 and 0.46, respectively; the minimum ASSD of the proposed algorithm is 0.1, which is lower by 0.26, 0.31, 0.38, 0.26 and 0.12 than the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7], respectively. This shows that the ASSD value of the proposed algorithm is lower than those of the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7], indicating that the segmentation result of the proposed algorithm is closer to the theoretical segmentation result and that its practical application effect is better.

Three ankle injury images were randomly selected from the test set, and the proposed algorithm was compared with the algorithms in MDAN [3], RVSTGAN [4], DNAS [5], TANN [6] and LEAC [7] for image segmentation, as shown in Figure 7. There are many noises and impurities in the segmentation results of the algorithm in MDAN [3], and important details of test image 1 and test image 2 are lost; the segmentation results of the algorithm in RVSTGAN [4] are not well processed and contain many burrs; the algorithm in DNAS [5] shows some missing segmentation; the algorithm in TANN [6] has the problem of losing important details; and the segmentation results of the algorithm in LEAC [7] lose details and contain many impurities. In contrast, the proposed algorithm has no missed segmentation or false segmentation, the edge processing is ideal and free of burrs, and the similarity between the segmented image and the original image is very high, which indicates that the proposed algorithm has high segmentation accuracy.

The calculation results of PSNR, MSE, AD, LMSE and NAE for the proposed algorithm and the other algorithms are shown in Table 2. According to the comparison results of the different methods in Table 2, the PSNR of the proposed algorithm is 41.2, which is much higher than the other methods, and the MSE of the proposed algorithm is only 0.12. Among the other methods, the MSE of the LEAC [7] algorithm is as high as 0.89, the largest mean square error. For the AD index, the proposed algorithm has the lowest value of 0.01, and among the other algorithms, the AD of the LEAC [7] algorithm has the highest value of 0.07. The comparison of the LMSE index shows that the LMSE value of the proposed algorithm is only 0.11, while the LMSE of the other methods is above 0.3. In the comparison of the NAE index, the proposed algorithm still holds a significant advantage, with the lowest NAE of only 0.01. Through the above comparison, it can be seen that the proposed algorithm has obvious advantages.
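For reference, the full-reference image-quality indices reported in Table 2 can be computed as in the sketch below (LMSE, which requires a Laplacian operator, is omitted for brevity); the assumption of 8-bit images with a peak value of 255 is ours.

```python
import numpy as np

def image_quality(seg: np.ndarray, ref: np.ndarray, peak: float = 255.0) -> dict:
    """Simple full-reference quality indices between a segmented image and a reference image."""
    seg = seg.astype(float)
    ref = ref.astype(float)
    diff = ref - seg
    mse = np.mean(diff ** 2)                                    # mean squared error
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    ad = np.mean(diff)                                          # average difference
    nae = np.abs(diff).sum() / np.abs(ref).sum()                # normalized absolute error
    return {"MSE": mse, "PSNR": psnr, "AD": ad, "NAE": nae}

ref = np.random.randint(0, 256, (128, 128))
seg = np.clip(ref + np.random.randint(-3, 4, ref.shape), 0, 255)
print(image_quality(seg, ref))
```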
CONCLUSIONS

In this paper, a segmentation algorithm based on the residual double attention U-Net model is proposed for ankle injury images, which highlights the characteristics of the region of interest in the image and reduces the probability of segmenting other regions. In comparative simulation experiments with other methods, the DSC value of the proposed algorithm is between 0.90 and 0.93, the ASSD value is between 0.07 and 0.10, the PM value is between 0.92 and 0.96, the CR value is between 0.93 and 0.95, and the average F1 value is 95.7%. This shows that the proposed algorithm has no missed segmentation or false segmentation; at the same time, the edge processing is ideal and there are no burrs. The similarity between the segmented image and the original image is very high, and the segmentation effect is good. However, it is found that the proposed algorithm has difficulty achieving accurate segmentation for slices with small shapes. Future research can focus on applying the proposed residual double attention U-Net model to segmenting images of body parts beyond athlete ankle injuries. Additionally, further exploration can be made to improve the pre-processing techniques for extracting regions of interest.

Figure 1. The framework of the proposed algorithm.
Figure 2. Region of interest extraction in CT images.
Figure 3. In-depth diagram of ankle joint injury image segmentation.
Figure 4. Process of the proposed algorithm.
Figure 5. Comparison results of DSC value.
Figure 6. Comparison results of ASSD value.
Figure 7. Comparison results of PM value.
Figure 8. Comparison results of CR value.
Table 2. Comparison results of PSNR, MSE, AD, LMSE, NAE.
2023-09-06T15:16:39.619Z
2023-09-04T00:00:00.000
{ "year": 2023, "sha1": "1fcc2927dad7353f26ef5ad72fdebcb2f62c5c4f", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/babt/a/3GJ6P6bdVkzHNyg7Zq7Fsbq/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Dynamic", "pdf_hash": "bf3a48583ed0b72864c8808e808b9b1125130162", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
253265039
pes2o/s2orc
v3-fos-license
Spectroscopic Time-series Performance of JWST/NIRSpec from Commissioning Observations We report on JWST commissioning observations of the transiting exoplanet HAT-P-14 b, obtained using the Bright Object Time Series (BOTS) mode of the NIRSpec instrument with the G395H/F290LP grating/filter combination ($3-5\mu$m). While the data were used primarily to verify that the NIRSpec BOTS mode is working as expected, and to enable it for general scientific use, they yield a precise transmission spectrum which we find is featureless down to the precision level of the instrument, consistent with expectations given HAT-P-14~b's small scale-height and hence expected atmospheric features. The exquisite quality and stability of the \emph{JWST/NIRSpec} transit spectrum -- almost devoid of any systematic effects -- allowed us to obtain median uncertainties of 50-60 ppm in this wavelength range at a resolution of $R=100$ in a single exposure, which is in excellent agreement with pre-flight expectations and close to the (or at the) photon-noise limit for a $J = 9.094$, F-type star like HAT-P-14. These observations showcase the ability of NIRSpec/BOTS to perform cutting-edge transiting exoplanet atmospheric science, setting the stage for observations and discoveries to be made in Cycle 1 and beyond. INTRODUCTION Corresponding author: Néstor Espinoza nespinoza@stsci.edu The NIRSpec/BOTS mode is one of the prime modes for transiting exoplanet science onboard the James Webb Space Telescope (JWST) (Beichman et al. 2014;Birkmann et al. 2022). The mode offers precise spectroscopy of transiting exoplanets from 0.6 to 5 µm in a single exposure via its low resolution (R ∼ 100) Prism mode, as well as high resolution (up to R ∼ 2700) measurements using various combinations of dispersers and filters. As such, the NIRSpec instrument has unique capabilities to cover a wide range of science cases, and is set to perform observations of exoplanets of all sizes and temperature regimes, including worlds that could host suitable conditions for life as we know it (see, e.g., Lewis et al. 2017;Lafreniere 2017;Rathcke et al. 2021;Lim et al. 2021). During the commissioning of JWST, Time Series Observations (TSOs) were obtained in all instruments to determine two key properties of the science instruments and modes. The first was to verify that these technically challenging observations were executing as planned. The second objective was to determine if the various instrument modes could be calibrated with sufficient accuracy to precisely measure the small flux variations caused by the transits, and to identify additional calibration needs if limitations were found. For the JWST instruments offering observations in the near-infrared (NIR; here defined as wavelengths up to 5 µm), a common target was decided to study the above mentioned properties of TSOs: the transiting exoplanet HAT-P-14 b (Torres et al. 2010). HAT-P-14 b is a dense (2.3M J ; 1.15R J ), short-period (4.6-day) exoplanet orbiting a relatively bright J = 9.09, 1.5R , low-activity F-star (Bonomo et al. 2017). Given its massive nature, it has a relatively small scale-height (H ∼ 150 km) which, combined with the large stellar radius, should give rise to small variations in the transit depth as a function of wavelength due to the exoplanetary atmosphere (20-60 ppm depending on the assumptions used to calculate this signature). 
This provided us with an excellent target to commission the NIR-Spec/BOTS mode: a target for which we expect, based on reasonable assumptions, a featureless transmission spectrum down to the precision level of the instrument for a single transit. Any observed variations in the transit depth as a function of wavelength therefore would most likely be due to instrumental rather than astrophysical effects, and thus would allow us to pinpoint any irregularities in either the instrument and/or the data reduction and calibration. The HAT-P-14 b system also has the advantage of being thoroughly characterized via precise ground and space-based photometry (see, e.g., Saha & Sengupta 2021;Fukui et al. 2016;Simpson et al. 2011), radial-velocities (e.g., Bonomo et al. 2017;Torres et al. 2010) and even adaptive optics and astrometric constraints on possible nearby companions (Belokurov et al. 2020;Ngo et al. 2015), which enabled us to identify causes for possible deviations from any solutions we obtained by analyzing JWST/NIRSpec spectrophotometry. Here we present the analysis and results of JWST commissioning observations of a single transit event of the exoplanet HAT-P-14 b using the NIRSpec/BOTS mode; the very same data that were used to enable this mode for scientific use. The article is organized as follows. In Section 2, we present the observations and data reduction of the dataset. In Section 3 we present a detailed analysis of the TSO, along with performance metrics for the mode retrieved from these observations. Finally, we conclude with a discussion of our findings in Section 4 and a summary of our main findings in Section 5. Observations A TSO was obtained during the JWST commissioning campaign on 2022-05-30 between 00:30 and 07:02 UT with the NIRSpec/BOTS mode, targeting the star HAT-P-14 (PID 1118; PI: Proffitt). The objective of this observation was to measure the transit event of the exoplanet HAT-P-14 b in order to measure its transmission spectrum, i.e., the transit depth as a function of wavelength. The 6-hour exposure consisted of 1139 integrations with 20 NRSRAPID groups per integration, taken with the G395H/F290LP grating/filter, which covers the wavelength range from 2.87 to 5.14 µm with a resolution of about R ∼ 2, 700, using the S1600A1 aperture and the SUB2048 subarray. This resulted in data gathered by both the NRS1 and NRS2 detectors, each having a height of 32 pixels and a width of 2048 pixels, which included four columns of reference pixels at the left and right edges of the selected subarray. The pixel scale, for reference, is about 100 milli-arcseconds (mas) per pixel. Figure 1 shows the median rates per integration across the entire exposure, which shows several interesting features. The first and most evident is the clear distorted shape of the spectral trace (blue curve; see Section 2.2 for details on how this was obtained). This covers from 2.7 to 3.7 microns in NRS1, and from about 3.8 to 5 microns on NRS2. The second are a number of bad and hot pixels in the frame. Bad pixel masks are being provided by the NIRSpec team to identify those via the Calibration Reference Data System (CRDS) 1 . The third feature is the lack of significant structure in the background, which suggests it will be relatively straightforward to remove it from observations. Finally, we also note what appears to be some scattered light evident on the right-most ∼ 50 pixels in the NRS1 detector, which is also seen on a few of the left-most pixels of the NRS2 detector. 
The scale of the count rate in Figure 1 makes this feature appear much more dramatic than it actually is in reality (by design -we wanted to highlight this in the figure): the extra countrate of these enlarged wings only adds of the order of ∼ 1 DN/s/pixel, whereas the peak counts in that region are about ∼ 1, 000 DN/s/pixel for HAT-P-14. Even if this component contained scattered light from nearby wavelengths, the resulting dilution of the extracted spectrum would be negligible. It is important to note that the distortion of the right edge of the spectrum seen in the NRS2 detector made the spectra somewhat hard to extract, as it falls right on the corner of the detector. Based on these commissioning observations, the NIRSpec team therefore decided to move the NRS2 subarray by four pixels in the vertical direction such that the spectrum is fully contained in the subarray, and can be more easily extracted. Therefore, the science observations obtained after commissioning will have a slight offset between the spectral trace seen in NRS1 and NRS2. Spectral tracing We describe the data reduction in detail below, starting from the rates per integration, i.e., the rateint.fits products, after reducing the uncal.fits files available via the Mikulski Archive for Space Telescopes (MAST 2 ) using version 1.6.2 of the JWST pipeline 3 with its default parameters (which we note does not include sky substraction/calibration for this mode/instrument -we detail how we deal with this in Section 2.3). From these products, we use the transitspectroscopy library which contains custom routines designed for transit spectroscopy measurements to reduce the data, which is publicly available via GitHub 4 . First, we independently trace the spectral shape for each integration and for each detector by crosscorrelating a Gaussian with each column, and finding the maximum of the resulting cross-correlation function (CCF). For the NRS1 detector, we start this process from column 500 and for the NRS2 detector, we start the process from column 5, in both cases following the trace all the way through the right-edge of the detector. Outliers due to cosmic-rays, bad and hot pixels were identified on these trace shape measurements by running a median filter through the trace at each integration with a window of 11 pixels in the wavelength direction and finding trace positions deviating more than 5 standard deviations from this trace. The outlier-corrected traces were then smoothed using a spline. The trace positions as a function of time are presented in Figure 2. These show remarkable stability throughout the exposure, even during a high-gain antenna (HGA) move that happened at about half an hour after the start of the exposure, showing deviations within 1/500th of a pixel (0.2 mas) over the 6-hour exposure. There seems to be both high-frequency variations and a slight systematic movement of the trace during the first two hours, but these do not appear to cause any evident systematic trend on the actual spectrophotometry, and seem to be consistent across the two detectors. A detailed analysis of the trace time variability is given in Appendix A. 2.3. 1/f corrections 1/f noise is an important component of the nearinfrared JWST detector noise which can give rise to significant scatter in a TSO if not accounted for and at least partially corrected (see, e.g., Schlawin et al. 2020;Rustamkulov et al. 2022). 
We perform 1/f (and sky background) corrections at the integration level by first masking all pixels within 15 pixels of the traces, taking the median of the non-masked pixels in each column, and subtracting that median from each pixel in that column. This methodology is, of course, not perfect because it does not take care of the intra-column component of the 1/f noise; however, it has the added benefit of correcting for any sky background signal, as well as any column-to-column offset created by 1/f noise. We find that this correction significantly improves the precision of our analysis, as already suggested by the works of Schlawin et al. (2020) and Rustamkulov et al. (2022).

Spectral extraction

Using the 1/f-corrected integrations and the traces obtained as described in the previous section, we proceed to extract the spectra at each integration. Note this implies we are not flat-fielding our data prior to extraction; the impact of this is presumed to be small given the precise stability of the spectrum presented in the previous sections. Given that we find a number of residual cosmic rays, as well as bad and hot pixels not corrected by the JWST STScI pipeline, we decided to extract the spectrum using the optimal extraction algorithm for distorted spectra of Marsh (1989), which automatically takes care of outliers in the 2D spectral profile. The implementation we use is the one described in Brahm et al. (2017), with the difference that we use a variance array as input to the algorithm. The spectrum is extracted using a 14-pixel aperture around the trace for NRS1 and NRS2 independently; this aperture size was selected so as to be consistent across the extracted wavelength range, as well as to maximize the signal-to-noise ratio while extracting the spectrum over as wide a wavelength range as possible (other smaller and larger apertures gave overall similar results). For NRS2, we don't extract the spectrum all the way to the corner; instead, our chosen aperture only allows us to extract the spectrum up to 4.8 microns. Wavelengths are assigned to each column by making use of the JWST pipeline's wavelength map, which is in turn obtained through the NIRSpec instrument model, which was observed during commissioning to work according to pre-flight specifications, which are excellent for the purposes of this work (Lützgendorf et al. 2022).

Figure 1. Median rates per integration across the exposure for NRS1 and NRS2; the trace of the very first integration as obtained by our methods is shown in blue. These were obtained using the raw rates per integration produced by the JWST pipeline, for which the median background has been subtracted. Hot and bad pixels in the NRS1 and NRS2 detectors are clearly observed, as well as what appears to be some scattered light at the right edge of NRS1 and the left edge of NRS2 (see text for details). Note that the count rate illustrated in the frame is significantly constrained (between -0.2 and 0.2 DN/s) in order to highlight the (minimal) background structure. Also note that the aspect ratio of the frame has been stretched, in particular in the 32-pixel cross-dispersion direction, for illustration purposes.

The extracted median spectrum of HAT-P-14 is shown in Figure 3. As can be observed, only a small fraction of the large number of bad and hot pixels clearly seen in Figure 1 remain after using our optimal extraction procedure.

Figure 3. Median extracted spectrum for HAT-P-14 using optimal extraction on the frame shown in Figure 1 for NRS1 (left) and NRS2 (right). Note we only extract up to 4.8 µm as the rest of the spectral range up to 5 µm falls at the very corner of the detector; this will not be the case for future science observations, as the subarray has since been moved by 4 pixels to avoid this (see text for details). Also note that the spectra have not been flat-fielded, which explains the high-frequency variations as a function of wavelength, especially in the NRS1 spectrum.

Using these extracted spectra, we then move to a discussion on the analysis and TSO observed performance during commissioning in the next section.
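A simplified NumPy sketch of the column-wise trace finding and the column-median 1/f/background correction described in this section is given below; the frame dimensions, Gaussian width and helper names are illustrative assumptions and are not part of the transitspectroscopy or jwst pipeline APIs.

```python
import numpy as np
from scipy.signal import medfilt

def trace_positions(frame: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Cross-correlate each column with a Gaussian and take the CCF maximum as the trace position."""
    ny, nx = frame.shape
    rows = np.arange(ny)
    pos = np.zeros(nx)
    for col in range(nx):
        profile = frame[:, col]
        ccf = [np.nansum(profile * np.exp(-0.5 * ((rows - c) / sigma) ** 2)) for c in rows]
        pos[col] = np.argmax(ccf)
    # Reject outliers (cosmic rays, bad/hot pixels) against an 11-pixel median filter of the trace.
    smooth = medfilt(pos, kernel_size=11)
    resid = pos - smooth
    bad = np.abs(resid) > 5.0 * np.std(resid)
    pos[bad] = smooth[bad]
    return pos

def correct_one_over_f(frame: np.ndarray, trace: np.ndarray, half_width: int = 15) -> np.ndarray:
    """Column-by-column median subtraction, masking pixels within `half_width` pixels of the trace."""
    ny, nx = frame.shape
    rows = np.arange(ny)
    out = frame.copy()
    for col in range(nx):
        background = np.abs(rows - trace[col]) > half_width   # keep only background pixels
        out[:, col] -= np.nanmedian(frame[background, col])
    return out

# Toy 32-row subarray with a bright fake trace near row 8, standing in for a rateints frame.
rng = np.random.default_rng(1)
frame = rng.normal(0.0, 1.0, (32, 512))
frame[7:10, :] += 100.0
clean = correct_one_over_f(frame, trace_positions(frame))
```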
Band-integrated light curve analysis

As a first-order analysis, we construct the "band-integrated" light curve of HAT-P-14 b by adding up the light from NRS1 and NRS2 separately. We decided to fit these light curves with a simple model which included a batman transit model (Kreidberg 2015) and a linear trend in time. We used a square-root law to parametrize limb-darkening in these light curve fits as, based on simulations performed by our team similar to those suggested in Espinoza & Jordán (2016), this was one of the laws that performed best at precisely and accurately recovering the input transit parameters. As priors for our fit, we used orbital parameters retrieved by fitting TESS transit light curves from Sectors 25 and 26 in the exact same manner as described in Patel & Espinoza (2022), from which we obtained a/R* = 8.34 ± 0.31 and b = 0.90 ± 0.02, and for which we left the eccentricity and argument of periastron fixed to the values found in the literature (Bonomo et al. 2017; e = 0.11, ω = 106.1 deg), which we also fixed in our JWST/NIRSpec light curve fits. Finally, we used the parametrization of Kipping (2013) in our band-integrated light curve fits, leaving the transformed limb-darkening coefficients (q1, q2) as free parameters in our fits with uniform priors between 0 and 1. A jitter term was also added and fitted to both light curves. The juliet (Espinoza et al. 2019) library was used to perform these light curve fits independently for NRS1 and NRS2, using dynesty as the sampler via dynamic nested sampling (Speagle 2020).

The results of those fits are presented in Figure 4; a table with our priors and posteriors is presented in Table 1. As can be observed, the simple model defined above resulted in an excellent fit to our JWST/NIRSpec G395H data. This is quite impressive considering we did not remove any data points for the fit, in particular data points at the beginning of the exposure; this suggests that there are negligible persistence effects in the NRS1 and NRS2 detectors, at least at the fluences probed by our observations. The observed scatter in the light curve, however, is larger by a factor of about 3 than expected from pure read-noise and photon-noise statistics as calculated by the JWST pipeline; the most likely explanation for this is residual 1/f spatial covariance, which causes extra white-noise scatter in a TSO. This noise is not seen, nor does it impact the analyses, if one performs light curve analyses at the resolution level of the instrument (see Section 3.2 for a discussion on this), where we reach photon- and read-noise limited precision. Indeed, the overall residual structure of a simple light curve fit like this shows a lack of correlated noise structure. We demonstrate this using Allan variance plots on the residuals, presented in Figure 5.
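The light curve model used in these fits (a batman transit multiplied by a linear trend in standardized time) can be written down in a few lines; in the sketch below, the orbital period, limb-darkening coefficients and trend parameters are placeholders, while a/R*, b, e and ω follow the values quoted above, and the inclination is derived from b assuming a circular orbit for simplicity.

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                                   # time of mid-transit (placeholder)
params.per = 4.6                                  # orbital period in days (approximate value from the text)
params.rp = np.sqrt(6627e-6)                      # Rp/R* from the ~6627 ppm broadband depth
params.a = 8.34                                   # a/R* (TESS prior quoted in the text)
params.inc = np.degrees(np.arccos(0.90 / 8.34))   # from b = (a/R*) cos(i), ignoring eccentricity
params.ecc = 0.11                                 # eccentricity fixed to the literature value
params.w = 106.1                                  # argument of periastron (deg)
params.limb_dark = "squareroot"
params.u = [0.1, 0.3]                             # placeholder square-root limb-darkening coefficients

t = np.linspace(-0.15, 0.15, 1139)                # days from mid-transit (1139 integrations in the exposure)
transit = batman.TransitModel(params, t).light_curve(params)

# Linear trend in standardized time, multiplied into the transit model.
t_std = (t - t.mean()) / t.std()
slope, intercept = -1e-4, 1.0                     # placeholder systematics parameters
model = transit * (intercept + slope * t_std)
```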
The plots show that as we bin the data in time, the standard deviation decreases closely following a 1/ √ N bin shape (dashed lines in this figure), which is consistent with the expectation for uncorrelated white-noise. It is interesting to note, however, that the NRS1 transit light curve clearly shows a more pronounced exposure-long slope than the NRS2 one (about −140 ppm per hour for NRS1, compared to 40 ppm per hour for NRS2). While we observe some wavelengthdependence of the slope for NRS1 (see next sub-section), the smaller slope seen in the NRS2 light curves is fairly constant for all wavelengths. Given that the NRS1 fluence levels vary relatively little at the high signal-tonoise ratio portions of the spectra as compared to NRS2 (see Figure 1), it is unlikely this is purely a fluencedependent effect. Also, given the significant difference between the slopes on NRS1 and NRS2, it is unlikely this is, e.g., an astrophysical, observatory (e.g., "tilt" events; Rigby et al. 2022) and/or instrument-level effect -all those explanations should give rise to smooth transitions from one detector to the other as well as similar effects on both detectors. The latter two set of hypotheses are also unlikely given the stability of both the traces (Figure 2) and the full-width at half maximum (FWHM; a detailed analysis of the FWHM shape and its time stability is presented in Appendixes A and B) of the spectral profile measured during the observations, which we present on Figure 6 and which have been shown by Schlawin et al. (in prep.) and Beatty et al. (in. prep.) to be excellent tracers of "tilt events". Given all this, it is possible the effect is due to some underlying detector-level effect, but this is currently under investigation. A discussion on possible causes is presented in Section 4. Table 1. Prior and posterior parameters of the band-integrated lightcurve fits performed on the NRS1 and NRS2 data of HAT-P-14 b. For the priors, N (µ, σ 2 ) stands for a normal distribution with mean µ and variance σ 2 ; U (a, b) stands for a uniform distribution between a and b, respectively and log U (a, b) stands for a log-uniform prior on the same ranges. Priors for b and a/R * come from TESS (see text for details). Wavelength-dependent light curve analysis ψ For a more precise estimate on the transit depth, see Figure 11. See text for an explanation of why the band-integrated light curve achieves lower precisions than the light curves at each resolution element sampled by the instrument. † These parameterize the square-root limb-darkening law using the transformations in Kipping (2013). ‡ The linear trend slope was fitted to the standardized times, i.e., to the mean-subtracted time-stamps divided by the standard deviation of the time-stamps. Light curve scatter Before jumping into the analysis of the transit event as a function of wavelength, we perform simple analyses to understand whether the light curve scatter at the native resolution of the instrument is, indeed, being correctly estimated by the JWST pipeline. This latter calculation includes both photon-noise and read-noise characteristics of the detector, including the impact of 1/f noise on a single pixel, but not the covariance this might have with nearby pixels. 
To perform this comparison, we took the out-of-transit scatter (i.e., the standard deviation of the flux timeseries) on the wavelength-dependant light curves of both NRS1 and NRS2 at the spectral sampling of the instrument (i.e., at each column in Figure 1) and compared that to the out-of-transit median errorbars reported by the JWST pipeline (i.e., adding in quadrature the pipeline-reported uncertainties on a given column, and taking its square-root). We then simply took the ratio of these two numbers, which in an ideal world should be exactly 1. These ratios are presented in Figure 7. As can be seen from Figure 7, at the native resolution of the instrument, the JWST pipeline precisely predicts the noise levels observed on the out-of-transit light curves. There seems to be, however, a slight but significant extra scatter of about 5% in the actual data when compared against the pipeline products for NRS2. The source of this discrepancy is currently under investigation. Next, we perform the same analyses but performing binning of the spectral channels. This is a somewhat standard procedure in transiting exoplanet spectroscopy for which combining the light from different wavelength bins (i.e., columns in Figure 1) helps to boost the signal-to-noise ratio of the light curves, with the band-integrated light curves in Section 3.1 representing the limiting case for this approach. The motivation for doing such an experiment on JWST data is 1/f noise. While we performed some corrections on this effect in Section 2.3, these are most definitely not correcting the effect completely. Column-by-column median subtraction, in particular, can be roughly thought of as subtracting the median every 32 samples of a 1/f time-series. This will procedure thus will remove some but not all the covariance both within and between columns in a given integration. Hence, some residual covariance due to 1/f noise is expected to "leak" into our measurements. Figure 8 shows the same experiment as Figure 7, but after performing spectral binning after spectral extraction of three different widths: 10, 30 and 100 columns/pixels for NRS1 (as a comparison, the bandintegrated light curve for NRS1 implies a 1543-column bin). The results of this experiment shows that one observes a larger scatter than the one one would expect by simply adding in quadrature the uncertainties reported by the JWST pipeline. We believe part of this is related to the residual covariance between pixels of different columns described above, which makes our simple addition-of-variances to estimate the resulting noise of the bin a lower limit on the actual total variance. Part of it could also be instrumental systematics that only become evident once a better signal-to-noise ratio is attained via the spectral binning. A full investigation and implementation of the covariance due to, e.g., 1/f noise on the JWST pipeline is outside the scope of this work (but see, e.g., Schlawin et al. 2020), as well as a detailed investigation of more subtle sources of systematic noise. Despite of this, in our experiments we have found that the best way to work with JWST data from NIR detectors is to work at the spectral sampling level of the instrument (i.e., at the column-to-column level). Then, observables such as, e.g., the transit depth as a function of wavelength can be binned in a post-processing stage. We have found this methodology to give the most accurate and precise results during commissioning. 
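The measured-versus-expected scatter comparison described above, and how it degrades as columns are binned, can be emulated on simulated data with the short NumPy sketch below; the array sizes and noise level are stand-ins, not the actual HAT-P-14 data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_int, n_col = 800, 512                       # out-of-transit integrations x spectral columns
pipe_err = np.full((n_int, n_col), 1e-3)      # stand-in for pipeline-propagated uncertainties
flux = 1.0 + rng.normal(0.0, pipe_err)        # white-noise light curves at native sampling

def scatter_ratio(flux, pipe_err, bin_width=1):
    """Measured out-of-transit scatter over the scatter expected from the pipeline errors."""
    n_bins = flux.shape[1] // bin_width
    ratios = np.zeros(n_bins)
    for i in range(n_bins):
        sl = slice(i * bin_width, (i + 1) * bin_width)
        binned_flux = flux[:, sl].mean(axis=1)
        measured = np.std(binned_flux)
        # Expected scatter of the mean of `bin_width` columns, adding variances in quadrature.
        expected = np.sqrt((pipe_err[:, sl] ** 2).sum(axis=1)).mean() / bin_width
        ratios[i] = measured / expected
    return ratios

for width in (1, 10, 30, 100):
    print(width, np.median(scatter_ratio(flux, pipe_err, width)))
```

For purely uncorrelated noise this ratio stays near 1 at every bin width; residual 1/f covariance between columns in real data drives it above 1 as the bins grow.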
Wavelength-dependent light curve modelling Following the procedures described above, we proceeded to model the wavelength-dependant light curves in the very same way as described for the bandintegrated light curve in Section 3.1. The only difference was that we (1) fixed the orbital parameters to the same ones ingested as priors on the band-integrated lightcurve analysis, (2) fixed the time-of-transit center to the one found in our band-integrated lightcurve analysis and (3) used priors on the square-root limbdarkening coefficients instead of letting them go free in the light curve fitting procedure. The prior was a truncated Gaussian defined between 0 and 1 for both q 1 and q 2 ; the mean of those Gaussians was obtained by using the limb-darkening package described in Espinoza & Jordán (2015), for which we ingested the stellar properties of HAT-P-14 defined in Bonomo et al. (2017) and used ATLAS (Kurucz 1979) intensity profiles to compute model limb-darkening coefficients. Then, we passed the resulting non-linear limb-darkening coefficients through the SPAM algorithm of Howarth (2011) to obtain the final square-root law limb-darkening coefficients we set for the mean of our priors. The standard deviation of the prior for each coefficient was set to 0.1. Following the lessons learned in Section 3.2.1, we work at the resolution level of the instrument and fit each transit light curve with a combination of a transit model and a slope, which we found gave very good results. Figure 9 show some sample light curves fitted following our procedures. As can be seen, the quality of these native-resolution light curves is very good, and our transit model plus linear trend seems to be enough to model most of the systematic trends observed in the data. Figure 4 for NRS1 (top) and NRS2 (bottom). Dashed colored lines show the expected decrease on the scatters as a function of binning for perfect white-noise, i.e., decreasing as the square-root of the binsize, which is a very good match to the actual observed data, suggesting a lack of strong correlated noise structure. Our wavelength-dependant transits allowed us to measure two observables. First, it allowed us to explore the wavelength-dependence of the exposure-long linear trends already described in Section 3.1, but it also allowed us to obtain the main observable used to commission the instrument: the transit depth (and its uncertainty) as a function of wavelength. We discuss our analyses of those two observables in the next sub-sections. Linear trend wavelength dependence From our light curve modelling described in Sections 3.1 and 3.2.2, we were able to retrieve the wavelengthdependence of the linear trend observed both in the NRS1 and NRS2 data. While we obtained these slopes at the resolution element of the instrument, we bin those down to a resolution of R = 200 for easier visualization. Figure 10 show the slopes as a function of wavelength for both NRS1 and NRS2. As can be observed, a slope is observed in both detectors, but the observed slope is much stronger on NRS1 than in NRS2 as already hinted in Section 3.1. . Ratio between measured over expected (from the JWST pipeline) out-of-transit light curve scatter for our HAT-P-14 TSO as seen by NRS1 using different number of spectral bins: 10 (top), 30 (middle) and 100 pixels/elements (bottom) panels (black points; blue lines are median filters shown for illustration). 
As can be seen, the larger the bin, the larger the deviation of the observed scatter versus the predicted one just from the JWST pipeline error bars. The main suspect for this behavior is 1/f noise (see text). Our results for the wavelength-dependence of the exposure-long slope are very interesting in particular for NRS1. First, it seems at the shortest wavelengths of the NRS1 data the slope strongly decreases in amplitude down to around zero. However, it quickly settles to a value of about -140 ppm/hour at about 3 microns, and stays more or less constant at the redder NRS1 wavelengths. The constant value observed for the NRS2 detector, however, is of about 30 ppm/hour -a factor of almost 5 smaller in amplitude (and different sign). Given the completely different level of slopes observed between the detectors, the stability of other metrics (e.g., trace positions, FWHM), and the fact that the amplitude and sign of the slopes don't smoothly evolve as a function of wavelength from one detector to the other as a function of wavelength, there is a strong suggestion that the slope is somehow introduced at the detector level in the NRS1 detector. We further discuss this possibility in Section 4. HAT-P-14 b's transmission spectrum Next, we retrieve HAT-P-14b's transmission spectrum from our light curve fits. As discussed above, the transmission spectrum we forge is obtained at the resolution level of the instrument, which we then bin down to a lower resolution of R = 100. Figure 11 (and Table 2) shows the transmission spectrum of HAT-P-14 b. To test whether this spectrum is consistent with a featureless spectrum (which is the expectation), we performed a chi-square test under the assumption of a flat spectrum and found a p-value of 0.13 -consistent with the featureless spectrum scenario. Running the test separately on NRS1 and NRS2 yielded p-values of 0.11 and 0.30, respectively -also consistent with the featureless spectrum scenario. It is also important to note that all data points above 2.85 µm at R = 100 have uncertainties below 100 ppm -in particular, the median error bar of the spectrum on NRS1 at those wavelengths is 48 ppm, with the scatter on the spectrum itself being 48 ppm (i..e, both consistent with each other). Similarly, for NRS2, the median uncertainties were of 63 ppm with an actual measured standard deviation on the spectrum of 64 ppm (again, both consistent with each other). As we will show in Section 4, while these precisions are about 20% larger than what tools like PandExo (Batalha et al. 2017) predict, they are about ∼ 10% better than pre-flight expectations from Pandeia (Pontoppidan et al. 2016) if one accounts for the fact that we are fitting for a limb-darkened star and not a flat-bottomed transit as the one PandExo assumes. One final notable result from this transit spectrum is the exquisite precision on the broadband transit depth we are able to obtain by calculating the weighted mean of the transit depth at all wavelengths using this transit spectrum: we obtain a weighted mean transit depth of 6627 ± 8 ppm (purple band in Figure 11), which is in excellent agreement (within 1-sigma) of the one obtained by the JWST/NIRCam short-wavelength photometry observations of HAT-P-14 b (Schlawin et al., in prep.). 
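For reference, the flatness test and the weighted-mean depth quoted above can be computed with a short sketch like the following, assuming the R = 100 transit depths and their uncertainties are available as arrays (placeholder names):

```python
import numpy as np
from scipy import stats

def flatness_test(depth_ppm, depth_err_ppm):
    """Chi-square test of a transit spectrum against a flat (featureless) line."""
    depth_ppm = np.asarray(depth_ppm)
    depth_err_ppm = np.asarray(depth_err_ppm)
    # The best-fit flat line is the inverse-variance weighted mean of the depths
    w = 1.0 / depth_err_ppm**2
    mean_depth = np.sum(w * depth_ppm) / np.sum(w)
    mean_depth_err = 1.0 / np.sqrt(np.sum(w))   # e.g., ~8 ppm for the full bandpass
    chi2 = np.sum(((depth_ppm - mean_depth) / depth_err_ppm) ** 2)
    dof = len(depth_ppm) - 1                    # one fitted parameter (the mean)
    p_value = stats.chi2.sf(chi2, dof)          # p > 0.05: consistent with a flat line
    return mean_depth, mean_depth_err, p_value
```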
General TSO performance of NIRSpec/BOTS Via a detailed analysis of the TSO targeting a transit event of the exoplanet HAT-P-14 b, we have shown in Section 3 the excellent sensitivity and stability of NIR-Spec/BOTS to perform precise relative flux measurements, successfully enabling us to understand the two key properties we set as objectives for our commissioning observations: that the instrument is properly working for these kind of scientific objectives and that, indeed, the instrument can be calibrated to perform these precise flux measurements. The fact that the instrument is properly working for precise flux measurements was shown at several stages. In particular, perhaps the most straightforward is that, as shown in Section 3.2 and Figure 11, we are able to obtain a featureless transmission spectrum of the exoplanet HAT-P-14 b, which was the expectation given its massive nature. In addition, the precision with which we measure this spectrum (transit depth errors of 50-60 ppm at a resolution of R = 100) is very close to what was expected prior to launch. We compare our results to pre-flight expectations in Figure 12 via two experiments. The first was to compare the transit depth errors we obtain against predictions from PandExo (Batalha et al. 2017, grey points in Figure 12), from which we observe precisions that are about ∼ 20% larger than those simulations. One important caveat of PandExo, however, is that the tool assumes flat-bottomed transits to perform its transit depth calculations, i.e., it omits the impact of limb-darkening in the transit lightcurves. While in general at the wavelengths probed by our NIR-Spec/G395H observations this assumption might be a fair one to make, the limb-darkening effect on the transit lightcurve is very prominent on HAT-P-14 b due to the relatively large impact parameter of the transits (see Table 1). This can be readily observed in the "U"-shaped transit lightcurve both in Figures 4 and 9. In order to perform an "apples-to-apples" comparison to pre-flight expectations then, on our second experiment we decided to use the input noise information from Pandeia (Pontoppidan et al. 2016) that PandExo uses to generate its simulations to produce noisy limb-darkened transit lightcurves instead. These simulated transit lightcurves had limb-darkening profiles assuming a non-linear limbdarkening law, with coefficients calculated as described in Section 3.2 and generated using batman (Kreidberg 2015). After simulating them at all the resolution elements sampled by NIRSpec/G395H, these were fitted in the exact same manner as we fitted our real transit lightcurves already described in Section 3.2 and the resulting transit depths were binned down to R = 100. The resulting transit depth errors from that experiment -depicted as blue points in Figure 12 -show that our actual NIRSpec/G395H precisions are better by about ∼ 10% than pre-flight expectations. The fact that our results are better than pre-flight expectations in the transit spectrum is interesting but within expectations given the better-than-expected throughput of the JWST NIRSpec/G395H configuration . Furthermore, as the above experiments and the ones shown in Section 3.2.1 show, we believe we are very close (if not at) the "photonnoise" (plus read-noise) limit, given how precisely the JWST pipeline predicts the actual lightcurve scatter at the resolution elements sampled by the instrument. 
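To make the simulation-based comparison above concrete, here is a minimal sketch of generating one noisy, limb-darkened transit light curve with batman; the orbital values, limb-darkening coefficients, and noise level are illustrative placeholders rather than the values used in the commissioning simulations:

```python
import numpy as np
import batman

# Transit parameters (illustrative placeholders, not the commissioning values)
params = batman.TransitParams()
params.t0 = 0.0                     # time of mid-transit [days]
params.per = 4.6276618              # orbital period [days]
params.rp = np.sqrt(6627e-6)        # Rp/Rs implied by a ~6627 ppm transit depth
params.a = 8.9                      # a/Rs (placeholder)
params.inc = 83.5                   # inclination [deg]; large impact parameter
params.ecc = 0.0
params.w = 90.0
params.limb_dark = "nonlinear"      # 4-parameter non-linear law
params.u = [0.4, 0.2, -0.1, 0.02]   # placeholder limb-darkening coefficients

t = np.linspace(-0.12, 0.12, 2000)  # days around mid-transit
model = batman.TransitModel(params, t)
flux = model.light_curve(params)

# Add white noise at a per-integration level (placeholder value; in the actual
# experiment this level came from the Pandeia noise products)
sigma = 400e-6
noisy_flux = flux + np.random.normal(0.0, sigma, size=t.size)
# noisy_flux would then be fit exactly like the real light curves (Section 3.2)
```

Simulated light curves of this kind, generated at each resolution element and fitted in the same way as the real data, are what the blue points in Figure 12 summarize.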
Lightcurve systematics

As shown in Sections 2 and 3, the second objective of our TSO observations, namely to confirm that the instrument can be calibrated to perform precise relative flux measurements such as those needed for transiting exoplanet observations, has been readily met, with the level of systematic effects on the TSO itself being very small. However, as shown in Section 3, the NRS1 detector seems to show an exposure-long slope which has no straightforward explanation. While it seems this might be a detector-level effect, it is unclear what step and/or calibration could be producing it. A superbias, background and/or dark-current-related effect, for instance, would cause a source of dilution at the precision level of the transit spectrum, but it would be hard for it to produce and/or significantly change the amplitude of an exposure-long slope. A linearity problem could, in theory, amplify an existing trend in the data as well (e.g., the trend observed in the NRS2 detector); however, below 2.9 µm and above about 3.6 µm, NRS1 has a very similar fluence level to that observed in NRS2. Looking at Figure 10, it is thus unlikely that this could be the effect producing the differences. In addition, an amplification and/or dilution source like the ones discussed would also imply the same impact on the transit spectrum; however, we do not see a significant difference between the transit spectra obtained from NRS1 and NRS2 in Figure 11. One possibility under study involves the fact that internal flat calibrations were performed up to two hours before the TSO exposure presented in this work. This could have filled some detector traps in NRS1, akin to those observed in, e.g., HST/WFC3 (Zhou et al. 2017), which are slowly being released during the TSO. This, however, does not directly explain why the observed slope in NRS1 is so different from that of NRS2, despite the fact that they are very similar detectors (with a gain difference of only about 10%). A possibility is that, while these detectors are of the same family, they do have different inherent properties, such as their responses to charge trapping and/or persistence. Other datasets obtained both with this mode and with NIRSpec/Prism should be investigated in detail to search for the repeatability of the slopes and the corresponding amplitudes observed in our HAT-P-14 b observations.

Figure 11. Transmission spectrum for HAT-P-14 b as obtained by our NIRSpec/BOTS G395H observations. The high-resolution spectrum has been binned to a resolution of R = 100 from 2.7 to 4.8 µm (white points with error bars); the weighted mean transit depth across the bandpass is 6627 ± 8 ppm (purple line with bands denoting 1, 3 and 5 sigma bands around this value). The gap in the middle, at about 3.75 µm, is due to the detector gap between NRS1 and NRS2 also seen in Figures 1 and 3. The spectrum is featureless (p-value > 0.1), as expected for a massive exoplanet like HAT-P-14 b, down to the precision at which we measure the transit spectrum, which is of 50-60 ppm at this resolution (see Table 2).

Figure 12. Our measured transit depth uncertainties (Table 2; white points) against pre-flight predictions from PandExo (grey points) and a simulation using Pandeia plus a limb-darkened transit (blue curve), a better "apples-to-apples" comparison to our observations. Our observations are about 10% better than these latter simulations.
Timing accuracy of JWST time-stamps As shown in Section 3.1 and presented in Table 1, our observations allowed us to obtain a precise combined time-of-transit between NRS1 and NRS2 of 2459729.706791 +0.000094 −0.000093 BJD TDB -an uncertaintiy on the timing of the event of only 8 seconds. While from the lessons learned in Section 3.2 an even better precision is achievable on this timing if the lightcurve analyses are performed at the resolution level of the instrument, this timing precision is enough to test the accuracy of the observatory time-stamps at the tens of seconds accuracy, at least as compared to another mission: TESS. Additional data to the one used in Section 3.1 was recently released by the TESS mission for HAT-P-14 b for sectors 52 and 53 which include a contemporaneous transit to that observed by our JWST/NIRSpec observations presented in this work. Using the same methods as presented in Patel & Espinoza (2022), we performed a joint fit of all the 2-minute cadence photometry from TESS, including Sectors 25, 26, 52 and 53. We retrieve transit parameters from those TESS observations consistent with those presented in Section 3.1 and Table 1, and obtain a transit ephemerides of P = 4.6276618 ± 0.0000018 days and t 0 = 2459766.72798 ± 0.00020 (BJD TDB). In particular, for the transit event observed by JWST/NIRSpec, the TESS observations observe a time-of-transit of 2459803.74927 ± 0.00022 BJD TDB, which is in excellent agreement with the timing of our JWST/NIRSpec observations -the difference between the two being 9 ± 19 seconds -consistent with zero at 1-σ. SUMMARY We have presented spectrophotometric TSO commissioning observations of HAT-P-14 b using NIR-Spec/BOTS onboard JWST, which we used to enable the mode for scientific usage. We measured a transmission spectrum of the exoplanet down to a precision of 50-60 ppm at R = 100, showcasing in turn the excellent stability and sensitivity of this instrument/mode to perform precise transiting exoplanet spectrophotometry. We find the above quoted precisions are, in turn, very close to pre-flight expectations. When compared against PandExo, these precisions seem to be about 20% larger than what the tool predicts, but we find most if not all of this difference is due to the assumption of a flatbottomed transit lightcurve by the tool. If we instead simulate limb-darkened transit lightcurves using the preflight noise expectations from Pandeia (which is the engine both PandExo and the JWST Exposure Time Calculator 6 itself uses), we find the observed transit depth errors are about 10% better than those simulations. While the trace position and FWHM remained fairly constant during our observations -down to 1/500th and 1/1000th of a pixel, respectively -we did find a low-amplitude exposure-long slope on the two detectors used to capture the spectra of HAT-P-14. In particular, the slope observed in NRS1 is much stronger and wavelength-dependent than the one observed in NRS2, and all evidence points to it being a detector-level effect. Investigations to understand this trend are currently ongoing. Through a series of experiments, we also found that 1/f detector noise, as expected from pre-flight experiments (Schlawin et al. 2020;Rustamkulov et al. 2022), seems to be an important component to pay attention to when studying JWST TSOs with NIRSpec/BOTS. 
In particular, it seems binning spectral channels causes a degradation on the signal-to-noise ratio that might be in part caused by the residual covariance this detector effect ingests into the lightcurves when binning. We found that the best way to work with JWST data from NIR detectors such as the ones in NIRSpec/BOTS is to work at the spectral sampling level of the instrument (i.e., at the column-to-column level). Then, observables such as, e.g., the transit depth as a function of wavelength can be binned in a post-processing stage. This methodology gave the most accurate and precise results during our commissioning analyses. We conclude that the two main objectives of these NIRSpec/BOTS TSOs have been met: our analysis shows that the instrument is properly working to measure the precise relative flux measurements implied by the technique of transit spectroscopy, and that indeed, the instrument can be calibrated to perform these precise measurements. JWST NIRSpec is thus ready to perform cutting edge exoplanetary science for Cycle 1 and beyond. All figures were prepared using Python 3.8.13 with the aid of packages matplotlib 3.5.2, NumPy 1.22.4, SciPy 1.8.1 and juliet 2.2.0. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #01118. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA's Science Mission Directorate. , and band-integrated light curve flux residuals (bottom) for both the NRS1 (blue) and NRS2 (red) time-series, normalized to the highest peak. The most obvious peaks accross the different PSD's are labeled with black dashed lines; the period of each of those peaks is indicated on the top panel. Note how most of the peaks in the band-integrated light curve time-series residual PSD don't really appear on the trace and the FWHM PSDs, with the exception, perhaps, of a peak at 3.4 minutes, where the FWHM also peaks. This is not a very significant peak in the band-integrated light curve PSD nonetheless (FAP > 0.1). APPENDIX A. PERIODOGRAM ANALYSIS OF THE TRACE, FWHM AND RESIDUAL FLUX TIME-SERIES As shown in Figure 2, the trace position movement is consistent between the detectors, and shows the same characteristics in time: an apparent break in the position about 1.5 hours since the start of the exposure, and some high-frequency variations. These high-frequency variations are also evident from the FWHM time-series. To characterize the features in these time-series we perform a periodogram analysis on the trace position time-series, as well as the FWHM and the residuals of the band-integrated light curve fits presented in Section 2. These are presented in Figure 13. FWHM (pixels) Figure 14. FWHM as a function of wavelength as measured from our G395H observations from NRS1 (left, blue) and NRS2 (right, red). Note how the FWHM quickly oscillates as a function of wavelength with an amplitude of about 0.6 pixels; we believe the oscillations are almost purely a detector sampling effect and not real variations caused by the optical elements of the G395H trace. 
The dashed line shows the lower envelope of this FWHM shape, which should be a good estimate of the real FWHM variation as a function of wavelength of the instrument (see text for details). As can be seen from Figure 13, the trace and FWHM power spectral densities (PSDs) have some peaks in common. In particular, the strongest signal at 6.5-minutes appears on both periodograms. While this same peak does not directly appear in the band-integrated light curve residual PSD (bottom panel of Figure 13) a peak in the NRS1 residual lightcurve does show up at about twice this period (12.89 min.). The 1.26-minute peak clearly observed on the trace PSD also seems to appear on the FWHM PSD, and not at all in the band-integrated light curve flux residual PSD. The FWHM does appear to have an extra peak in the PSD at about 3.4 minutes which seems to show up in the flux residuals too, although it's not very significant in this later PSD (we measure a False Alarm Probability, FAP, for this peak of > 0.1). The time-scales of the observed peaks in the trace and FWHM periodograms all seem consistent with the time-scales of the thermal cycling of heaters in the Integrated Science Instrument Module (ISIM) onboard JWST (Rigby et al. 2022). Whether this is an actual causal relationship is under investigation. B. FWHM VARIATION ACCROSS THE G395H TRACE A notable feature to describe that was observed in our G395H observations is the evident variation of the FWHM as a function of wavelength/position in the detector, which we believe is almost purely a sampling effect. To obtain the FWHM at each position along the trace (and hence, as a function of wavelength), we fit a spline to the cross-dispersion profile, subtracting the half-maximum of this spline to itself, and then searching for the roots of it to find the FWHM. The median FWHM as a function of wavelength (obtained by obtaining the median FWHM across integrations) as measured by our G395H observations of HAT-P-14 b are presented in Figure 14. As can be observed, the FWHM seems to vary quite rapidly throughout the detector with an amplitude of about 0.6 pixels. On average, the FWHM seems to slowly increase as a function of wavelength, with the amplitude of the modulation decreasing as a function of wavelength. Moreover, the lenghtscale at which these modulations occur seem to decrease towards the edge of the wavelength range, being larger in the middle. The behavior of the FWHM change across the trace is most likely a sampling effect related to the significantly tilted shape of the trace and the narrow shape of the PSF in the cross-dispersion direction, which makes the profile to be undersampled. There is ample evidence that this is the case in Figure 14 itself: the FWHM variation is much milder as a function of wavelength in the center of the wavelength range, where the trace is much less tilted than at the edges, where the variation is the strongest. We also visually inspected the cross-dispersion profiles at the peaks and valleys observed in Figure 14. There, we observed that indeed, on the peaks, where the FWHM appears to be larger, the trace position is right at the mid-point between two pixels, whereas when the FWHM appears to be smaller the trace is almost exactly positioned in the middle of a pixel. The above suggests thus that the best measurement of the FWHM as a function of wavelength when measured with the methods outlined above would be to measure the lower envelope of our retrieved FWHM as a function of wavelength. 
This envelope is presented with dashed lines in Figure 14, and is a fit to the local minima of the FWHM as a function of wavelength with a fourth-degree polynomial, which we observed gave an adequate fit to them (RMS of 0.0075 pixels):

FWHM(λ) = c_0 + c_1 λ + c_2 λ^2 + c_3 λ^3 + c_4 λ^4,

with the FWHM in pixels, λ the wavelength in microns, and the coefficients being (c_0, c_1, c_2, c_3, c_4) = (−4.6210, 6.1239, −2.2736, 0.3697, −0.0216). This implies the FWHM evolves from about 1.52 pixels at 3 µm to about 1.85 pixels at 5 µm for G395H.
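Evaluating this polynomial at the band edges approximately reproduces the quoted numbers; a minimal check (note that the printed coefficients are rounded, so the 5 µm value comes out slightly above the quoted ~1.85 pixels):

```python
import numpy as np

# Coefficients (c0..c4) of the lower-envelope FWHM fit, as quoted in the text
c = np.array([-4.6210, 6.1239, -2.2736, 0.3697, -0.0216])

def fwhm_pixels(wavelength_um):
    """Evaluate FWHM(lambda) = c0 + c1*l + c2*l^2 + c3*l^3 + c4*l^4, in pixels."""
    return np.polynomial.polynomial.polyval(wavelength_um, c)

print(fwhm_pixels(3.0))   # ~1.52 pixels
print(fwhm_pixels(5.0))   # ~1.87 pixels with these rounded coefficients
```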
Functional study and pathogenicity classification of PRRT2 missense variants in PRRT2‐related disorders Abstract Aims PRRT2 variants are associated with various paroxysmal disorders. To date, more than 90 PRRT2 variants have been reported in PRRT2‐related disorders. Lack of functional study in majority of missense variants makes their pathogenicity uncertain. We aim to evaluate the clinical significance of PRRT2 missense variants by performing in vitro experiments. Methods We systematically reviewed PRRT2‐related disorders and summarized reported PRRT2 missense variants. Protein expression and subcellular localization of mutant PRRT2 were investigated in mammal cells. American College of Medical Genetics and Genomics (ACMG) guidelines were used to analyze the pathogenicity of PRRT2 missense variants. Results A total of 29 PRRT2 missense variants were identified in PRRT2‐related disorders. Ten variants were observed to affect both subcellular localization and protein level, three variants only affect membrane localization, and two variants only affect protein level. According to ACMG guidelines, 15 variants were finally classified as “likely pathogenic”, three as “benign”, three as “likely benign”, and eight as “uncertain significance” variants. The likely pathogenic variants were concentrated in the C‐terminal of PRRT2. Conclusions The pathogenicity of eight uncertain significance variants needs further investigation. C‐terminal of PRRT2 is crucial for its physiological function. a variable frequency ranging from one per month to hundreds per day. Incomplete penetrance is usually observed in PRRT2 variant carriers. 6,7 The potential mechanisms of PRRT2 variants in PRRT2-related disorders remain largely unclear. PRRT2 consists of four exons and encodes a 340-amino-acid protein with two predicted transmembrane (TM) domains in the C-terminal and one proline-rich domain in the N-terminal. Enriched in cerebral cortex, cerebellum, substantia nigra, and hippocampus, PRRT2 protein was found to involve in synaptic transmission by modulating soluble N-ethylmaleimide sensitive factor attachment protein receptor (SNARE) complex. 8 To date, more than 90 variants in PRRT2 have been included in human gene mutation database (HGMD) and most of them are nonsense or frameshift variants, causing the truncation of the protein. Among them, c.649dupC (p.R217Pfs*8) is the most frequent variant, accounting for 78.5% of mutation carriers. 9 Truncated variants leading to conspicuous reduced protein level have been reported in various in vitro studies. 1,3,8,10 Besides the truncated variants, about 29 missense variants of PRRT2 are documented in PRRT2-related disorders. However, only three of them has been functionally studied. [11][12][13] Therefore, the clinical significance of these alleles of PRRT2 in paroxysmal disorders is difficult to evaluate. It is of great value to perform the functional studies to address the pathogenicity of PRRT2 missense variants. In this study, we summarized the reported missense variants in PRRT2 and performed functional experiments to investigate the alternation of subcellular location and protein expression of PRRT2 missense variants. We further assigned the pathogenicity of the missense variants according to the guidelines of American College of Medical Genetics and Genomics (ACMG). 
14 | Missense variants in PRRT2 To identify all the reported missense variants within PRRT2, we searched the HGMD (http://www.hgmd.org) that provided the most up-to-date version of PRRT2 mutation and the PubMed (https ://www. ncbi.nlm.nih.gov/pubmed) from November 2011 when PRRT2 was first reported as a disease-causing gene and up to December 2017. After that, the pathogenicity of PRRT2 missense variants was preliminarily evaluated according to the ACMG guidelines. | Cell culture and transient transfection HEK293T cells and HeLa cells were cultured in DMEM (HyClone) supplemented with 10% fetal bovine serum (Gibco) in a 5% CO 2 incubator at 37°C. Cells were seeded in suitable wells the day before transfection. Transient transfection was performed using the Lipofectamine 3000 according to the manufacture's protocol (Invitrogen). Forty-eight hours of cultivation was needed for the protein expression after transfection. | Western blot analysis To get the protein lysate, cells were rinsed with phosphate buffer saline (PBS) and harvested in lysis buffer. After centrifuging, the supernatants were collected. Western blot was performed as previously described. 15 The GFP (1:5000) and β-actin (1:5000; Sigma-Aldrich) antibodies were used. The blots were semiquantified by gel densitometry using the Photoshop. | Statistical analyses All the experiments were repeated at least three times independently. For the Western blot, proteins were normalized to β-actin. Differences in the mean values between wild-type or mutant PRRT2 were analyzed by one-way ANOVA using GraphPad Prism software. The decreased or abnormally localized mutant protein was considered functionally impaired. | Overview of the missense variants in PRRT2 After a systematic review of PRRT2-related disorders, we found a total of 29 missense variants in the literature (Table 1). Of which, 26 were heterozygous, two (p.P279L and p.R311W) were homozygous, and one (p.G305R) was described in both homozygous and heterozygous condition. 16 The predominant phenotype of these missense variants was PKD, while BFIS and ICCA were also documented. Two variants (p.P138A and p.D147H) had a minor allele frequency (MAF) ≥0.05 in a population database 17 and were predicted to be benign by functional software (Table 1). Another variant (p.P216L) was found to possess 5.2% of 115 controls in an Australian study, 18 although it was predicted to be deleterious (Table 1). Of note, these three variants were not co-segregated with the disease in the family pedigrees reported previously. 17 In addition, eight variants (p.R266W, p.S275F, p.A291V, p.G305R, p.A306T, p.S317N, p.G324E, and p.I327M) were reported to co-segregate with PKD or BFIS in multiple affected family members. [18][19][20][21][22][23][24] They were predicted to be deleterious by SIFT, Polyphen-2, and MutationTaster. All the eight variants but one (p.G305R) were absent in population database and our control. However, they were classified as uncertain significance variants for lack of sufficient evidence. Other 18 variants were also classified as uncertain significance variants combining the population MAF and prediction data. (Figure 1). | Missense variants decreased the protein level The remaining 17 variants, including the benign ones, had undifferentiated expression of PRRT2 protein as wild-type (Figure 1). p.G324E) lost membrane targeting and were located in the cytoplasm (Figure 2A,B), indicating the alternation of subcellular localization of these missense variants. 
The remaining 16 variants were still retained in the plasma membrane (Figure 2A,B). The red-fluorescent labeling of the plasma membrane is shown in Figure S1. | Classification of the missense variants in PRRT2 Decreased protein expression or alteration of subcellular localization was considered functionally impairing. We assigned the pathogenicity of the reported PRRT2 missense variants using these functional data. As a result (Table 1), 15 variants were classified as likely pathogenic, three as benign, three as likely benign, and eight as variants of uncertain significance. In our study, only C-terminal amino acid changes resulted in mislocalization. Possibly, the membrane orientation of PRRT2 is imposed by the C-terminal, especially the TM domain. | DISCUSSION In conclusion, this is the first assessment of all 29 reported PRRT2 missense variants according to the ACMG guidelines, which should be instructive for clinical molecular diagnosis. Missense variants can decrease the protein level and/or impair plasma membrane localization, and the C-terminal of PRRT2 is crucial for its physiological function. Functional studies are vital for variant classification, and the potential mechanisms associated with PRRT2 should be further explored. ACKNOWLEDGMENTS We sincerely thank the participants for their help and willingness to participate in this study. We also thank Novogene Company for sharing their whole exome sequencing data with us. CONFLICT OF INTEREST The authors declare no conflict of interest.
Analysis of the role of autopsy teaching in the context of precision medicine : Autopsy is the foundation of general anatomy teaching and learning, and an indispensable part of medical education, and anatomy is the cornerstone of medical courses. Autopsy is an indispensable and important way to improve the quality of anatomy teaching. Nowadays, with the application of computer technology, the rapid development of molecular biomedicine and the emergence of other reasons, the status and function of autopsy in anatomy teaching have changed to varying degrees. Over the years, the reform of autopsy teaching has been continuously developed and updated. How to improve the effectiveness of autopsy teaching is an important course that requires us to study carefully. In the information age, autopsy teaching is affected by modern teaching techniques and methods. In addition, the source of corpses is very scarce and difficult to preserve. Most colleges and universities gradually adopt different teaching methods to replace autopsy teaching. In the context of precision medicine, autopsy teaching is in an irreplaceable position. This article starts with the importance of autopsy in anatomy teaching, discusses the current situation and future development direction of autopsy in anatomy teaching, and provides necessary help for further deepening the reform of autopsy education and teaching. Introduction Precision medicine is a hot spot in current international medical research. According to the different characteristics of individuals, the disease is divided into different subgroups, and targeted prevention, diagnosis and treatment are carried out [1]. However, this policy has aroused great attention to the quality of future medical staff and patient safety. However, precision medicine is an emerging medical technology, so the current national policy of precision medicine does not have a clear governance implementation plan, which has caused many universities to have a less clear attitude towards the opening of the subject of anatomy, I don't know how to implement this choice [2]. Compared with other disciplines, anatomy pays more attention to actual hand operation. The autopsy teaching method can be said to be an important method to complete the teaching of anatomy, and it can also give medical students a more detailed introduction, and autopsy allows medical students to observe and discover the mysteries of the human body at close range, so any other teaching methods cannot to truly meet the needs of anatomy teaching, it cannot achieve the expected goal of anatomy teaching. Therefore, autopsy is of great significance in anatomy teaching [3]. Autopsy is abbreviated as postmortem examination, which is to identify and determine various pathological changes through the observation of the body surface of the corpse, the naked eye of the internal organs and the morphological examination of the microscope, and combine the clinical manifestations of the deceased before his death, laboratory examinations, and diagnosis and treatment processes, etc. The examination method to clarify the nature of the disease and the cause of death is a basic research method of pathology [4]. The actual operation of autopsy can also give students the opportunity to fully understand the human body and deepen the understanding of the relationship between the morphological structure and direction of each organ [5]. 
This not only consolidates the theoretical knowledge of anatomy, increases the interest and enthusiasm of students, but more importantly, provides students with opportunities for surgical instruments, training their surgery, observation and independent thinking and problem-solving abilities will lay the foundation for clinical work [6]. Autopsy can generally be divided into: general anatomy, forensic anatomy, and pathological anatomy. The purpose of autopsy has three main points: (1) confirm the diagnosis, find out the cause of death, and improve the level of clinical medical treatment; (2) discover infectious diseases and new diseases in time; (3) accumulate data and specimens for scientific research and teaching [7 ]. Engels once said: "There is no medicine without anatomy" [8]. The questionnaire survey by Pete et al. clearly shows that 71% of people believe that anatomy is the most basic and important teaching method in anatomy teaching [9]. At present, many medical schools at home and abroad have gradually or completely abandoned anatomy teaching. In order to enable major universities to better carry out the courses in this area, two measures can be taken as follows. First, some existing problems are reported to the college and related departments through various channels, so as to strive for more attention and policies from the college and related departments. With support from other aspects, special cases and special cases can be realized; Second, retired technicians and professors with rich experience in autopsy are regularly invited to teach by passing, helping, and leading [10]. At the same time, if funding permits, the operator will be subsidized to attend the pathology academic conference, so as to expand the field of vision and knowledge, and to understand new technologies and equipment. But in fact, the anatomy teaching model can enable students to master the complex relationship between new professional terms and human body structure. During the autopsy, students can discuss in groups, which also helps to develop communication skills among student groups. Consolidate the theoretical knowledge of anatomy and improve the practical ability The practice of autopsy can also give medical students an opportunity to fully understand the real human body, and deepen their understanding of the shape, structure and orientation of each organ. The demonstration is not a mechanical understanding of the structure, but the deeper meaning is to inspire students to think, link the structure and function of the human body, and explain the function on the spot, so that the students can deepen the understanding and memory of the human structure. For medical students to form an accurate concept, the posture of autopsy is different, and the difference must be clearly pointed out. When emphasizing specific operations, it should be emphasized that no matter how the specimen, model or patient is placed, the detailed location of each organ and structure should be described according to the anatomical posture. Teaching teachers should first use themselves to demonstrate, and then every student participates in it. Teachers visit and guide and correct students' mistakes and puzzles at any time. They not only play the leading role of the instructor, but also play the main operation role of the students, which not only enhances the awareness of the medical students to participate in the actual operation. 
The instructor's key introduction and accurate description of the location of the anatomy will help students form the concept of accurate autopsy operation. Strengthen medical ethics education The remains of the deceased donor became the first step for medical students to enter the medical world. Those donors are the noblest, the most selfless, and the most respectable. Their selfless dedication has benefited medical students for life. Before performing an autopsy, medical students should express their respect to the donor in some ways, such as holding a simple silent mourning ceremony, keeping the operation process serious, and after the body is used up, a memorial ceremony including the family of the deceased will be held. Respect, sympathize and protect the body through zero-distance contact with the body. These ceremonies are also the teaching part of academic education. To sum up, autopsy is still the main teaching method of human anatomy in the future, but teaching based on autopsy alone cannot meet all the needs of current medical courses, and must be supplemented by updated and advanced teaching methods or learning methods. This interdependence between different teaching methods and autopsy constitutes a new type of teaching mode, which may be the future development direction of anatomy teaching. Increasing inter-professional education and providing opportunities for real learning Team-based autopsy activities have also been used to explore the field of interprofessional education (IPE). The implementation of interprofessional education is related to the development of students' cooperative behavior. In other words, inter-professional education helps promote the development of collaborative skills in the workplace, thereby ensuring that clinical errors are reduced and patient treatment outcomes are improved. In the research of the interdisciplinary autopsy course. Autopsy can promote cross-professional learning of medical students. These are all important topics in medical education. Anatomy educators all believe that autopsy teaching is superior to other teaching methods for anatomy teaching. They agree that autopsy teaching helps students to learn anatomy and develop their own skills during clinical work in the future. Autopsy teaching provides medical students with the opportunity to receive "real learning" on a regular basis. Its comprehensive learning model includes teaching methods such as autopsy, pathology, clinical evidence, imaging, etc., which provides strong support for "real learning". Subject A total of 50 fresh autopsy specimens were collected, including 5 fetuses, 7 newborns and 38 adults for autopsy. The youngest is 30 weeks old, and the oldest is 91 years old. Collect heart, lung, liver, spleen, kidney and brain tissue from each autopsy specimen according to the needs of autopsy. And remove the umbilical cord tissue from the fetal specimen. 12 specimens of fetus and newborn; there are also 38 specimens of adult cadavers. In order to study the operation of autopsy and show the specific theories of the autopsy course for medical students, the teacher led the students to conduct experiments in groups of these 50 autopsy specimens, personally perform the autopsy operation, and observe and experience the process at close range. In the process of anatomy, the leading teacher needs to explain each step in detail, and introduce the structure of the human body and the functions and medical terms of each part of the human body in detail. 
The heart, lung, liver, spleen, kidney, and brain tissues of these 50 cadavers were dissected. These tissues need to be explained in detail, and then experiments are carried out, and the instructional experiment is personally conducted to stain CD105, so as to make the medical students' practical ability Get promoted, and finally lead medical students to complete this experiment. Experimental method (1) Quantitative analysis method: This article collects a large amount of data through WTO, UNCTAD, WORLDBANK and other databases, and conducts a quantitative analysis of Chinese and international autopsy teaching systems, and provides a basis for related research. (2) Analytical method: This paper analyzes the collected data to find the correlation between the role of precision medicine and autopsy teaching in my country, and uses standard analysis to evaluate the role of autopsy teaching in the context of precision medicine. (3) Result comparison method: This article studied the anti-human CD 105 Mab E9 staining results by dissecting the above 50 autopsy cases and taking out the dissected samples to observe and compare the content of anti-human CD 105 Mab E9 in different organs in the human body. Investigation and analysis of experimental data In 50 frozen sections stained with CD105, Mab-E9 positive cells were mainly distributed in the EC cell membrane and cytoplasm, and the cell membrane was stained more than the cytoplasm. Collect heart, lung, liver, spleen, kidney and brain tissues from each anatomical specimen, and extract umbilical cord tissue from fetal specimens. When collecting specimens, make sure to perform an autopsy within 48 hours after death, and prepare materials, or put the specimens in a sterile cryotube for storage in liquid nitrogen within 2 hours. All tissue specimens were washed with 0.01M PBS, and then made 5 7~8um continuous frozen sections on a thermostat. In the brain, heart, liver, spleen, lung and kidney, the positive staining of CD34 is very weak, and only some capillaries, arterioles and veins in the matrix are weakly stained. In the brain, heart, liver, spleen, lung and kidney, some capillary strips and small arteries and veins were positively stained. Table 1 and Table 2 show the results of CD105 positive expression in 50 autopsy specimens. Table 1. CD105 staining results of brain, heart, lung, liver, spleen and kidney tissues of 12 fetuses and newborns Group Capillaries Small blood vessels It can be seen from the above table that the expression of CD105 monoclonal antibody E9 on capillaries and small blood vessels during the preparation of viscera specimens is significantly different from the row×list test. The actual results showed that in the brain, heart, liver, spleen, lung and kidney tissues, the EC positive rate of CD105-labeled capillaries were better than CD34, but there was no significant difference between CD105 and factor VIII-labeled EC positive rates. The EC difference of blood vessels in other organs is statistically significant. However, there are significant differences between CD34 and factor VIII markers, and there is no significant difference in the expression of EC in other organs. 
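The "row×list test" mentioned above reads as a row-by-column (contingency-table) chi-square comparison of positive-staining counts between markers. As an illustration only, with purely hypothetical counts that are not the study's data, such a test can be run as follows:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of vessel counts (placeholder values, NOT the study data):
# rows = marker (CD105, CD34), columns = staining (positive, negative)
table = np.array([[46, 4],
                  [28, 22]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value)   # p < 0.05 would indicate a significant difference between markers
```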
CD34 staining of liver and spleen tissues is very weak, sinusoidal EC is slightly stained, only a few interstitial capillaries and small arteries and veins are stained and lightly stained; a few capillaries in various tissues of brain, heart, liver, spleen, lung and kidney The staining results of umbilical cord tissues were similar to CD1005 monoclonal antibody E9 staining results. Conclusion The development of anatomy has gone through centuries. Long-term medical practice has proved that the existence of an autopsy is necessary. Even if medical students do not perform autopsy surgery, even if they receive more advanced technical training, it is difficult for medical students to perform surgery, examination and treatment in the clinic. For students who are new to medicine, learning this course is essential. Anatomy is a necessary teaching method to improve the quality of anatomy learning. Autopsy not only allows medical students to truly understand the structure and organization of the human body,but also consolidates the theoretical knowledge learned in class and in books.In addition it can also be used for actual operations in the above class or demonstrations by the teacher to mobilize medical students to teach anatomy. Interest can cultivate cultivate students' independent thinking and independent hands-on ability in actual operation and later clinical surgery. We also need to gradually explore more effective solutions to make anatomy teaching popular. Scientific experiments have shown that in the speed of information reception and processing, the visual center is significantly faster than the auditory center, with a speed difference of more than 500 times. Three quarters of the human brain is used for vision. In fact, it uses the "brain" to observe the world through visual images. The ability of the right brain to remember graphics is much higher than that of the left brain. Once you remember this number, it is difficult to forget it, even if you forget it, it is easy to remember it after reading it. Using the sensitive characteristics of the human brain to graphics, students can observe more specimens and graphics. When reviewing, they will reproduce specimens and characters in their minds, that is, use image thinking to deepen their memory. In short, autopsy is of great significance to anatomy education and teaching, and will play an important role in the development of the future medical profession.
CRITICAL FACTORS AFFECTING LABOR PRODUCTIVITY WITHIN CONSTRUCTION PROJECT IMPLEMENTATION: A PROJECT MANAGER’S PERSPECTIVE The present study aims to identify critical factors affecting labor productivity within the construction project implementation from the project manager’s viewpoint. By a comprehensive review of the previous studies, this study identified 45 critical factors impacting construction labor productivity, which were grouped as primary 6 groups include: manpower, management, work condition, project, and external factors. A total of 56 valid samples were collected by 65 project managers’ respondents who completed a structured questionnaire survey according to their previous participation in or directly implementation construction projects. These critical factors were ranked based on their relative important index and descriptive statistics (i.e. mean and standard deviation). The analysis of the identified critical factors indicated that the most significant critical factors impacting construction labor productivity are ‘ability of construction management, ‘financial status of stakeholders’, ‘work discipline’, ‘design changes’, ‘timeliness of remuneration’, ‘economic conditions’, ‘lack of supervision’, ‘accident’, ‘availability of labors’, and ‘availability of materials’ Introduction For the economy of any country, the value of the construction industry has a significant contribution to the nation's gross domestic product. Despite the applied technological advancements, the construction industry remains a human-intensive industry (i.e. dependent upon effort and performance of the workforce) (Jarkas et al., 2014). Therefore, labor productivity plays a key role in assessing the success of construction projects which reflects the significant effect of this resource in the construction sector, meaning that any enhancement in labor productivity will contribute a high deal to enhance the project effectiveness (i.e., quality, cost, and time performances) (Mahamid, 2013b). In many countries, the construction labor cost would account for between 30% and 50% of the total cost of a construction project, so construction labor productivity as a determinant impacting almost construction projects' profitability (El-Gohary and Aziz, 2014, Hanna et al., 2002, McTague and Jergeas, 2002. Improving labor productivity seems is one of the most important objectives for any organization due to the fact it displays the efficiency and effectiveness conversion of sources into marketable merchandise and it determines commercial enterprise profitability (Wilcox et al., 2000). In this regard in the field of construction, many researchers have been conducted to purpose improvement labor productivity of construction practitioners (i.e., construction managers, engineers, architectures, and builders). Poor construction labor productivity is a major cause of influencing the quality, duration, and cost of construction projects (Mahamid, 2013b). Also, previous studies indicated that the loss of construction labor productivity is affected by various factors related to workforce, management, equipment and tools, materials, technology, and environment (Mustapha and Naoum, 1998, Van Tam et al., 2018, Enshassi et al., 2007, Alaghbari et al., 2019. However, perception of what factors affecting construction labor productivity may different depending on the roles of respondents in the construction project implementation (Perera et al., 2014). 
Therefore, the aim of this study is that identify and assess the critical factors impacting labor productivity within construction investement project implementation on basis of the awareness of project managers. The findings are expected to build a platform to implement better appropriate tasks towards improving construction labor productivity. Literature Review Productivity has been calculated is the ratio of the produced-outputs to the inputs that used to create the outputs (Coelli et al., 2005). In the construction context, construction labor productivity has been calculated is the rate of between the work units accomplished (i.e., outputs quantity) and the work hours (i.e., inputs for labors) Hosseini, 2012, Enshassi et al., 2007). To enhance construction labor productivity, identifying critical factors that have influence labor productivity in the organization of construction projects is necessary. Therefore, various factors impacting labor productivity in the construction sector have been proposed and categorized by numerous scientific researchers from different countries as represented in the previous researches. The studies of (Lim and Alum, 1995) conducted in Singapore that indicated that the most factors impacting construction labor productivity include difficult with the manpower recruitment, on-site supervisors recruitment, high labor income rate, construction site absenteeism, and problems of communication with oversea construction workers, whereas, in Saudi Arabia, top five factors include workforce experience and skills, lack of communication by construction stakeholders, poor relations between employees and their managers, timeliness of remuneration, and schedule delay . In Iran construction industry, (Zakeri et al., 1996) stated that lack of materials, severe weather and on-site conditions, low quality of equipment and tools, drawing quality, change orders, and proper equipment shortages which were the most factors impacting labor productivity, while, as shown in the research of (Jarkas et al., 2012), the top critical factors that have an important effect on construction labor productivity in Qatar such as supervision, labor skills, lack of materials, lack of experienced labor, communication, shortage of leadership of construction managers, high-temperature, delays in responding to "Requests For Information", shortage of providing labor with transportation, and percentage of work subcontracted. In many years, the topic of factors influencing construction labor productivity has been concerned by numerous researchers. Consequently, various critical factors influencing construction workforce productivity have been demonstrated and grouped by lots of studies from many countries. However, the frequency and impact levels of these critical factors quite different from project to project or nation to nation, and even in the same construction project, depending on specific situations (Olomolaiye et al., 1998). Therefore, a task to divide these factors toward major global categories, it may relate and enclose to the numerous factors is critical. Based on referencing and considering previous studies, the present study synthesized critical factors impacting labor productivity in the construction implementation. 
A total of 45 critical factors influencing labor productivity within construction project implementation were identified and divided into six categories as follows: (1) manpower (7 factors), (2) management (13 factors), (3) motivation (8 factors), (4) work condition (5 factors), (5) project (7 factors), and (6) external (5 factors).

Research Methodology

The present study was carried out based on a questionnaire survey aimed at effectively collecting all the necessary data. As mentioned above, a total of 45 factors that impact labor productivity within the implementation of construction projects were identified. These factors were then tabulated in the form of a questionnaire. The questionnaire survey contained two major parts. The first part collected demographic information on the participants (i.e., education levels, qualifications, positions, and professional experience), whose main objective was to characterize the participants in order to ensure the reliability of the study outcomes. The second part contained the list of the identified factors. To collect the needed data, participants were surveyed on the basis of their previous participation in, or direct implementation of, construction investment projects in Vietnam. Based on their experience, they assessed the impact degree of the critical factors on construction labor productivity following a 5-point Likert scale (1-Very low effect, 2-Low effect, 3-Moderate effect, 4-High effect, 5-Very high effect). The relative importance index was computed as

RII = Σ (Wi × Xi) / 5     (1)

where Wi is the rating given to a factor by the respondents, ranging from one to five; Xi is the proportion of respondents giving score i; and i is the order of the score, ranging between one and five. Responses from the first part were obtained through the appropriate response choice. In the second part, participants needed to assess the factors that influence construction labor productivity on a Likert scale from 1 (very low influence) to 5 (very high influence). The RII index is applied to evaluate these factors influencing construction labor productivity as perceived by the participants and, therefore, a comparative analysis is possible. The case-specific data were collected from respondents engaged with construction projects in Vietnam and working as project managers. A total of 73 questionnaires were distributed by email and face-to-face interviews. Only 65 answers were received, of which 56 were qualified responses for research, representing an effective rate of 76.7%.

Results and Discussions

In the present study, two software applications were applied to examine the findings: MS Excel 365 and SPSS 22. A total of 45 critical factors affecting labor productivity within construction project implementation were identified and ranked on the basis of their descriptive statistics (i.e., mean and standard deviation) and the RII index. Table 1 indicates the ranking of the 7 factors related to the manpower category. The statistics from the project managers' responses indicate that 'work discipline', with RII=0.789, was ranked 1st in this group and 3rd in the overall ranking (Table 8), which shows that this factor has a very high impact on labor productivity within construction project implementation. This finding is in line with some previous studies (i.e., Van Tam et al., 2018, Durdyev and Mbachu, 2011, Enshassi et al., 2007, Gerges et al., 2011).
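For concreteness, a minimal sketch of how the RII of Equation (1) can be computed from raw 1-5 ratings is given below; the ratings used are random placeholders, not the actual survey responses:

```python
import numpy as np

def relative_importance_index(ratings):
    """RII = sum_i (W_i * X_i) / 5, with W_i the score (1..5) and X_i the
    proportion of respondents giving that score; equivalent to mean(rating)/5."""
    ratings = np.asarray(ratings)
    scores = np.arange(1, 6)
    proportions = np.array([(ratings == s).mean() for s in scores])
    return np.sum(scores * proportions) / scores.max()

# Placeholder example: 56 hypothetical responses on the 5-point scale
example = np.random.default_rng(1).integers(1, 6, size=56)
print(relative_importance_index(example))   # value between 0.2 and 1.0
```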
With RII=0.764, 'labors' experience and skills' factor was ranked 2 nd in this group and assessed 12 th among all 45 factors, which indicates that the factor has a high influence on labor productivity. Followed by 'age of labors' (RII=0.743), 'strength and physical of labor' (RII=0.743), 'Labor absenteeism' (RII=0.739) was ranked 3 rd , 4 th , and 5 th respectively in the category. Finally, 'labor's education level' (RII=0.707), and 'personal problems' (RII=0.686) was assessed at the end of manpower group, and ranked 33 rd , 39 th in overall ranking respectively, which shows that these factors have a low impact on labor productivity within construction project implementation. Management factors group The ranking of 13-factor under management category was provided in Table 2, with RII=0.814, the surveyed project managers ranked 'ability of construction management' it the 1 st in this group. This factor was also assessed is the first factor among 45 critical factors, which proves that this factor has a very high effect on labor productivity within construction project implementation. This evidence was further supported by (Ghahramanzadeh, 2013), who stated that project managers' incompetence is one of the serious issues barrier the labor productivity improvement in the Iran construction sector. The ranking is also in line with the study by (Ghoddousi and Hosseini, 2012), which showed that the competence of project managers as an important factor impacting the labor productivity of construction projects. With the RII ranging between 0.796 and 0.768, four factors have a significant effect on construction labor productivity such as 'financial status of stakeholders', 'lack of supervision', 'availability of labors', and 'availability of materials' which ranked 2 nd , 3 rd , 4 th , and 5 th in this ground and evaluated 2 nd , 7 th , 9 th , and 10 th among al critical factors, in turn. In fact, construction activities are implemented with many resources, one of which financial, labors, materials play an important role. Many buildings needed a large amount of capital and almost of contractors perceive it exceptionally troublesome to bear the high daily execution expenses in the case of laborers' salaries are delayed (Mahamid, 2013a). The outcomes of researches (i.e., (Hai and Van Tam, 2019, Alinaitwe et al., 2007, Kadir et al., 2005, which demonstrated that the limitation of finances as a problem in improving labor productivity. With the RII ranging between 0.704 and 0.675, the surveyed project managers ranked three factors are 'working overtime', 'communication', and 'construction methods' at the end of this group, which reveals that three factors have a very low influence on construction labor productivity. Table 3 provides the ranking of factors relevant to the motivation category, 8 critical factors are identified under this group. The surveyed respondents ranked factors of 'timeliness of remuneration' (RII=0.782) and 'amount of remuneration' (RII=0.764) were ranked the first and the second in this category, and assessed the 5 th , and 11 th overall ranking, respectively. The finding indicates that these factors as determinants impact on construction labor productivity. 
This ranking was supported by the studies of (Ghoddousi et al., 2014, Tabassi andBakar, 2009), which explained that managers are the good perception that construction craftsman still has to face with low salary, which has been identified as a problem in many countries and late payments have a dramatic effect on the main aspects of productivity in the construction sector (Tam et al., 2004, Jarkas and Radosavljevic, 2013, Perera et al., 2014, Kaliba et al., 2009). With the RII ranging between 0.761 and 0.736, three factors have a significant impact on labor productivity in construction project implementation such as 'work satisfaction', 'promote opportunities', and 'rewards/punishments' which ranked 3 rd , 4 th , and 5 th in this category and 13 th , 16 th , and 26 th among 45 critical factors, respectively. Finally, factors of 'motivation of laborers', 'lack of labor recognition programs', and 'creating competition' were ranked at the end in the motivation group, with RII are 0.718, 0.675, and 0.668 respectively. The ranking reveals that this 3-factor has a low effect on construction labor productivity. Work condition factors group As demonstrated in Table 4, with RII=0.786, the analysis result indicated that 'accident' factor was ranked the 1 st in this group and the 8 th overall ranking, which shows that the factor has a very high impact on labor productivity within construction project implementation. Followed by the factor of 'healthy and safety conditions' (RII=0.754) was assessed the second in the work condition group and 14 th among all factors, which reveals that this factor as a determinant on construction labor productivity. The evidence in the line with the outcomes of some previous studies (i.e., (Ghoddousi et al., 2015, Ghoddousi andHosseini, 2012), which explained that the construction industry is knowns for its poor working conditions and the adoption of health and safety measures in several developing countries. In this category, the factor of 'working security' (RII=0.746) was ranked the 3 rd in this category and 15 th among 45 factors, whereas 'working space' (RII=0.707) was ranked the 4 th under work condition group and 25 th overall ranking. With RII=0.682, 'height of work site' was evaluated at the end of this group and 40 th among all factors. This ranking indicates that the factor has a very low impact on construction labor productivity. Table 5 demonstrates the ranking of factors relevant to the project group, 7 critical factors are identified under this category. With RII=0.786, the surveyed respondents ranked 'design changes' is the 1 st position in this group and the 4 th among 45 critical factors, which indicates that this factor has a significant impact on labor productivity within construction project implementation. This ranking was further supported by the study of (Enshassi et al., 2007), which demonstrated that specification alteration during the construction project organization was the primary factor impacting labor productivity. With the RII ranging between 0.739 and 0.714, three factors have a significant effect on construction labor productivity such as 'effective project s', 'drawing quality', and 'project location' which ranked 2 nd , 3 rd , and 4 th under project category and assessed 20 th , 23 th , and 30 th overall ranking, respectively. Finally, critical factors such as 'design complexity' (RII=0.711), 'sub-contractor' (RII=0.689), and 'project type' (RII=0.650) were ranked at the end of this category. 
The ranking reveals that these factors have a low influence on labor productivity within construction project implementation. ENTREPRENEURSHIP AND SUSTAINABILITY ISSUES ISSN 2345-0282 (online) http://jssidoi.org/jesi/ 2020 Volume 8 Number 2 (December) http://doi.org/10.9770/jesi.2020.8.2(45) The results of Table 6 indicate that 5-factor of the external group has been ranked by the RII index under perceptions of project managers. The surveyed respondents evaluated 'economic conditions' (RII=0.779) was the first in this category and ranked sixth overall ranking, which indicates that the factor as a determinant having a significant influence on labor productivity in the construction project implementation. Followed by factors of 'weather conditions'(RII=0.736), and 'regulation and law' were assessed the second and third positions in the group and 24 th and 28 th among 45 identified factors, respectively. The finding was supported by studies of (Kaming et al., 1997, Van Tam et al., 2018, which demonstrated that construction activities are significantly impacted by weather conditions. Finally, with RIIs are 0.696 and 0.675, 'social culture' and 'geological and hydrological conditions' were ranked at the end of this group, which proves that these two factors have a low impact on construction labor productivity. Overall ranking critical factors influencing labor productivity within construction project implementation The overall perceived impacts of all 45 factors were shown in Table 7. As provided, the top five ranking critical factors influencing labor productivity within construction project implementation as follows: 'ability of construction management, 'financial status of stakeholders', 'work discipline', 'design changes', 'timeliness of remuneration', 'economic conditions', 'lack of supervision', 'accident', 'availability of labors', and 'availability of materials'. The ranking reveals that the top ten factors have a significantly important impact on construction labor productivity. Conclusions and recommendations The present study aimed to identify a total of 45 critical factors influencing labor productivity within construction project implementation, which were grouped into the main 6-category that are manpower, management, work condition, project, and external factors. The data was collected by 56 valid surveyed questionnaires with participants of construction project managers, and critical factors were ranked based on their RII index and descriptive statistics. The results highlight the primary factors impacting construction labor productivity in construction projects as perceived by project managers, including 'ability of construction management, 'financial status of stakeholders', 'work discipline', 'design changes', 'timeliness of remuneration', 'economic conditions', 'lack of supervision', 'accident', 'availability of labors', and 'availability of materials'. On the basis of the findings, the following recommendations are suggested as a way to improve labor productivity within the construction project implementation. 1. Project management unit should encourage construction project managers to learn practical skills and real experience about construction management through programs of regular training the help them to keep up to date and aware of valuable project management skills that have to be enhanced. 2. 
The project management unit should create workshops and training courses to help project managers to improve the managerial experience and skills as well as keep management activities on construction sites to enhance quality and prevent incorrect productions. 3. It is necessary owners should pay progress payment to contractors on time because it affects the contractors' ability to finance the work, leading to a shortage in materials and delay payments to laborers which affect their motivation to work. 4. The project management unit should introduce regulations and rules in the working environment to control the work discipline of the construction workforce. Besides, it is necessary to create recognition programs (i.e. rewards or punishment) to encourage laborers to keep their discipline on site which also makes significant restriction accidents in the construction project implementation. 5. Project managers should supervise and control materials supply for each specific construction project. This schedule should involve the time required to supply materials and the materials available on the local market to supply the required materials in time. In addition, the project management unit should require contractors should also select a reasonable storage location for purchased materials in each project, which should be easily accessible and close to implement projects and to avoid wastage of labor time for multiple-handling materials. 6. The project management unit should reduce design changes by the way that strongly controls the quality of drawings at the design stage order to explore errors or conflicts which can restrict reworks. Also, applying for construction management technology advances (i.e., building information modeling-BIM, scan to BIM,...) in the construction project implementation should be encouraged which can lead to improving project performance and profit maximum. Although some results have been concluded from the present study, the authors encourage other researchers to replicate this topic in many different areas, countries, so that the important factors revealing elsewhere, and the bases platform the related findings can further support to the comprehensive theoretical understanding of the more complex problems of the construction labor productivity topic and the critical factors related with specific socioeconomic conditions and cultural backgrounds.
2020-11-26T09:07:32.527Z
2020-12-30T00:00:00.000
{ "year": 2020, "sha1": "49f8f1840a0ae9cc5a844d8d62d8acf23883c4f8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.9770/jesi.2020.8.2(45)", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bf79e50681453bfb0647fcf38310e025729c639a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Business" ] }
14173505
pes2o/s2orc
v3-fos-license
Microdamage Repair and Remodeling Requires Mechanical Loading Bone remodeling is necessary to avoid microdamage accumulation, which could lead to whole-bone failure. Previous studies have shown that this bone-repair mechanism is triggered by osteocyte apoptosis. Through the use of a rodent hindlimb suspension model and tibial four-point bending model, the effects of disuse on microdamage remodeling was examined. At day 0, male rats were assigned to one of three groups: weight bearing (WB), hindlimb suspension (HS), or hindlimb suspension with daily intermittent weight bearing following damage-inducing loading (HW). Within each group, the rats were further divided into subgroups corresponding to three sacrifice time points [day 14 (WB and HS only), day 18, or day 35]. At day 14, animals were anesthetized, and their left tibiae underwent cyclic four-point bending to produce fatigue-induced microdamage. At sacrifice, the tibiae were examined using 3D micro-computed tomography (µCT), flow cytometry, and histologic and immunohistochemical stains. The results indicate that only the WB and HW groups had a significant increase in intracortical TRAP-positive resorption pits following damage induction, which was paralleled by a significant decrease in microdamage over time in combination with a shift in the osteoclast lineage owing to a decrease in monocytes. These results demonstrate that osteocyte apoptosis may be insufficient for repair of microdamage without the stimulation provided through physiologic loading. In addition, this potentially could have clinical implications for the current therapeutic paradigm for treating stress fractures, where extended non-weight bearing is employed. © 2010 American Society for Bone and Mineral Research. Introduction O steoporotic fracture is a common and expensive health care problem, with 1.5 million fractures in the United States per year, with a predicted cost of $60 billion annually in the United States by 2025. (1) Yet the factors responsible for susceptibility to fracture remain incompletely understood. The existence of microdamage within bone has been reported to induce localized osteocyte apoptosis surrounding individual microcracks, (2,3) which subsequently leads to targeted remodeling. (2,(4)(5)(6)(7)(8)(9) It has been proposed that whole-bone failure in osteoporosis may be a result of positive feedback between microdamage and the resulting remodeling that attempts to repair the damage. (10) Microdamage results in a loss of mechanical integrity of the bone tissue, followed by a potentially greater loss in continuum-level bone strength and/or stiffness owing to resorption at the beginning of the remodeling cycle. The reduced stiffness and strength may result in further damage or overt failure at lower loads than those required in the original intact bone, resulting in a positive-feedback process. This potentially could have important clinical implications. The relationship between microdamage accumulation and resorption may explain a portion of the increase in fracture risk in the elderly population. Hence whole-bone fracture risk potentially may increase if remodeling is altered. Aside from the alterations in remodeling activity owing to age, (11) bone loss associated with aging also may result from disuse owing to reductions in physical activity or infirmity. 
Disuse models such as prolonged bed rest have shown that urinary levels of bone-formation markers decreased, whereas resorption markers and resistance to insulin-like growth factor 1 (IGF-1) increased, leading to cortical and cancellous bone loss. (12,13) Similar effects have been found in astronauts during long-term exposure to microgravity, during which disuse of the weight-bearing limbs is prevalent. (14,15) To simulate the disuse condition of bed rest and weightlessness, several groups have developed rodent hindlimb suspension models (16)(17)(18) that induce similar effects, such as increased resorption and decreased formation, (19) increased resistance to IGF-1, (10,20,21) and significant reduction of blood flow. (22) Disuse hindlimb suspension models also have been shown to decrease interstitial fluid flow owing to decreased pressure gradients. (23) Several studies suggest that convective transport by means of loadinduced fluid flow may be necessary to provide sufficient transport of larger molecules such as proteins to and from osteocytes. (24,25) The decrease in interstitial fluid flow possibly could contribute to a lack of osteoclast activation during hindlimb suspension. It therefore was hypothesized that the removal of functional load would reduce or inhibit targeted remodeling. Previous studies have shown that supine weight-bearing exercise within a lower body negative chamber (LBNP) counteracts bone loss associated with long-term bed rest. (26,27) However, daily standing for 1 or 2 hours per day during 28 days of hindlimb suspension does not alter the deterioration of cortical bone. (28) Early clinical evidence for recovery of bone repair in a disuse setting with moderate physiologic loading comes from the treatment of running-related stress fractures. Previous treatment methods for stress fractures associated with long-distance running prescribed up to 12 weeks of therapy (dominantly non-weight bearing) before returning to a normal running schedule. (29) However, a recent study (without an experimental basis) decreased the recovery period by implementing earlier cross-training and enabled the athlete to return to function in only 7 weeks. (30) It therefore also was hypothesized that moderate physiologic loading could rescue the potentially impaired microdamage repair process during disuse. The purpose of this study thus was to examine the effects of disuse and intermittent weight bearing on bone remodeling in response to microdamage, potentially providing clinically important insight into the relationship between microdamage accumulation and increased fracture risk in states of disuse. Animals Male 6-month-old adult Sprague-Dawley rats (350 to 450 g) were obtained from Harlan Laboratories (Somerville, NJ, USA). Animals were allowed to acclimate to our animal facility for at least 3 days before being included in the experiment. The procedures used in this study were approved by the University Committee on Use and Care of Animals at the University of Michigan. Animals were housed in individual nonventilated cages in a temperaturecontrolled room (68 to 728F) with a 12/12 hour light/dark cycle. Water and rat chow were provided ad libitum. 
Four-point bending used as a fatigue model In order to induce microdamage in the tibiae of hindlimbsuspended and weight-bearing animals, two criteria needed to be fulfilled: (1) The model would have to be able to induce repeatable amounts of microdamage noninvasively within a moderate amount of time (1 to 2 hours), and (2) the model could not cause any alterations in animal behavior after loading (i.e., the animals must regain full usage of the hindlimbs shortly after loading). The model chosen was based on the four-point bending model developed by Turner and colleagues. (31) In order to determine the initial effects of hindlimb suspension in combination with loading, two male 8-month-old Sprague-Dawley rats were hindlimb suspended for 14 days. At day 14, the rats were anesthetized, and the left tibiae were loaded for 7200 cycles at 2 Hz using a sinusoidal waveform with a peak load of 107.8 N (DLoad ¼ 50.2 N), resulting in a maximum lateral strain of -7000 mstrain (see ''In vivo strain gauge calibration for load parameters'' below for further details). The 7000 mstrain level was chosen owing to the inability to achieve fatigue failure at a 4000 to 6000 mstrain level within 2 hours (done in another cohort). A 7000 mstrain level was found to achieve fatigue failure at approximately 1.5 to 2 hours. Hence the loading protocol was chosen to be 1 hour at this strain level. After loading, the animals were hindlimb suspended for an additional 3 days to observe any behavioral changes owing to loading and subsequently were euthanized. At sacrifice, the tibiae were dehydrated, stained with basic fuchsin, embedded in polymethyl methacrylate (PMMA), and sectioned at the region of interest (center at 8 mm proximal to tibia-fibula junction with a total length of 5 mm). The sections were examined using confocal microscopy. Confocal microscopic examinations revealed that significant damage could be induced at the region of interest compared with the nonloaded control tibia (determined qualitatively). In addition, no abnormal behavior was observed after loading. The high strain levels chosen (compared with 4000 mstrain) might induce more osteocyte apoptosis. However, since we are examining the effects of microdamage and thus, in effect, the response to osteocyte apoptosis, any additional apoptosis to potentially elevated strain levels was deemed acceptable. In addition, since the loading parameters would be similar between any experimental groups, any additional apoptosis should be similar in magnitude. In vivo strain-gauge calibration for load parameters In order to determine the load parameters required to induce a strain level of -7000 mstrain at the lateral side of the middiaphysis of the tibia undergoing four-point bending, the strainapplied force relationship was determined for six 8-month-old male Sprague-Dawley rats that were split into two groups: (1) hindlimb suspension (HS) for 14 days (n ¼ 3) and (2) normal weight bearing (WB) for 14 days (n ¼ 3). At day 14, all rats were anesthetized, and a small incision was made at the lateral side at the mid-diaphysis of the tibiae. This allowed strain gauges to be placed bilaterally on the lateral side of the tibiae, 8 mm proximal to the tibia-fibula junction. For this, uniaxial EA-06-015DJ-120/LE strain gauges (Vishay Micro-Measurements, Raleigh, NC, USA) were trimmed to 4 Â 1.5 mm and attached with Insta-Cureþ cyanoacrylate (Bob Smith Industries, Inc., Atascadero, CA, USA). 
The average slope of the lateral strain versus applied force relationship was determined for each group [WB: -65.15me/N (SD 6.31); HS: -56.55 me (SD 18.87)]. No significant difference was found between the HS and WB groups, indicating that the loading regime induces similar strain magnitudes for weight-bearing and hindlimb-suspended animals at day 14. Based on the findings, a slope of -65.15me/N was chosen, resulting in a lateral strain versus applied force relationship of me lateral ¼ À64:93 Â F applied load ðNÞ This relationship would be used for all the loading parameters in the subsequent experiment. Experimental protocol After acclimation (day 0), 160 rats were assigned to one of three groups: weight bearing (WB, n ¼ 60), hindlimb suspension (HS, n ¼ 60), or hindlimb suspension with daily intermittent weight bearing following damage-inducing loading (HW, n ¼ 40). Owing to the progressive development of the experimental hypothesis based on the WB and HS groups, the HW group was obtained and acclimated 4 months later than the WB and HS groups. Within each group, the rats were further divided into subgroups (n ¼ 20) corresponding to three sacrifice time points [day 14 (WB and HS only), day 18, or day 35]. Animals assigned to the HS and HW groups were briefly anesthetized with an isoflurane (2%)oxygen balance and hindlimb suspended using a custom-made hindlimb suspension system that is adaptable with standard SPF-rated ventilated no. 3 rat boxes. Unpublished work has successfully shown that the custom-made model induces similar physiologic changes as previous models, (16,18,32) where the disuse condition of the hindlimbs resulted in a decrease in bone formation and increase in bone resorption, whereas the general well-being of the hindlimb-suspended animals was maintained. (33) At day 14, all animals were anesthetized, and their left tibiae underwent four-point bending to produce fatigue-induced microdamage. Specifically, the left tibia underwent a sinusoidal loading regime (7200 cycles at 2 Hz) with a maximum and minimum load of 107.8 and 57.6 N, respectively. This induced a maximum lateral strain of À7000 mstrain at the mid-diaphysis for both the HS and WB groups. Previous work had shown that loading the right tibia in a ''nonbending'' configuration of the four-point bending setup, as done by Turner and colleagues, (31) induced strain levels between À1000 to À400 mstrain at the lateral side for the prescribed sinusoidal loading regime. Turner and colleagues used this configuration of their four-point bender to evaluate the effects of just pinpoint loading to the bone and muscle tissue by loading the control leg in a nonbending fashion. For our setup, the prescribed nonbending loading regime was not sufficient to induce microdamage within the region of interest (ROI) 8 mm proximal to the tibia-fibula junction at the point of maximum bending. However, it was found that at the points of contact between the loading pads of the four-point bending setup, significant amounts of microdamage were induced. To test the hypothesis adequately, it thus was determined that any microdamage induced at the application of load for the control load would result in remodeling, which would skew the findings owing to remodeling of the ROI of the left tibia undergoing pure bending. Therefore, the right tibia would serve as a nonloaded, completely undamaged control. 
Once the loading regime was complete, animals were allowed full recovery from anesthesia and subsequently hindlimb suspended or allowed full weight bearing again, in correspondence with their group assignment. Starting at day 15, animals in the HW group were unhooked from the tailsuspension mechanism in their cages and allowed 1 hour of full weight bearing within their respective cages each day, after which they were returned to hindlimb suspension. At sacrifice, right-left pairs of tibiae were carefully dissected free of soft tissue and assigned to one of three treatments within each subgroup: flow cytometry for hematopoietic stem cell (HSC) and monocyte markers (n ¼ 6), basic fuchsin staining for microdamage assessment (n ¼ 7), or histologic and immunohistochemical staining (n ¼ 7). Prior to staining, morphologic analysis was conducted on the last two groups using 3D micro-computed tomography (mCT). Micro-computed tomography (mCT) Once carefully dissected free of soft tissue, the tibiae not used for flow cytometry were scanned on a mCT system (GE Healthcare Systems, London, ON, Canada) and reconstructed with a voxel size of 25 mm. Morphologic parameters were determined for the tibiae at the cortical ROI that experienced the maximum bending during four-point bending (i.e., at the mid-diaphysis). Specifically, the ROI had a length of 4 mm, with its center located 8 mm proximal to the tibia-fibula junction. Bone architectural parameters for this ROI were determined using a custom analysis program and a commercially available voxel analysis software package (MicroView, Version 2.2, GE Healthcare). The following parameters were calculated: tissue mineral content (TMC), tissue mineral density (TMD), cross-sectional moment of inertia (I zz ), and cortical and marrow area. Flow cytometry (FACS) Specimens assigned for flow cytometry were placed in PBS þ 2% normal calf serum (NCS) on ice immediately after dissection. Marrow was flushed from the tibiae and washed in PBS þ 2% NCS, and 10 6 cells were removed and put on ice for subsequent staining. To reduce background noise from red blood cells in the flushed marrow, the collected cells were resuspended briefly in Ack lysis buffer and subsequently washed in PBS þ 2% NCS. To prevent nonspecific binding of selected antibodies, cells were incubated with rIgG, mIgG, and rIgAk (BD Biosciences Pharmingen, San Jose, CA, USA) for 15 minutes at 48C prior to staining. In order to determine the effect of the experimental conditions on the osteoclast lineage, the cells were incubated for 25 minutes at 48C with the following antibodies: anti-mouse CD 117 (c-kit), hematopoietic stem cell (HSC) marker, PE-conjugated, isotype: rat IgG2bk, clone: 2B8 (Beckman Coulter, Inc., Fullerton, CA, USA); anti-rat CD11b (Mac-1 a chain), monocyte-macrophage marker, FITC-conjugated, isotype: mouse IgAk, clone: WT.5 (BD Biosciences Pharmingen). Previous work (34) has shown that the 2B8 clone for mouse CD117 cross-reacted with rat CD117 by FACS (Santa Cruz Biotech, Santa Cruz, CA, USA). In addition, positive staining was determined with the selected CD117 antibody by achieving similar staining for control rat tibiae marrow cells and C57 mouse tibiae marrow cells (data not shown). Once stained, cells were analyzed using a FACS Calibur (BD Biosciences). For each sample, 30,000 events were collected. For analysis of the flow cytometric data, the forward scatter and side scatter gate was set to a region that has been shown previously to include stem cells and monocytes. 
(35) The CD117þ and CD11bþ populations within the gate were identified as cells expressing specific levels of fluorescent activity above the nonspecific staining and autofluorescence of the isotype control. Basic fuchsin staining On dissection and prior to mCT scanning, specimens assigned for basic fuchsin staining were kept in 70% ethanol. After mCT scanning, the tibiae were completely dehydrated and stained with basic fuchsin according to the protocol of Burr and colleagues. (36) Subsequently, they were embedded in Koldmount fast-curing cold monomer (IDP/Vernon-Benshoff Co., Albany, NY, USA) and sectioned 400 mm transverse to the longitudinal axis of the tibia at the mCT ROI using a Buehler Isomet low-speed diamond-blade saw. The section closest to the center of the ROI for each tibia was mounted on a plastic microscope slide using cyanoacrylate and polished to a final thickness of 150 to 200 mm. The sections were examined with a standard confocal microscope (Zeiss LSM 510-META Laser Scanning Confocal Microscope, Carl Zeiss MicroImaging, Inc., Thornwood, NY, USA) at 40 Â magnification using an HeNe1 laser (543 nm) with a Texas red/rhodamine filter. Images were taken for the entire cortical region and subsequently analyzed using the Zeiss LSM Image Browser (Version 4.2.0.121, Carl Zeiss Micro Imaging, Inc.) to quantify linear microdamage within the cortical region (see Fig. 4A) Only linear microdamage was quantified because it has been shown that remodeling occurs only in bone with linear microcracks, whereas bone containing only diffuse damage fails to initiate a remodeling response. (37) Using light microscopy, the cortical cross-sectional areas were calculated with Bioquant image software (BQ OSTEO, Version 7.20.10, Bioquant Image Analysis Corporation, Nashville, TN, USA) while omitting woven bone areas. Histology and immunohistochemistry On dissection and prior to mCT scanning, specimens assigned for histology and immunohistochemistry were immediately placed in 10% neutral buffered formalin (NBF) for 2 days, followed by 70% ethanol. Subsequent to mCT scanning, specimens were decalcified over 5 weeks using 10% EDTA at 48C, after which specimens were paraffin embedded. The bone within the mCT ROI was sectioned 7 mm transverse to the longitudinal axis of the tibia. ELF97 (TRAP) Two paraffin-embedded sections per tibia (1 mm apart within the mCT ROI, with the first section 7 mm proximal to the tibia-fibula junction) were stained with ELF97 phosphate (Molecular Probes, Eugene, OR, USA) to visualize TRAPþ resorption pits within the cortical and woven bone. The protocol used for the fluorescence-based ELF97 TRAP stain was adapted from the in vitro ELF97 stain protocol developed by Filgueira. (38) Specifically, 50 mL of ELF97 reaction mix (41.15 mL dH 2 O, 0.55 mL sodium nitrite, 5.00 mL 2 mM ELF97, 2.20 mL acetate, and 1.1 mL tartrate) was applied to each selected section, which was incubated at room temperature in the dark for 5 minutes. Sections were rinsed subsequently in dH 2 O, and cover slips were applied using Prolong Gold antifade reagent (Invitrogen/Molecular Probes). Sections were imaged immediately using appropriate fluorescent filters (see Fig. 4C,D). Histologic measurements included TRAPþ intracortical resorption pits per cortical area, percent TRAPþ endosteal perimeter (%TRAPendo), and percent TRAPþ periosteal perimeter (%TRAPperi). These measurements were determined while omitting woven bone areas. 
The average measurements for the two sections were used for the subsequent analysis. Picro-sirius red Two sections per tibia (1 mm apart within the mCT ROI, with the first section 7 mm proximal to the tibia-fibula junction) were stained with picro-sirius red F3BA in order to quantify woven bone apposition. (39,40) Using polarized light microscopy, collagen fibers were highlighted to distinguish the lamellar and woven bone and enable quantification of woven bone apposition at the periosteal surfaces using Bioquant image software (BQ OSTEO Version 7.20.10) (see Fig. 4B). Histologic measurements were done of woven bone area. The average measurements for the two sections were used for the subsequent analysis. Immunohistochemical apoptosis detection (Apoptag) One section per tibia (taken at the center of the mCT ROI) was used for immunohistochemical osteocyte apoptosis detection using the ApopTag Plus Fluorescein In Situ Apoptosis Detection Kit (Chemicon/Millipore, Temecula, CA, USA). Specifically, the assay detects apoptosis via fluorescent DNA fragmentation labeling, similar to a standard TUNEL assay. (41) Once stained, sections were cover slipped using propidium iodide-antifade solution (Millipore) (see Fig. 4E, F). Sections of female rodent mammary glands were used as positive controls (see Fig. 4G) owing to extensive apoptosis occurring in the tissue 3 to 5 days after weaning of rat pups. (42) All sections were imaged immediately using appropriate fluorescent filters. Histologic measurements included number of apoptosis-positive osteocytes per cortical area. These measurements were determined while omitting woven bone areas. Statistics and graph nomenclature Results are presented graphically both in absolute values for each individual leg and as the difference between the damaged (þ) and undamaged (-) contralateral legs (Figs. 1 through 3). The term delta (D) is used to indicate the differences between contralateral limbs (damaged tibia -undamaged tibia). Statistical analyses were done for the right undamaged tibiae (serving as internal systemic controls) and delta measurements. To compare damaged with undamaged contralateral sides, paired t tests were used. A two-way ANOVA with a post hoc correction was used between experimental groups (WB/HS/HW) and between groups at different time points for both absolute right tibiae values and delta values. Significance was defined as p .05. The analysis was performed using SPSS statistical software (SPSS, Inc., Chicago, IL, USA). In all figures (for both absolute values and delta), significance is denoted with a letter in the range a to f: (a) indicate significant difference between right versus left leg for that group on the particular day; (b) significant difference between WB and group on that day; (c) significant difference between HS and HW on that day, (d) significant difference between day 14 and day 35 within the specific group (WB, HS, or HW), (e) significant difference between days 18 and 35 within the specific group (WB, HS, or HW), and (f ) significant difference between days 14 and 18 within the specific group (WB, HS or HW). Error bars in all figures indicate standard deviations. Results During the experimental protocol, seven animals had to be euthanized and replaced owing to complete fracture during 4-point bending (WB: 1; HS: 3) or to failure in health maintenance indicated by weight loss of more than 15 percent of initial body weight (HS: 2, HW: 1). Significant initial woven bone formation occurred for all three groups. 
Over time, mineralization increased for all groups, but with the WB group having significantly more woven bone deposited than the HS and HW groups (Fig. 1B), leading to a larger cross-sectional area (Fig. 1A). The woven-bone response resembled the pattern of a stress-fracture response. In addition, mCT showed that tissue mineral content (TMC), tissue mineral density (TMD), and cross-sectional moment of inertia (I zz ) followed similar significant trends as cortical area across groups and time (not shown). Both the HS and HW groups showed a significant increase in delta marrow area at day 35. Quantified confocal microscopic measures of basic fuchsinstained sections revealed that the resulting microdamage (see Fig. 4A) looked similar in nature to microdamage induced in the rat ulnae with lower loads for longer loading durations. (5,(43)(44)(45) In addition, it was shown that significant amounts of microdamage were equally induced at the time of microdamage creation (day 14) for the WB and HS groups ( Fig. 2A). The WB and HW groups showed a significant decrease in microdamage over time, indicating that microdamage resorption had occurred. In contrast, significant damage remained in the HS group for the same time period (see Fig. 2A). Interestingly, the significant decrease in crack density (Cr.Dn.) for the HW group occurred from day 18 to day 35, whereas this occurred between day 14 and day 35 for the WB group. This could indicate a delayed resorption response for the HW group compared with the WB group. Detection of osteocyte apoptosis revealed that the damage induced by fatigue loading resulted in similar and significant increases in apoptotic osteocytes in the cortical bone for all three groups at all time points (see Fig. 2B). The number of apoptotic osteocytes decreased significantly for all groups over time but remained similar between all groups throughout (see Fig. 2B). FACS was used to investigate the osteoclast lineage of the bone marrow. An early marker (CD117) was used for hematopoietic stem cells, whereas the intermediate marker (CD11b) was used for evaluation of the monocyte population. The results showed a systemic increase in CD11b from days 14 to 35 for both the WB and HS groups, whereas a decrease was found for the HW group (Fig. 3A). However, DCD11b was significantly decreased across time for the WB group, indicating that the damaged leg had fewer circulating monocytes relative to the contralateral control limb. The change in DCD 11b for the WB group was significantly different from that in both the HS and HW groups at day 35. The results for CD117 showed no differences in systemic values between groups (see Fig. 3B), whereas it was found that the HS group had a significant systemic reduction in CD117 at day 18 relative to both days 14 and 35. A significant increase in DCD117 across time was found for the WB group, resulting in a significant difference in DCD117 compared with the HS group at day 35 (see Fig. 3B). TRAP staining using ELF97 phosphate showed that control levels of %TRAPendo were significantly lower for the HS group than for the WB group at all time points (see Fig. 3C). It was found that delta %TRAPendo was significantly different between the WB group and both the HS and HW groups at day 18 (see Fig. 3C). In addition, the WB and HW groups had a significant increase in %TRAPperi at day 35, with the increase for the WB group being significantly greater than for the HW group (see Fig. 3D). 
Histology for TRAP also demonstrated a significant increase in DTRAPþ intracortical resorption pits following loading for the WB and HW groups, which was completely absent for the HS group (see Fig. 3E). In addition, the HW and HS groups were significantly different at day 35 (see Fig. 3E). Finally, it was found that the control level of TRAPþ intracortical resorption pits of the right, undamaged tibiae for the HW group were significantly lower than for the WB group at days 18 and 35. Discussion The damage response corresponds to what has been shown in previous animal models, where fatigue loading induced woven bone formation that was both dependent on and proportional to the amount of induced microdamage. (43,44,46) Periosteal woven bone formation after fatigue damage also has been shown to aid in the rapid recovery of whole-bone strength while increasing fatigue life. (44,47) Hence not only does a significant amount of damage remain in animals that were hindlimb suspended, but the protective mechanism of woven bone formation was not present, suggesting that whole-bone strength remains low in a disuse setting following fatigue damage even with daily shortterm physiologic loading. One explanation for the significantly reduced woven bone response for the HS and HW groups may be a significant reduction in blood flow. Studies using HS models have demonstrated reduced blood flow, (22) and it has been shown that there is a correlation between increased fatigue loading, increased vascularity, and increased woven bone formation. (48) However, it also has been shown that 1 hour of daily loading for hindlimb-suspended animals appears to prevent adverse changes in myocardial contractility and therefore blood flow. (28) Hence the significantly smaller woven bone response for the HW and HS groups seems most likely to be stemming from the reduction in osteoblast responsiveness and bone-formation rate associated with hindlimb suspension. (19,49) Previous studies have shown that excessive fatigue loading results in woven bone formation in addition to increased intracortical resorption. The intracortical resorption response, which has been observed separately in several studies, (2,(4)(5)(6)(7)(8) was evident for both the WB and HW groups following fatigue damage but not for the HS group. The absence of the intracortical resorption response could have been due to changes in the initial osteocyte response to the induced microdamage. However, the evidence from immunohistochemistry indicates that this response was not altered by disuse. In addition, a similar decay of apoptotic osteocytes over time (owing to assay technique) was observed for all groups, indicating that although microcrack removal was rescued for the HW group, it was not due to an increase in initial The lack of microdamage removal and absence of intracortical resorption pits for the HS group could be due to either a loss of the ''targeting'' mechanism of remodeling or a lack of osteoclast recruitment following microdamage. In addition, the potential delay in microdamage removal for the HW group relative to the WB group could be due to the much shorter daily loading duration of the HW group or the physiologic assimilation to daily hourly loading following 14 days of constant hindlimb suspension. Examining the HSC and monocyte population of the osteoclast lineage with flow cytometry revealed a significant increase in systemic monocytes for both the WB and HS groups potentially owing to the overall effect of the direct periosteal trauma. 
However, over time, the damaged leg showed a significant reduction in monocytes relative to the control leg for the WB group. This suggests a potential shift in the osteoclast lineage, with a larger amount of monocytes being differentiated in the damaged leg for the WB group owing to the recruitment of osteoclasts to the damaged regions. This ''depletion'' of monocytes would trigger a demand for more, resulting in an increase in the HSC population for the WB group, as suggested by the increase in DCD117 over time. Hence the flow cytometry results indicate that the histologic evidence of ''no damage removal'' at day 35 for HS was due to a lack of osteoclast activation and not a change in the targeting mechanisms of remodeling. Although the HW group showed increased microdamage removal over time, the initial increase in systemic monocytes relative to the WB group was followed by a significant decrease, which could be due to a complete shift in the osteoclast lineage. TRAP staining indicated that intracortical resorption following microdamage is deactivated during disuse and that physiologic loading is necessary for the remodeling repair response that follows significant accumulation of microdamage. The decrease in basal level of TRAPþ intracortical resorption pits of the right, undamaged tibiae for the HW group (compared with the WB and HS groups) could be due to the later point in time that the group was acquired. This is supported by studies showing seasonal changes occurring in bone despite a constant diet, established light/dark cycles, and a steady temperature. (51,52) Despite the possibility of these seasonal changes, TRAP staining for the HW group reaffirmed that targeted resorption was rescued owing to intermittent physiologic loading during disuse. The lack of osteoclast activation for the HS group could be due to a decrease in interstitial fluid flow that results from disuse, (23) particularly given that several studies suggest that load-induced fluid flow may be necessary to provide sufficient transport of larger molecules such as proteins to and from osteocytes. (24,25) In addition, the evidence that osteocyte apoptotic bodies induce osteoclastogenesis leading to localized bone resorption (53) suggests that during hindlimb suspension or disuse, the ''activating'' signal for resorption of microdamage may be related to a lack of fluid flow through the canalicular system, resulting in inhibited delivery of resorption-initiating signals from the apoptotic osteocytes. Furthermore, the results for the HW group may indicate that intermittent loading during disuse can provide enough interstitial fluid flow to distribute any ''active'' signal of resorption. These findings are supported by recent studies by Tatsumi and colleagues (54) and Cardoso and colleagues, (9) who both demonstrated a causal relationship between osteocyte apoptosis and activation of bone resorption. While Cadoso and colleagues (9) demonstrated this with fatigue-induced osteocyte apoptosis, the study by Tatsumi and colleagues (54) used a transgenic mouse model for inducible and specific osteocyte ablation through apoptosis. The work by Tatsumi and colleagues parallels our findings by showing a trend toward a decrease in trabecular osteoclasts in addition to a significant reduction in RANKL expression for tail-suspended versus ground-based osteocyteablated transgenic mice. Similar to our findings, this would indicate that the removal of physiologic loading leads to a lack of osteoclast activation. 
There are a number of limitations associated with this study. The decrease in apoptotic osteocytes over time may be associated with the method of apoptosis detection. The ApopTag Plus Fluorescein In Situ Apoptosis Detection Kit detects apoptosis via fluorescent DNA fragmentation labeling, similar to a standard TUNEL assay. The kit can detect early-stage apoptosis but also will stain DNA fragments in apoptotic bodies. Since the apoptotic cycle is completed in 1 to 2 days, the early apoptotic osteocytes are now replaced with nonattached apoptotic bodies within the lacunae. The processing of the tissue from days 18 and 35 could remove these bodies, leaving an empty lacuna. For this study, only early apoptotic osteocytes were accounted for, thus revealing a decrease from day 14 to day 35, similar to that observed from day 3 to day 7 for vehicletreated rats with induced bone microdamage. (55) In addition, the focus of the apoptosis assay was to determine if the amount of apoptotic osteocytes was similar between groups immediately following microdamage because different osteocyte apoptotic baselines between groups could have given rise to a different osteoclast response. A second unexpected observation is the absence of an increase in apoptotic osteocytes for the right, nondamaged tibia across time for all groups (see Fig. 2B). This is countered by the work by Aquirre and colleagues, (56) who demonstrated an increase in apoptotic vertebral cortical osteocytes for hindlimbsuspended adult mice. The reasons for this discrepancy could be several: (1) the difference in the method of osteocyte apoptosis detection and (2) the rate or occurrence of osteocyte apoptosis owing to hindlimb suspension could be different between the vertebral body and the tibia. Basso and colleagues (57) showed a significant increase in apoptotic osteocytes in the tibia of male rats with 2 weeks of hindlimb suspension. This was done in 5week-old animals, however, which might exhibit a different rate of apoptosis than a more skeletally mature animal. Another explanation for the observation of no changes (or perhaps delayed changes) in osteocyte apoptosis could be the ability of our rats to maintain weight beyond an initial assimilation period (Tables 3 and 4). Tables 1 through 3 contain previously unpublished data from our hindlimb-suspension model development that demonstrated the effects of pure hindlimb suspension and weight bearing up to 35 days. As seen in Table 1, 35 days of hindlimb suspension did not decrease the cortical area significantly (although significant cancellous bone was lost). This parallels the studies by Bloomfield and colleagues, (19) who showed an increase or maintenance in cortical area with a decrease in trabecular parameters when mature rats maintained their weight over 4 weeks of hindlimb suspension. In contrast, studies by Hefferan and colleagues, (58) Allen and colleagues, (59) and Vico and colleagues (60) showed early decreases in cortical area and trabecular bone volume fraction (BVF) during 2 to 4 weeks of HS in mature rats who lost significant weight continuously during the hindlimb-suspension period. Hence longer periods of hindlimb suspension might result in significant increases in apoptotic osteocytes in our model. A final limitation relates to the direct periosteal trauma and reaction caused by the loading pads of the four-point loading system to the cortical contact region. 
Owing to the direct loading of the periosteum, the sections for histology all were taken at the center of the ROI, that is, approximately 5.5 mm away from the nearest loading point. By doing so, the effect of direct loading was minimized or absent owing to the relatively great distance away from the loading points. However, the potential effect of direct periosteal trauma on the marrow constituents used for flow cytometry cannot be discerned from the effect owing to microdamage. Despite the limitations, the data suggest that individuals with severely limited activity potentially may accumulate unrepaired microdamage, thereby increasing fracture risk. The change in remodeling owing to disuse also may influence how stress fractures are treated. The current treatment of stress fractures by casting and/or prevention of load bearing may need to be reconsidered because the repair of microdamage may proceed more effectively when combined with physiologic loading. Early evidence to support this hypothesis has been found in treatment of running injuries. Standard treatment methods for stress fractures associated with long-distance running typically prescribe up to 12 weeks of therapy (dominantly non-weight bearing) before returning to a normal running schedule. (29) Recent studies, however (without an experimental basis), have decreased the recovery period by implementing earlier crosstraining, enabling the athlete to return to function in only 7 weeks. (30) The results from these early clinical studies parallel the results in this study by demonstrating that an intermittent controlled therapeutic loading regime of the injured limb potentially could increase the rate and extent of recovery from a stress fracture owing to the increase in targeted remodeling. In summary, this study demonstrates that physiologic loading is necessary for a remodeling repair response to occur following significant accumulation of microdamage. In addition, intermittent daily physiologic loading can reverse the loss of remodeling in response to microdamage during disuse. Although intermittent loading cannot rescue the reduction in woven bone production following microdamage, the targeted resorption of microdamage is revived. Last and most important, while a number of studies have proposed that the repair of microdamage is triggered by cell apoptosis, these results demonstrate that this mechanism may be insufficient without the stimulation provided through physiologic loading. Disclosures All the authors state that they have no conflicts of interest. Italic ¼ value significantly different from zero ( p < .05).
2014-10-01T00:00:00.000Z
2009-10-12T00:00:00.000
{ "year": 2009, "sha1": "fd56129113becd365d3d919a527690d073065058", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1359/jbmr.091016", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "851001596446d005596d0154caf1c232f0bcdc28", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118446562
pes2o/s2orc
v3-fos-license
The Expanded Very Large Array -- a New Telescope for New Science Since its commissioning in 1980, the Very Large Array (VLA) has consistently demonstrated its scientific productivity. However, its fundamental capabilities have changed little since 1980, particularly in the key areas of sensitivity, frequency coverage, and velocity resolution. These limitations have been addressed by a major upgrade of the array, which began in 2001 and will be completed at the end of 2012. When completed, the Expanded VLA -- the EVLA -- will provide complete frequency coverage from 1 to 50 GHz, a continuum sensitivity of typically 1 microJy/beam (in 9 hours with full bandwidth), and a modern correlator with vastly greater capabilities and flexibility than the VLA's. In this paper we describe the goals of the EVLA project, its current status, and the anticipated expansion of capabilities over the next few years. User access to the array through the OSRO and RSRO programs is described. The following papers in this special issue, derived from observations in its early science period, demonstrate the astonishing breadth of this most flexible and powerful general-purpose telescope. INTRODUCTION The Very Large Array (VLA) is an imaging radio interferometer located in west-central New Mexico, operated by the National Radio Astronomy Observatory (NRAO) 2 . It comprises 27 antennas of 25-meter diameter positioned along three equiangular arms of length 21 km, nine antennas per arm. The array provides images of astronomical objects in all four Stokes parameters, with a diffraction-limited maximum resolution of 1.4 arcseconds at 1.4 GHz and 40 milliarcseconds at 50 GHz, utilizing the well-established techniques of aperture synthesis, as described for example in Thompson, Moran, and Swenson (2001). The VLA is an exceptionally flexible telescope, in part due to its ability to reconfigure -there are four standard configurations of maximum baseline lengths of 1, 3.4, 11, and 36 km, providing a wide range of resolutions and image surface brightness sensitivities. Descriptions of the VLA as originally designed are given found in Thompson et al. (1980) and Napier et al. (1983). A picture of the VLA in its most compact configuration is shown in Fig. 1 The VLA was designed and built in the 1970s utilizing the best technology available at the time. Upon its completion in 1980, the array could observe in four frequency bands with a maximum bandwidth of 100 MHz per polarization. Its innovative digital correlator provided a maximum of 512 spectral channels, spanning a maximum of 3 MHz for each of the 351 baselines in a single polarization, or it could provide full Stokes visibilities for polarimetric imaging but without any spectral resolution. These capabilities were ground-breaking at the time, and were well-matched to the key science goals for the telescope, which included imaging the Dopplershifted hydrogen emission from nearby galaxies, and resolving the fine-scale structure of powerful radio galaxies, quasars, and supernova remnants. With 27 antennas and a 2-dimensional array, the VLA was designed for sensitivity, speed, and flexibility of operation. Besides the ability to change its physical scale, changes in frequency band and corrrelator mode could be effected in seconds, enabling astronomers to acquire a range of information on astronomical objects not possible on any other centimeter-wavelength radio telescope. 
These attributes encouraged users of the VLA to image radio emission from processes far removed from those given in the original proposal as science goals for the array -indeed, much of the VLA's most original and influential observations are in fields unanticipated or unknown at the time of construction. Most components of the VLA's design remained unchanged for twenty years following its completion -in particular, the signal transmission and correlation capabilities remained at their 1980 levels, essentially freezing the array's bandwidth and spectral resolution. During this interval, and continuing on to this day, spectacular improvements in signal transmission and processing have taken place, making it clear that a minimum of an order of magnitude improvement in the array's sensitivity, frequency coverage and spectral resolution could be obtained at modest cost by implementing these new technologies. During this same interval, the breadth and range of astronomical science had changed dramatically, with ever increasing emphasis on rapid time response, fast imaging, precision polarimetry, high sensitivity, wider frequency coverage, and higher spectral resolution. In response to these expanding needs, the NRAO proposed to the NSF in 2000 a far-reaching expansion of the VLA's capabilities, essentially marrying modern high-speed wide-band digital and wide-band receiver technologies to the sound infrastructure already in place. Following a high ranking by the 2000 Decadal Review National Research Council (2001), the EVLA Project began in 2001 and will be completed by the end of 2012. The Expanded Very Large Array Project is a partnership among the U.S., Canada, and Mexico, with a total budget of $96M (inflation-adjusted, in 2011 dollars). KEY EVLA GOALS AND CAPABILITIES The technical goals of the EVLA are based on a comprehensive review of potential science enabled by a minimum tenfold increase in capabilities over the VLA. The identified science capabilities were organized into four major themes: • The magnetic universe: measuring the strength and topology of cosmic magnetic fields; • The obscured universe: enabling unbiased surveys and imaging of dust-shrouded objects that are obscured at other wavelengths; • The transient unverse: enabling rapid response to, and imaging of, rapidly evolving transient sources; • The evolving universe: tracking the formation and evolution of objects in the universe, ranging from stars through nearby galaxies to galactic nuclei. Within each theme, it was readily demonstrated that order-of-magnitude improvements in VLA performance would result in spectacular new science by the worldwide user community. Based on these conclusions, the fundamental goals of the EVLA Project are: • Complete frequency coverage from 1 to 50 GHz, via eight new or improved receiver bands, utilizing state-of-the-art technology. See Table 1 for basic characteristics. • New antenna electronics to process eight signal channels of up to 2 GHz each. • High-speed wide-band digital samplers within each antenna to sample up to 8 GHz bandwidth in each polarization. • A new wide-bandwidth fiber-optical data transmission system to conduct digital signals from the antennas to the correlator. • A new wide-bandwidth, full polarization correlator providing a minimum of 16384 spectral channels per baseline. c The System Equivalent Flux Density is a measure of the antenna sensitivity: SEFD = 2kTsys/Ae. It is the flux density of a source which doubles the system temperature. 
• A new real-time control system for the array, and new monitor and control hardware for the electronics. • New high-level software that provides easier and more flexible management of EVLA capabilities to its users. Notes to Table 1: (a) the expected rms noise in a 1-hour integration at high elevation and under good weather conditions; for the continuum case the bandwidth utilized is that listed in column four, while for the line case a bandwidth corresponding to 1 km/sec velocity resolution is assumed; (b) an estimate of the effective bandwidth available, free of RFI; (c) the System Equivalent Flux Density, a measure of the antenna sensitivity, SEFD = 2kT_sys/A_e, i.e. the flux density of a source which doubles the system temperature. The WIDAR Correlator A key component of the EVLA is its new wideband digital correlator, known by the acronym WIDAR. This is a 10 peta-32-bit ops/sec special-purpose computer which produces the cross-power spectral visibilities for all baselines in the array. A description of its design is given in Carlson and Dewdney (2000). Its key astronomical capabilities are summarized below: • 16 GHz maximum instantaneous input bandwidth. • A minimum of 16384 spectral channels per baseline, and a maximum exceeding 4 million. • Full polarization capabilities on all baselines and channels. If full polarization is not needed for the science, correlator resources can be reallocated to provide higher spectral resolution for the parallel-hand correlations. • Generation of 64 independent "spectral windows", each of which is separately tunable in frequency and bandwidth. The spectral window width is variable, by factors of two, from 128 MHz down to 31 kHz. • The ability to improve spectral resolution by utilizing correlator resources freed up with a reduction of the spectral window width, or by reallocating resources from unneeded spectral windows or polarization products. The spectral resolution is adjustable from a maximum of 2000 kHz to a minimum of 0.12 Hz. • A minimum integration time of 100 msec with the standard minimum 16384 spectral channels, and less with a reduced spectral resolution. Besides these basic capabilities for regular observing modes, there are a number of speciality modes, including: • A phased array mode, where the signals from all antennas are combined in phase, and made available for external capture and analysis, such as for VLBI applications and pulsar processing. • A specialized pulsar binning mode, providing up to 2000 phase bins with temporal resolution as short as 200 µsec with all spectral channels, and as short as 15 µsec with a reduced spectral resolution, enabling rapid imaging of objects such as globular clusters and the Galactic Center where multiple pulsars are expected to lie within the antenna primary beam. • Up to eight simultaneously-operating subarrays, each with a different target point and correlator configuration. • An external data capture capability, allowing antenna or phased array outputs to be externally recorded for off-line processing. The WIDAR correlator is Canada's contribution to the EVLA project, and was designed and built to meet or exceed the requirements of the EVLA by the HIA correlator group, located at DRAO near Penticton, BC, Canada. A more thorough description of the EVLA's design, including that of its correlator, is found in Perley et al. (2009). EVLA CAPABILITIES GROWTH A compact summary of the expansion in capabilities of the EVLA in comparison to those of the VLA is provided in Table 2. The conversion of the VLA into the EVLA was scheduled to take more than a decade. It was therefore considered vital to maintain operation as a productive scientific facility throughout the conversion process.
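To connect the SEFD defined in the Table 1 notes to the ~1 microJy/beam continuum sensitivity quoted in the abstract, the sketch below evaluates the standard array radiometer equation, sigma ~ SEFD / (eta_c * sqrt(n_pol * N(N-1) * dnu * t)). The SEFD, correlator efficiency and usable bandwidth used here are assumed, representative values, not numbers taken from this paper; the point is only the order of magnitude.

```python
import math

# Order-of-magnitude point-source sensitivity from the array radiometer equation.
# All input values below are assumptions for illustration.
SEFD_JY   = 300.0       # assumed per-antenna SEFD, Jy
ETA_C     = 0.9         # assumed correlator efficiency
N_ANT     = 27
N_POL     = 2           # two orthogonal polarizations combined
BANDWIDTH = 6e9         # assumed usable bandwidth after RFI losses, Hz
T_INT     = 9 * 3600.0  # 9 hours, in seconds

sigma_jy = SEFD_JY / (ETA_C * math.sqrt(N_POL * N_ANT * (N_ANT - 1) * BANDWIDTH * T_INT))
print(f"rms ~ {sigma_jy * 1e6:.2f} microJy/beam")   # comes out near 1 microJy/beam
```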
Maintaining a working array throughout the conversion required designing in backward compatibility between the newly-converted antennas and the original correlator. This process has been very successful, enabling nearly seamless continuing observing, with the array only shut down for a single 7-week period between January and March 2010 in order to move hardware from the old VLA correlator to the WIDAR correlator upon the latter's implementation. This has enabled the NRAO to offer steadily increasing scientific capabilities to the user community ahead of the completion of the construction project. The growth in capabilities can be separated into two parts: that provided by the antennas, including the receivers and the data transmission system, and that provided by the correlator. Figure 2 shows the current and anticipated availability of the eight receiver bands. Full outfitting of four receiver bands is now complete -- these are the 4-8 GHz band, and the three highest frequency bands, spanning 18 through 50 GHz. A critical component, not illustrated in the figure, is the growth in data transmission capabilities. The current maximum total bandwidth which can be transferred from antenna to correlator is 4 GHz, available in two pairs of oppositely polarized signals of bandwidth 1 GHz each. The full 16 GHz capability will not be available for science observing until at least mid-2012. Growth in Correlator Capabilities The WIDAR correlator is capable of a diverse range of observing modes. Following the decommissioning of the old VLA correlator in January 2010, a basic set of WIDAR correlator capabilities, modelled on those of the VLA, was defined and offered for the first full EVLA standard configuration cycle; in fact it more than doubled the total available bandwidth per polarization, to 256 MHz, and dramatically increased the number of available spectral channels at the maximum bandwidth. At the same time the commissioning of the correlator proceeded to focus on delivering the maximum 2 GHz per polarization bandwidth currently available, in preparation for the configuration cycle beginning in September 2011 in the most compact configuration. The jump from 256 MHz to 2 GHz marks a potential increase in dataset size by a factor of 8, and with the total 8 GHz bandwidth expected in 2012, there will be a further increase. In its highest spectral resolution modes, with typically hundreds of thousands of channels per integration, visibility dataset sizes will potentially be several terabytes. By the beginning of EVLA full operations in January 2013, it is expected that up to 8 GHz bandwidth (for those bands supporting these bandwidths; see Table 1) will be available, with up to 4 million channels for spectroscopy. Science Commissioning The delivery of the full science capabilities of the EVLA, as opposed to the correlator modes and updated electronics and receivers, requires the development of new observing, calibration, and post-processing procedures compared with the VLA, plus the software to support them. The WIDAR correlator is fundamentally a spectral line correlator, so that even standard "continuum" observations of astronomical sources are carried out in a spectral line correlator mode. Furthermore, the vastly increased sensitivity of the EVLA provides new opportunities such as fast "on-the-fly" mosaics of large areas, large (in number) source surveys, and high time resolution observations. All these observing modes are new to the EVLA, and each will be commissioned by EVLA staff.
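The dataset sizes quoted above follow from simple scaling: the raw visibility volume grows with the product of baselines, channels, and polarization products. The arithmetic below is illustrative only, with assumed values for integration time and channel count; routine setups use longer integrations and fewer channels, which is why typical data rates are much lower than this worst-case estimate.

```python
# Rough scaling of EVLA visibility data volume.  All parameters are assumed,
# illustrative values, not figures from the paper.
N_ANT     = 27
N_BASE    = N_ANT * (N_ANT - 1) // 2     # 351 cross-correlation baselines
N_CHAN    = 16384                        # spectral channels per baseline
N_POL     = 4                            # full-polarization products
BYTES_VIS = 8                            # one complex visibility (2 x float32)
T_INT     = 1.0                          # assumed integration time, s
T_OBS_HRS = 9.0                          # assumed observation length

rate_bps = N_BASE * N_CHAN * N_POL * BYTES_VIS / T_INT
size_b   = rate_bps * T_OBS_HRS * 3600.0
print(f"data rate ~ {rate_bps / 1e6:.0f} MB/s, 9-hr dataset ~ {size_b / 1e12:.1f} TB")
# For these assumptions: ~180 MB/s and a multi-terabyte dataset, in line with
# the "several terabytes" figure quoted for high-spectral-resolution modes.
```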
At the time of EVLA full operations many new observing modes and capabilities are expected to be available, but some more specialized modes may take longer. The wide bandwidths of the EVLA present a special problem at its lower frequencies (ν ≲ 10 GHz, although higher frequencies are also affected to some extent), for which the radio spectrum suffers considerable external interference (RFI) that is both temporally and spatially variable. The development of automated flagging and interference excision procedures and software is a key area of commissioning throughout 2011 and 2012. 3.4. EVLA Early Science Programs EVLA Early Science began in March 2010 with the array in its most compact, D-configuration, and will continue through the end of the EVLA construction project until full operations commence in January 2013. It includes an Open Shared Risk Observing (OSRO) program for the general user community, and a Resident Shared Risk Observing (RSRO) program. The OSRO program offers the correlator capabilities described in Section 3.2, above. The RSRO program offers participants full access to the growing capabilities of the WIDAR correlator for peer-reviewed science projects, in exchange for a period of residence at the Domenici Science Operations Center (DSOC) in Socorro to assist with the EVLA commissioning. It is intended to accelerate the development of the EVLA's full scientific capabilities by gaining enhanced resources and expertise through community participation, at the same time as more quickly optimizing the scientific productivity of the EVLA. To date, 27 individuals have passed through the DSOC as RSRO visitors. The papers in this Special Issue of the Astrophysical Journal Letters utilize data obtained through both of these Early Science programs. USING THE EVLA Observing time on the EVLA is open to all astronomers world-wide. There are no quotas or reserved blocks of time. Starting in 2011, time on the EVLA is scheduled on a semester basis, with each semester lasting six months. Proposal deadlines are 5pm (1700) Eastern Time on February 1 and August 1. The February 1 deadline nominally covers observing time in August or later, and the August 1 deadline nominally covers observing time in February or later. At either proposal deadline, requests for future array configurations may also be considered. Astronomers prepare and submit observing proposals using the on-line Proposal Submission Tool (PST). The PST permits the detailed construction of a cover sheet specifying the requested observations, using a set of online forms, and the uploading of a scientific and technical justification to accompany the cover information. Funding opportunities are available for students at US institutions and may be requested via the PST. Proposal evaluation involves technical reviews by NRAO staff and panel-based science reviews by community members. The results of these reviews are cross-reconciled by the community-based Time Allocation Committee, leading to an approved science program. Information from approved proposals flows to the web-based Observation Preparation Tool (OPT), an on-line tool for creating observing scripts. Astronomers use the OPT to specify sources to be observed, instrumental setups, and timing information, all packaged as a Scheduling Block (SB). Astronomers also use the OPT to submit their Scheduling Blocks to a dynamic queue. NRAO staff use the Observation Scheduling Tool (OST) to examine the queue of pending scheduling blocks and the current observing conditions.
The OST then applies heuristics to select the optimal scheduling block and send it off for execution by the monitor and control system. Such dynamic scheduling enhances science data quality and the array's ability to discharge time-sensitive science. The OST can also be used to execute a fixed-date scheduling block, as required for a coordinated observation with other telescopes. As an observation progresses, NRAO staff monitor the array's health, maintain an observing log, and ensure that science data are being archived. At the conclusion of an observation the observing log is e-mailed to the astronomer. This log includes a link to the Archive Access Tool (AAT), an on-line search tool that the astronomer can use to locate and download the archived data. Those data are proprietary to the proposal's authors for a period of 12 months. DATA POST-PROCESSING, PIPELINES, AND ALGORITHM DEVELOPMENT It is clear that the data post-processing needs of the EVLA will far exceed those of the VLA. The salient features of the EVLA that will drive the post-processing software development are the capabilities to produce data covering: (i) a large instantaneous bandwidth (at the lowest three frequency bands the high-frequency end is twice the frequency of the low end), and (ii) a large number of spectral channels (usually flexibly arranged in non-contiguous multiple spectral windows of varying width and frequency resolution). These two features result in high data rates (> 5 MB/s) as a matter of course, with the WIDAR correlator capable of producing much higher rates (up to 350 GB/s!), and also therefore with the prospect of dealing with large (> 1 TB) datasets. Therefore, calibration and imaging of the data from the EVLA will present problems that must be solved by a combination of a scalable post-processing package, algorithmic improvements, and processing and I/O speed gains. A large fraction of data collected by the EVLA will be taken in one of a number of standard observing modes, for example, low frequency continuum, high frequency continuum, HI (neutral hydrogen) spectral line, or polarization. Given the considerable experience in reducing data taken in similar kinds of modes with the VLA, it is reasonable to assume that reduction of this type of data can be mostly automated. The post-processing package (see below), when combined with some information collected from the astronomer in the observation preparation stage (in the Observation Preparation Tool -- for instance what a particular source is meant to actually be used for in the post-processing), during actual observing, and with some heuristics (rules for what to do given certain situations), should be sufficient to complete such automatic reductions. While we are not currently providing an automated reduction pipeline for EVLA data, we plan to do so in the near future. When this occurs, all data taken in any of the standard modes on the EVLA will be processed with this pipeline, and the results (mainly so-called "reference image cubes") made available via the science data archive, subject to proprietary constraints. While these reference image cubes may be sufficient to give the investigators (and others) an idea of data quality and crude source characteristics, there is some concern that it will be extremely difficult to ever provide completely reliable automatic pipeline products as a final data product from the EVLA. For that, some human intervention, either by NRAO staff, or the astronomers themselves, may be needed.
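To make the idea of a scripted, mostly automated reduction concrete, the sketch below strings together a few long-standing tasks from CASA, the package adopted for EVLA post-processing as discussed below. This is emphatically not the NRAO pipeline described in the text: the measurement-set name, calibrator fields, and reference antenna are placeholders, flagging and flux bootstrapping are omitted, and exact task parameters vary between CASA versions. It is meant to be run inside a CASA session, where these tasks are defined.

```python
# Minimal, schematic continuum calibration and imaging inside a CASA session.
# Dataset and field names are hypothetical placeholders.
vis      = 'myproject.ms'    # hypothetical measurement set
fluxcal  = '3C286'           # assumed flux/bandpass calibrator
phasecal = 'J1331+3030'      # assumed complex-gain calibrator (placeholder)
target   = 'MYTARGET'        # placeholder science target
refant   = 'ea05'            # assumed reference antenna

setjy(vis=vis, field=fluxcal)                                   # set the flux scale
bandpass(vis=vis, caltable='cal.B', field=fluxcal,
         refant=refant, solint='inf')                           # bandpass solution
gaincal(vis=vis, caltable='cal.G', field=fluxcal + ',' + phasecal,
        refant=refant, solint='int', calmode='ap',
        gaintable=['cal.B'])                                    # time-dependent gains
applycal(vis=vis, field=target,
         gaintable=['cal.B', 'cal.G'],
         gainfield=['', phasecal])                              # apply to the target
clean(vis=vis, imagename='mytarget_cont', field=target,
      mode='mfs', imsize=[1024, 1024], cell='1.0arcsec', niter=1000)
```

In practice each of these steps would be preceded by inspection and flagging, which is exactly where the human intervention discussed above enters.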
Whether completely reliable automatic pipeline products can ever be delivered is an active area of investigation within the observatory. For all data which cannot be reliably reduced via a pipeline, or for astronomers who wish to modify or extend what is done within the pipeline, there must be a post-processing software package capable of performing all steps necessary to turn the measured visibilities into final image cubes. For the VLA, several packages have been used for data editing and calibration over the years, but for nearly the entire lifetime of the VLA, AIPS (Greisen, 2003) has been the primary software package for data processing. There are problems with AIPS, however, that prevent its continued long-term use for EVLA data post-processing. The post-processing package of choice for the EVLA project is CASA (Common Astronomical Software Applications; see, e.g., McMullin et al., 2007), because of its scalability, ability to be parallelized, scriptability, expertise within NRAO, and commonality with ALMA. Not everything that is needed for EVLA data reduction (and certainly not for robust pipeline reduction) is currently available within CASA, however, so there is a program of implementation of missing pieces within the package. In addition, a vigorous program of algorithm development within NRAO is ongoing, to address items that are not merely missing from CASA but have no generally accepted algorithmic solution at all. Notably, automatic flagging of suspect data, wide-field wide-bandwidth full polarization imaging, RFI detection and excision, and ionospheric corrections are all areas of active research, and will be implemented within CASA as soon as possible after accepted algorithms are developed. Many of these areas of research are of course not unique to the EVLA, so developments at other observatories and telescopes are closely watched so that we can take advantage when appropriate. Finally, in order to support the scale of computing that is needed for both pipeline and hands-on post-processing of EVLA data, we are planning to provide a mid-sized computing cluster (tens of nodes) for both the automatic pipeline reduction of standard mode observations, and for somewhat more interactive reduction by astronomers. We believe this, along with improvements in the speed of the CASA code itself, is sufficient to support the needs of the astronomical community. We assume that most access to the cluster will be either by the NRAO-controlled automatic reduction pipeline, or by batch jobs submitted by remote users. We are committed to remaining flexible in this plan, however, as this will be a new era for post-processing of interferometer data, and we must be able to react to new developments, pressure from the community, and other realities as we go forward. SUMMARY The EVLA is a major expansion to the highly flexible and productive VLA. By expanding the bandwidth to 8 GHz/polarization, adding receivers to provide full frequency coverage from 1 to 50 GHz, and implementing a new correlator with superb spectral resolution and flexibility, the EVLA will provide orders of magnitude improvement in scientific capability over the VLA -- capabilities that will ensure that the EVLA will be the premier general-purpose cm-wave imaging radio telescope for at least the next decade, serving the world user community for investigations into, and understanding of, celestial radio transients, the evolution of radio-emitting objects in the universe, the structure and strength of celestial magnetic fields, and the dusty obscured regions opaque to most other wavelengths.
A project of this magnitude requires the talents and dedication of hundreds of individuals, far too numerous to list here. To all of these we offer our thanks for their efforts to make this project a success. It is a pleasure to acknowledge the three funding agencies supporting this project: the U.S. National Science Foundation, the Canadian National Research Council, and the Mexican Consejo Nacional de Ciencia y Tecnología.
2011-06-02T23:17:02.000Z
2011-06-02T00:00:00.000
{ "year": 2011, "sha1": "8fdd1a878af693aeaff148f6e899700e87dcc689", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1106.0532", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8fdd1a878af693aeaff148f6e899700e87dcc689", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
172001901
pes2o/s2orc
v3-fos-license
Parmenides’ Poem: Riddle from B 5: The paper constitutes a short analysis of the poem of Parmenides from Elea, "On Nature". The author posits that this text is the original aim of ontology. In the author's opinion, the most important thesis of the poem is to be found in the fragment B 5, in which she recognizes the ancient motif of self-knowledge ("the inner Way of Truth"). The primary purpose of the analysis is to interpret the mythological language and to reconsider terminology, e.g. Way of Day and Way of Night, Dike and Moira, thymos, plankton noon. Furthermore, the thinking of Parmenides is briefly interpreted in comparison with Heraclitus, Anaximander, and Archytas. Parmenides' Poem: Riddle from B 5 Małgorzata Bogaczyk-Vormayr (Uniwersytet im. Adama Mickiewicza w Poznaniu, bogaczyk@amu.edu.pl) In this short essay I attempt to examine the poem of Parmenides from Elea -- a text of unusual beauty which fascinates many scholars. The poem is full of unsolved mysteries and yet is capable of clarifying certain moments of Greek philosophical thought, or of enchanting us with a single piece in which we find something of utmost importance: a sentence, metaphor or an expression that becomes some kind of recurrent phrase when we reread the text. In my interpretation of the poem I give special attention to fragment B 5. On Nature is the first text which dwells on the philosophical concept of being in terms of a theoretical explanation of that which exists, which is the original aim of ontology. Parmenides was the first philosopher to address the issue of truth and ascribe the meaning to it which modern philosophy has sometimes interpreted as a postulate of genuineness and efficiency in stating judgments. It was Parmenides, however, who founded that constantly necessary philosophical attitude that uncovers itself as a kind of pattern and amounts to what we used to call "the Greekness" -- the position we nevertheless assume as potentially given. I would like to reflect on what that attitude means and what kind of effort it demands from the poem's hero. The proper theme of the poem is the truth itself together with the way to which it leads -- the Way of Truth (also Way of Light), which has a well-rounded nature (B 1,29; B 5; B 8,42-44), in contrast to the linear Way of Appearance/Opinion (pistis alethes -- Way of Night, which lacks the true reliability). It illustrates the journey of a youth which begins with a poem announced by goddess Dike and ends with a return, which in my interpretation should be assigned to fragment B 5 instead of the last part of the poem: […] ξυνὸν δέ μοί ἐστιν, ὁππόϑεν ἄρξωμαι· τόϑι γὰρ πάλιν ἵξομαι αὖϑις. For me, where I am to begin from is the same, for to there I will come back again (B 5). In its description or characteristics the idea of well-rounded truth does not merely amount to a representation of perfection and completeness of being. Any reference to the mythical meaning of that roundedness is insufficient because, even if it plays the key role for interpreting the poem, neither the fragments where Dike ties meaning to judgment by reason (B 7,5) rather than the habits of experience and sensational perception, nor the person of Moira presuming the delimitation (peras) for any possible discourse, can explain fragment B 5 as such. Only a number of questions posed by the poem before the picture of "all things" is announced by Dike might explain thereafter the meaning of these two concepts: delimitation and a circle.
As a result, the main subject is the source of true cognition together with the importance of the Way of Opinion and human cognition in general and as such it is equivalent to searching directly for the answer of the questions already posed by Milesian philosophers, Pythagoreans and Heraclitus.Parmenides entered philosophy when it had already recognized the duality of the world as some kind of cosmic divide which some understood as a source for establishing the highest principle.Some, such as Heraclitus, consider it as a source of intuition of unity which manifests itself in the form of logos but which does not answer the question of whether we can find any fixed element in the continuous flow of events or unending process of becoming and changing that takes place in cosmos.Others, such as Anaximander, believed that the first philosophical intuitions considered apeiron as the world principle and that philosophy is open to infinity.It was Parmenides who answered the question positively, hence developing for philosophy the proper notion of being together with its opposition: non-being.He understands philosophy as delimiting being (to eon) in the limits (peras) of the structures of logic. The poem in the text stands for a mythical image of unveiling the truth. The following descriptions suggest the circumstances under which truth is discovered: maidens guiding the way push back veils from their heads, gates of the roads of Night and Day with "the bronze posts fastened with bolts and rivets" and a "stone threshold", Dike "holding the keys that fit them" and finally "making a gaping gap of the doors" (B1,10-18).Dividing Dike delimits truth from opinion, marking the moment of passage from darkness and from that what is unclear or obscure (to apeiron) to that what is uncovered -the truth conceived as aletheia.C. F. von Weizsäcker wrote about Parmenides: He begins his seemingly abstract poem with a vivid image of his own journey to wide opened gates of wisdom, so the goddess Dike could order him to look carefully and after that he finally could see (Weizsäcker 2002, 477; translation mine). Both Dike and Themis, or Moira, point at the limits of cognition and judgments about being which amount to rational reasoning (noein).Thus, they state the procedure of inquiry (method) characterizing the Way of Truth and having the nature of a sentence legitimized by law: "It is or it is not" (B 8,16). The mares which carry me as far as my spirit ever aspired were escorting me, when they brought me and proceeded along the renowned road of the goddess, which brings a knowing mortal to all cities one by one.On this path I was being brought, on it wise mares were bringing me, straining the chariot, and maidens were guiding the way. 
(B 1, 1-5) In Greek imagination the mares, chariot and guides are a prototype of a journey to face Helios or Zeus and it is bound with the intent of presenting some kind of revelation or awakening.However, "the mares which carry me" are for one restrained by the end of the journey that is "the renowned road" and for another they carry the youth as his "spirit ever aspired".How should we understand the spirit (thymos) of the youth and the wisdom of the mares?I assume that the correct interpretation of fragment B1,4-5 should be as follows: I proceeded there, for on this path wise mares straining the chariot were bringing me and maidens were guiding the way.The mares stand for a symbol of "a knowing mortal's" cognitive faculties -"he man who knows" and can be carried "through all places" (B1,3). The Greek notion thymos can designate courage, heart, will and also spirit (soul) or mind.In the latter case, it is conceived as a location of thoughts and an origin of memory in terms of what can be evoked (a meaning similar to thymos pherein).Interpreted as such, thymos is a condition for searching and becomes a characteristic of what we call the unity of philosophical attitude.Only the authentic attitude of one who questions and inquires can be rewarded with some kind of mystical initiation.As the Pythagorean Archytas of Tarentum has put it: To become knowledgeable about things one does not know, one must either learn from others or find out for oneself.Now learning derives from someone else and is foreign, whereas finding out is of and by oneself.Finding out without seeking is difficult and rare, but with seeking it is manageable and easy, though someone who does not know how to seek cannot find. (DK 47 B 83)2 That approach stands in opposition to the attitude of "equally deaf and blind, amazed, hordes without judgment", led by "their wandering mind" (plankton noon) and who, because of not seeing the difference between being and non-being, are neither able to examine that what is nor to judge the nature of being (B 6,1-9).The only justified way assumes that what is definitely embodied in the structures of a rational discourse and held fast by Dike (B 8,13-15).The truth announced by the goddess does not entail that some state of being is from now on definite, as cognition does not mean any mystical initiation but some kind of return to oneself --that what is should be recognized by one's own mind.The Way of Truth has yet to be "far from the beaten path of humans".Weizsäcker explains it in his analysis of the poem: Adopting a new idea, a scholar experienced some kind of enlightenment and he has seen something that nobody could have seen before. Nevertheless he is not justified to refer to the enlightenment neither for his own use, nor to convince others.He has to be certain whether he really had seen it and that he could do following the consequences of his new idea and testing them by means of an already acknowledged experiment or a freshly designed one.He is the one obliged to make an attempt of falsifying his discovery.If it is true it will stand up the falsification and as a result it makes clear that what was not properly understood until then.Any discovery is justified like the light brought to darkness -it enables us to see (Weizsäcker 2002, 475). 
Such intuition has come to our mind when we recognize that ektos meaning from afar or beyond something, and not just outside.Then it is no longer about going along the road conceived as linear which is "far from the beaten path of humans" but going along an inner way of human cognitive faculties.The way of absolute truth is an affirmation of complete positiveness and a negation of non-being.In Parmenides' language, the verb "be" means "be something" (einai -einai ti), and similarly, "to think" means "to think something".Being, held fast by Dike and de finite, is the only thing which can be thought and articulated.In that way it is characterized as follows: ἡ μὲν ὅπως ἔστιν τε καὶ ὡς οὐκ ἔστι μὴ εἶναι [...] The One, that it is and that it is not possible for it not to be (B 2,3) οὔτε γὰρ ἂν γνοίης τό γε μὴ ἐὸν -οὐ γὰρ ἀνυστόνοὔτε φράσαις. For neither may you know that which is not (for it is not to be accomplished) nor may you declare it.(B 2,7-8) The main goal then is to judge something esti tauta -that it is true, so in that sense we can assess the veritative directions for the use of einai.Even einai ti, meaning "to be something" (something definite), relates to the Greek meaning of "being" in a certain way of being truly. 3The truth means what can be stated, and that is why noein refers to what Parmenides articulates by phrasthai in B 2,8, but also in the following fragments: That which is there to be spoken and thought of must be.For it is possible for it to be, but not possible for nothing to be.I bid you consider this. I will not permit you to say or to think from what is not.(B 8,7-8) Parmenides appointed the concept of non-being as irrational and illogical (alogon)of that what we can call non-reasonable, unjustified, meaningless and dumb.Also, because it would imply that it is somehow related to non-being, it is not conceivable that being could have appeared or perish.According to fragment B8, wandering mortals admit that being perishes, so they acknowledge non-being -changing places with that which is "whole and unchanging" and shackled by Fate (Moira).What for the mortals has two forms: being and non-being, for Parmenides should be treated as one, so it does not undermine the world ordering (diakosmos).Because mortals think about "light" and "night" (B 9) as the divide of being and non-being, they split the necessary principle of the unity of being together with analogically conceived (in accordance to eiokonta; B 8,60) "all the ordering as it appears", represented by a perfect well-rounded structure which is a complete implementation of peras. The following opposition should amount for an accurate model: The Way of Truth The Way of Appearance/Opinion Announced by the goddess = recognized what was not available to all, subject to falsification. Admitted by the mortals = assumed as certain, but neither verified nor falsified. Presumed = analogous; "the ordering as it appears" Admitted belief = treated as true; an assumption that non-being is speakable. Being and its delimitation interpreted in terms of necessary unity. Distinction and opposition between two forms of things. Only the Way of Truth enacts the postulate of the beginning and end of the road stated in fragment B 5: from anywhere to there.That relationship which is basic for Parmenides, reminds us of the illustrious statement of Heraclitus: ξυνὸν γὰρ ἀρχὴ καὶ πέρας ἐπὶ κύκλου περιφερείας The beginning of a circle is also its end. 
(DK 22 B 103) 4 In that sense, the beginning and the end are the same.It means that the only possible way is that in which peras represents the return, making it impossible to be passed over or questioned.Like B 2 and B 3, fragment B 5 can also be easily and unambiguously assigned to the Way of Truth -the order of, in other words, aletheia, thus excluding its linear representation.Solely the Way of Appearance leads from any point that "mortals posited convinced that it is true" (B 8, 39).While the "anywhere" indeed seems to refer to any point, it has to remain within the closed structure of a circle.People experiencing reliable truth by appointing any principle or law to it, so as to lend it to inner examination or reflection by those people, play the key role here. It seems plausible that the correct interpretation of Parmenides' poem should be taken from the perspective provided by the thesis of fragment B 5, so we could intuitively capture "all things" announced in a presumed whole as referring to the circular, inner Way of Truth. 5It is from this way that the reliable verification of discovery begins and so also begins the reflection upon any human experience. dikan epherein as meaning to follow somebody, should be emphasized here), i.e. to what extent he "was being brought" and to what extent Dike reveals to him what he has been looking for?In other words: how far has the young hero of the poem made the decision about the journey by himself and then chosen the Way and passed along it?What could he himself have seen using his own cognitive faculties?Fragment B 1 reads:
2019-06-01T13:14:16.607Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "970df346dd30ab413f900d1efa5773f21368b91b", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.14746/eip.2016.2.6", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "425180cc5bfe6bb907ba0040bdb309a8dc53f345", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Philosophy" ] }
17998609
pes2o/s2orc
v3-fos-license
Perspectives in Global Helioseismology, and the Road Ahead We review the impact of global helioseismology on key questions concerning the internal structure and dynamics of the Sun, and consider the exciting challenges the field faces as it enters a fourth decade of science exploitation. We do so with an eye on the past, looking at the perspectives global helioseismology offered in its earlier phases, in particular the mid-to-late 1970s and the 1980s. We look at how modern, higher-quality, longer datasets, coupled with new developments in analysis, have altered, refined, and changed some of those perspectives, and opened others that were not previously available for study. We finish by discussing outstanding challenges and questions for the field. Introduction The field of global helioseismology -- the use of accurate and precise observations of the globally coherent modes of oscillation of the Sun to make inference on the internal structure and dynamics of our star -- is about to enter its fourth decade. The observational starting point for global helioseismology was marked in the mid-to-late 1970s by several key papers: first, the observational confirmation by Deubner (1975), and independently by Rhodes, Ulrich and Simon (1977), of the standing-wave nature of the five-minute oscillations observed on the surface of the Sun, which was proposed by Ulrich (1970), and Leibacher and Stein (1971); and then the discovery that the oscillations displayed by the Sun were truly global whole-Sun, core-penetrating, radial-mode pulsations (Claverie et al., 1979). Previous observations of pulsating stars had revealed many objects that were oscillating in one, or at most a few, modes. The rich spectrum of oscillations displayed by the Sun was a different matter entirely. Christensen-Dalsgaard and Gough (1976) pointed out the great potential that a multi-modal spectrum could offer: the information content of the observations could potentially be so great as to allow a reconstruction of the internal structure of the star. More than 30 years on, exquisite observational reconstructions of the internal structure and dynamics of the Sun are in everyday use, thanks to helioseismology. Several thousand modes of oscillation of the Sun have to date been observed, identified, and studied. The oscillations are standing acoustic waves, for which the gradient of pressure (p) is the principal restoring force. The modes are excited stochastically, and damped intrinsically, by the turbulence in the outermost layers of the sub-surface convection zone. The stochastic excitation mechanism limits the amplitudes of the p modes to intrinsically weak values. However, it gives rise to an extremely rich spectrum of modes, the most prominent generally being high-order overtones. Detection of individual gravity modes -- for which buoyancy acts as the principal restoring force -- remains an important goal for the field. The g modes are confined in cavities in the radiative interior, and if observed would provide a much more sensitive probe of conditions in the core than the p modes. The small-amplitude solar oscillations may be described in terms of spherical harmonics Y_ℓ^m(θ, φ), where θ and φ are the co-latitude and longitude respectively: Y_ℓ^m(θ, φ) = (−1)^m c_{ℓm} P_ℓ^m(cos θ) exp(imφ).   (1)
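For readers who want to see what the surface pattern of Eq. (1) looks like, the sketch below evaluates a spherical harmonic on a (co-latitude, longitude) grid with scipy. Note that scipy.special.sph_harm carries its own normalization and the Condon-Shortley phase, which need not match the c_lm convention used in the paper, and it (confusingly) takes the azimuthal angle before the polar angle.

```python
import numpy as np
from scipy.special import sph_harm

# Evaluate Y_l^m(theta, phi) on a grid, purely to illustrate Eq. (1).
# scipy's normalization and phase conventions may differ from the paper's c_lm.
l, m = 2, 1
colat = np.linspace(0.0, np.pi, 181)          # theta: co-latitude
lon   = np.linspace(0.0, 2.0 * np.pi, 361)    # phi: longitude
THETA, PHI = np.meshgrid(colat, lon, indexing='ij')

Y = sph_harm(m, l, PHI, THETA)                # azimuthal angle first, then polar
print(Y.shape, float(np.real(Y).min()), float(np.real(Y).max()))
```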
In Eq. (1), P_ℓ^m is a Legendre function and c_{ℓm} is a normalization constant. The p modes probe different interior volumes, with the radial and other low-degree (low-ℓ) modes probing as deeply as the core. This differential penetration of the modes allows the internal structure and dynamics to be inferred, as a function of position, to high levels of precision not usually encountered in astrophysics. The Sun has not surprisingly been the exemplar for the development of seismic methods for probing stellar interiors. Extension of the observations to other Sun-like stars (asteroseismology) has demonstrated that the Sun-like oscillations are a ubiquitous feature of stars with sub-surface convection zones. In this review our aim is to look at some of the recent advances that global helioseismology has made for studies of various aspects of the internal structure of the Sun. We discuss the observational challenge posed by the detection and identification of individual g modes. We also seek to provide in each section some historical context for discussion of the contemporary results and challenges. We round out the review by looking to the future, in particular the need for continued multi-instrument, multi-network observations of the global modes; and we finish by listing some important questions and challenges for global helioseismology. The Standard Solar Model and the Abundance Problem The first important inference to be made by helioseismology on the internal structure concerned the depth of the solar convection zone. Gough (1977) and (independently) Ulrich and Rhodes (1977) realized that a mismatch between computed p-mode frequencies (Ando and Osaki, 1975) and the observed frequencies (actually the locations in frequency of ridges in the k-ω diagram) could be reconciled by increasing the depth of the convection zone by about 50% compared to typical values used at the time. Investigations soon followed into the compatibility of solar models, having different heavy-element abundances, with the observed p-mode frequencies. An important aim was to see whether models having low initial heavy element abundances, but significant accretion rates, could reconcile the "solar neutrino problem" and at the same time be consistent with the seismic data (Christensen-Dalsgaard, Gough, and Morgan, 1979). The only seismic frequencies that were available when this work was done were those for modes that penetrated the near-surface layers. As noted above, these seismic data favoured a deep convection zone. This meant that the seismic data were also at odds with a low surface abundance of helium: a deeper zone goes hand in hand with a higher helium abundance. Given that the amounts of helium and heavier elements are inexorably tied together, this seemed to suggest the heavy element abundance could not be low as well.
More robust conclusions were possible once frequencies of the core-penetrating, low-ℓ modes became available, from the early 1980s onwards (e.g., see Christensen-Dalsgaard and Gough, 1980; 1981). Once the Kitt Peak data (Duvall and Harvey, 1983) had "filled the gap" between the high-ℓ and low-ℓ data that were already available, inversions for the internal sound speed were possible. These data, and more modern data, have resulted in detailed inference on the solar structure. For instance, the position of the convection zone (e.g., Basu and Antia, 1997) and the convection-zone helium abundance (e.g., Däppen et al., 1991; Basu and Antia, 1995) are both known to high precision. These inferences on the solar structure, and the ability to determine sound-speed and density differences between solar models and the Sun, provide a means to test how well solar models fare against the Sun, and thereby allow tests of the input physics to be made. For example, the inversions of Christensen-Dalsgaard et al. (1985) suggested there were problems with computation of the astrophysical opacities. Significant improvements were made in the OPAL opacity tables (e.g., Iglesias and Rogers, 1991; 1996). Further improvements to the standard models followed with the routine inclusion of diffusion and settling (e.g., Demarque and Guenther, 1998; Cox, Guzik, and Kidman, 1989; Christensen-Dalsgaard, Proffitt, and Thompson, 1993). Tests are also possible for the equation of state (Christensen-Dalsgaard and Däppen, 1992; Basu and Christensen-Dalsgaard, 1997; Elliott and Kosovichev, 1998), and as a result of the improvements that followed most modern solar models are now constructed with the OPAL2001 equation of state (Rogers and Nayfonov, 2002). Early inversions, such as those by Christensen-Dalsgaard et al. (1985), appeared to rule out the possibility of a low helium and low heavy-element abundance. While the issue of the solar helium abundance has been settled because it can be inferred from the very precise p-mode frequencies, the heavy-element abundance question is live again, and it may be said that the field is confronted with a "solar abundance problem". Whereas in the late 1970s and early 1980s the question of which abundances best fitted the helioseismic data was open -- the helioseismic data were new, and the community was still feeling its way on fully exploiting the data -- today we know the answer to that question. The issue is instead very much open because estimates of the photospheric abundances, provided by spectroscopy, have been revised downwards. One of the important inputs into these models is the heavy-element abundance (Z) or, alternatively, the ratio of the heavy element to the hydrogen abundance (Z/X). Solar models that have shown a good agreement with results from helioseismology were constructed with the solar abundance as given by Grevesse and Noels (1993), or more recently by Grevesse and Sauval (1998; henceforth GS98). The GS98 table shows that Z/X = 0.0229, i.e., Z = 0.0181 for the Sun. The situation has changed recently. Asplund et al. (2000, 2004), Allende Prieto, Lambert, and Asplund (2001, 2002) and Asplund et al.
(2005), find that the solar heavy-element abundances need to be reduced drastically, based on what they claim are better calculations with improved models of the solar photosphere. This led Asplund, Grevesse, and Sauval (2005; henceforth AGS05) to compile a table of solar abundances, with Z/X = 0.0166 (i.e., Z = 0.0122). This has resulted in considerable discussion in the community since the sound-speed and density profiles of models constructed with AGS05 do not agree well with the Sun. This disagreement can be seen in Figure 1, where we show the density and sound-speed differences between the Sun and two solar models, one constructed with the GS98 abundances and the other with the AGS05 abundances. The mismatch between the models with low Z and the Sun is most striking in the outer parts of the radiative interior, a result of the fact that the low-Z models have a much shallower convection zone than the Sun. There are, however, differences in other regions too: all standard models with AGS05 abundances have low helium abundance in the convection zone (e.g., Montalbán et al., 2004; Guzik, Watson, and Cox, 2005; Bahcall, Basu, and Serenelli, 2005); the seismic signatures of the ionization zones do not match observations (Lin, Antia, and Basu, 2007); and the helioseismic signatures from the core do not match observations either (Basu et al., 2007). There have been several attempts to reconcile low-Z solar models with helioseismic data. Given that the largest discrepancy is at the base of the convection zone, the first attempts involved modifying the input opacities. It was found that large changes in opacity, in the range 11% to 21%, would be needed at temperatures relevant to the base of the convection zone to resolve the problem (Montalbán et al., 2004; Basu and Antia, 2004; Bahcall et al., 2005). However, later re-calculation of the opacities by the OP group (Badnell et al., 2005) showed an opacity increase of only 2%. Other attempts included increasing the diffusion coefficient (e.g., Montalbán et al., 2004; Basu and Antia, 2004; Guzik et al., 2005). A large change was needed to get the correct position of the convection-zone base, which resulted in an extremely low convection-zone helium abundance. Attempts were also made to increase the metallicity of the models by increasing the abundance of uncertain elements such as neon, which does not have any photospheric lines (Antia and Basu, 2005; Bahcall et al., 2005b). However, it is not clear if such an increase is justified in the case of the Sun (Schmelz et al., 2005; Young 2005). Other attempts involve ad hoc prescriptions of mixing at the tachocline (Turck-Chièze et al., 2004; Montalbán et al., 2006) or mixing by gravity waves (Young and Arnett, 2005). Late accretion of low-Z material by the convection zone has been tried as well (Guzik et al., 2005; Castro, Vauclair, and Richard, 2007). None of the models match the Sun unless two or more modifications are used (Montalbán et al., 2004), and even in those cases the changes in physics have to be fine-tuned carefully.
Given that attempts to adjust physical inputs in an ad hoc manner have not resulted in low-Z solar models that agree with the Sun, Antia and Basu (2006) tried to derive Z for the Sun using signatures of the heavy-element ionization zones.The method was similar to that used by Basu and Antia (1995) to determine the He abundance of the Sun.They found a solar Z of 0.0172 ± 0.002, a value close to the GS98 value and much larger than the AGS05 value.The errors in the result are not affected by errors in opacity.While the Antia and Basu (2006) result was obtained from helioseismic signatures in the upper convection zone, using data from the solar core Chaplin et al. (2007a) also concluded that Z in the Sun has to be high.They found that the mean molecular weight averaged over the inner 20% by radius of the Sun is in the range 0.7209 to 0.7231 and that the corresponding surface Z is in the range 0.0187 to 0.0239. The obvious discrepancy between the low-Z models and the Sun creates a problem that has not yet been resolved.If the new, lower abundances are correct then the obvious culprit is missing or incorrect physics in the solar models.If however, the lower abundances are incorrect, then we can be confident that the input physics in our models is within errors.This supposition is supported by the seismically determined value of Z.The seismic Z determinations have been achieved using techniques that depend on different inputs, and despite the differences in techniques and dependencies on different inputs, all seismic estimates of Z/X are consistent with the higher GS98 abundances, and they agree with each other as well. If the convection-zone abundances of the Sun are indeed consistent with the low abundances compiled by AGS05, then almost all the input physics that goes into construction of stellar models must be much more uncertain than has been assumed to be the case.It is also possible that some fundamental process is missing in the theory of stellar structure and evolution, but it is difficult to speculate what that could be.On the other hand, if the GS98 abundances are correct then the currently known input physics is consistent with seismic data, and the AGS05 abundances need to be revised upwards.It is easier to list reasons why the new abundances could be incorrect.These include the fact that the 3D convection simulation used in the atmospheric models may not have the correct thermal structure (Ayres, Plymate, and Keller, 2006).There may be some effects from having a grid of finite resolution: Scott et al. (2006) found significant changes in the line bisectors high in the atmosphere as they changed their resolution.In addition there could be problems with the non-LTE effects used in the line-formation calculations and atomic physics. In conclusion, disagreements between helioseismic estimates and recent spectroscopic estimates of the solar heavy-element abundance call for continuation of the careful examination of solar atmospheric models, and also models of the solar interior. 
The Internal Rotation and Dynamics The time-averaged internal rotation profile revealed by global helioseismology has in many respects been something of a surprise (see Thompson et al., 2003 for an excellent review on the internal rotation).The known pattern of differential rotation at the surface was observed to penetrate the interior, down to the base of the convection zone, but not in the manner expected.The seismic data then revealed the solar tachocline, a narrow region in the stably stratified layer just beneath the base of the convective envelope, which mediates the transition from differential rotation above to a solid-body-like profile in the radiative zone below.This solid-body-like profile, with its "slow" (i.e., surface-like) rotation rate, was of course the other surprise. The paradigm for the spin-down of Sun-like stars involves the action of a dynamo in the outer envelope.Magnetic braking slows the rate of rotation in the envelope.The question then arises as to the degree of coupling between the radiative interior and the envelope.Coupling allows the envelope to draw on the large reservoir of angular momentum residing in the core.This has two consequences.First, it will delay the rate at which the envelope is spun down.And second, it will bleed momentum from the core, thereby altering the rotation in the deep interior.The extent to which the core and envelope are coupled therefore plays an important rôle in the dynamic evolution of a star.In the pre-helioseismic era, the conventional wisdom was that the core and deep radiative interior of the Sun would be expected to rotate much more rapidly than the layers above. Further to answering questions posed by stellar evolution theory, models and conjectures on the rotation were also of considerable interest for attempts to constrain one of the well-known tests of Einstein's general theory of relativity, the advance in the perihelion of Mercury test.Observations of the surface oblateness of the Sun, by Dicke and Goldenberg (1967), had suggested that the solar gravitational quadrupole moment was large enough to give a significant contribution to the perihelion advance.This created something of a problem, for it reduced the fraction of the contribution that could be set aside as being relativistic in origin to the point where what was left over conflicted with the prediction for the general theory of relativity.Interest in competing theories of relativity was reinvigorated (e.g., the theory of Brans and Dicke (1961), which could be "tuned" into agreement with the oblateness observations). The result of Dicke and Goldenberg had important implications for the Sun's interior structure, for it suggested a rapidly spinning core might be needed to account for the apparently large surface oblateness.The concept of rapid internal rotation also happened to be in vogue at the time for another reason: it offered a possible way to solve the solar neutrino problem.In the presence of rapid internal rotation thermal pressure would no longer be required to carry the full burden of support to maintain the star in equilibrium, meaning temperatures in the core could be lower than previously thought.A nice snapshot is provided by three papers: Ulrich (1969); Demarque, Mengel, and Sweigart (1973);and Roxburgh (1974). 
What of the outer layers?Pre-helioseismic models of rotation in the convection zone gave a pattern in which the rotation was constant on cylinders wrapped around the rotation axis (e.g., Glatzmaier, 1985;Gilman and Miller, 1986).Small-diameter cylinders intersect the solar surface at high latitudes, while larger cylinders do so at low latitudes, in the vicinity of the solar equator.In order to match to the surface differential rotation, material lying on the surface of a small cylinder must rotate less rapidly than plasma on a larger cylinder.An important consequence of the rotation models was that they therefore predicted an increase of rotation rate with increasing radius. Observed Rotation: the Deep Interior Helioseismic inference on the rotation of the deep interior demanded estimates of the rotational frequency splittings of the core-penetrating low-ℓ p modes.The first estimates of the rotational frequency splittings (Claverie et al., 1981) suggested the core might indeed be spinning more rapidly than the outer layers, but by nowhere near enough to give the surface oblateness claimed by Dicke and Goldenberg.The first inversion for the internal rotation (Duvall et al., 1984) showed that the outer parts of the radiative interior were actually rotating at a rate not dissimilar to the surface, a result that all but ruled out the possibility of a significant gravitational quadrupole moment. The intervening years have seen a steady, downward revision of the magnitudes of the quoted estimates of the low-ℓ rotational frequency splittings (e.g., see discussion in Chaplin, 2004).By the mid 1990s this downward trend had flattened out.Alas, the trend was not solar in origin.It was the result of having longer, higher-quality datasets available, coupled to a better understanding of the subtleties and pitfalls involved in extracting the frequency splittings (e.g., Appourchaux et al., 2000a;Chaplin et al., 2001a).In short: estimates from short datasets tend to overestimate the true splittings because there is insufficient resolution in frequency to properly resolve individual components (which is why the initial estimates of Claverie et al., were high).This is surely one of the best examples helioseismology can offer on how accumulation of data from long-term observations (coupled with a better feel for the analysis) can give significant improvement on accuracy of inference on the internal structure. Inversions made with the modern, high-quality data (e.g., Eff-Darwich, Korzennik, and Jiménez-Reyes, 2002;Couvidat et al., 2003;García et al., 2004) give wellconstrained estimates on the rotation down to r/R ⊙ ≃ 0.25, where the rotation rate is observed to be similar to that in the mid-latitude near-surface layers.We comment below in Section 3.3 on how the solid-body-like rotation might be enforced in the radiative interior. 
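As a rough guide to the size of the low-ℓ rotational splittings discussed above: for slow, rigid rotation the m-components of a mode are separated by approximately the rotation frequency, once the Ledoux constant and latitudinal differential rotation are neglected. The sketch below converts an assumed, surface-like rotation period into that splitting; the period is an illustrative value, not a result quoted in the paper.

```python
# Order-of-magnitude rotational frequency splitting per unit |m| for slow,
# rigid rotation: delta_nu ~ 1 / P_rot (Ledoux constant neglected).
SIDEREAL_PERIOD_DAYS = 25.4                     # assumed surface-like period
period_s = SIDEREAL_PERIOD_DAYS * 86400.0
splitting_nhz = 1.0e9 / period_s
print(f"expected splitting per |m| ~ {splitting_nhz:.0f} nHz (~0.46 microHz)")
# A few hundred nHz -- the scale against which the early, overestimated
# low-l splitting measurements were gradually revised downwards.
```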
The conclusion that the quadrupole moment is of insufficient size to give a significant contribution to the perihelion advance of Mercury has been upheld by the modern observations of slow rotation in the interior (e.g., Pijpers, 1998; Roxburgh, 2001). Contemporary measurements of the solar shape -- made, for example, with MDI data (Emilio et al., 2007) -- show a minute oblateness. The possibility of rapid internal rotation providing a solution to the solar neutrino problem was therefore moot, not only because of the slow rotation, but also because agreement between the sound-speed profiles of solar models and the Sun showed the problem was not one in solar physics (e.g., Bahcall et al., 1997), a result confirmed by observations made by the Sudbury Neutrino Observatory (e.g., Ahmad et al., 2001; 2002; Ahmed et al., 2004). But what of the rotation rate in the core itself? This remains very uncertain. Use of the p modes presents several difficulties: only a small number of the modes penetrate the core, and those that do have only a modest sensitivity to the rotation. It will be through the measurement of the rotational frequency splittings of g modes that a clear picture of the rotation in the core will properly emerge. Mathur et al. (2007) have demonstrated that by augmenting the p-mode splittings with splittings of a small number of g modes, it should be possible to obtain precise, and reasonably accurate, estimates of the rotation profile throughout a substantial fraction of the core. This is surely sufficient reason alone to redouble our efforts to detect individual g modes. We discuss the current status of the observational claims in Section 5 below. Observed Rotation: the Convection Zone and Tachocline Initial glimpses of the rotation in the near-surface layers were provided by Rhodes, Ulrich, and Deubner (1979). However, it was Brown (1985) who presented the first evidence that demonstrated the surface differential rotation penetrated the convection zone. Further studies, using inversion of the rotational frequency splittings, followed (e.g., Brown and Morrow, 1987; Kosovichev, 1988; Brown et al., 1989; Dziembowski, Goode, and Libbrecht, 1989; Rhodes et al., 1990; Thompson, 1990). By the end of the 1980s, the rotation inversions were able to show that the differential rotation underwent a marked transition at the base of the convection zone to something resembling a solid-body-like profile below (Christensen-Dalsgaard and Schou, 1988). The tachocline (Spiegel and Zahn, 1992) had been discovered. Analysis with the more extensive modern data (e.g., Antia and Basu, 2003) indicates that the characteristic thickness of the tachocline is only a few per cent of the solar radius. The tachocline is oblate, and slightly thicker at the solar equator. The steep gradient in rotation present across the tachocline -- much stronger than anything present elsewhere in the outer layers -- means it is of considerable interest to the dynamo modelers, and is an attractive site in which to locate stretching, and winding-up, of magnetic field (poloidal to toroidal) by the Ω effect (e.g., see Tobias, 2002).
What of the rotation in the convection zone itself? The pattern revealed by analysis of the modern data (Figure 2) does not match the rotation-on-cylinders prediction. Rather, the rotation is approximately constant on lines inclined some 27° to the rotation axis (e.g., Gilman and Howe, 2003). Furthermore, the rotation rate decreases with radius in the low- to mid-latitude layers very close to the surface (e.g., Corbard and Thompson, 2002). While there is a general consensus that the differential rotation in the convection zone is driven by thermal perturbations, the challenge remains to understand in detail the mean observed profile (e.g., see Rempel, 2005; Miesch, Brun, and Toomre, 2006). Some of the most striking results of helioseismology have come from the detection of small, but significant, temporal variations of the rotation rate in the convection zone, which carry signatures of the solar activity cycle. We shall discuss these variations in Section 4 below.

How is the Tachocline Confined, and Solid-Body Rotation Enforced?

The existence of the tachocline has raised several fundamental questions regarding the dynamic evolution of the Sun. This thin layer matches the transition in rotational behaviour above and below, and must mediate or act as the intermediary for the transfer of angular momentum from the immense reservoir in the core to the outer envelope and beyond, as the star evolves. In order for the rotation to change its character, something must be acting in the radiative interior to mix angular momentum in latitude, so that the differential rotation from the convective zone above is smoothed out, or removed, below. The mechanism must be anisotropic, in the sense that it must be much less efficient at mixing angular momentum in the radial as opposed to the latitudinal direction in order to explain the narrow width of the tachocline. Mixing by anisotropic turbulence is one possibility. Another possibility is the effect of a fossil magnetic field threading the radiative interior. Gough and McIntyre (1998) considered circulations that penetrate from the convection zone into the tachocline, which are then diverted by a weak magnetic field further down. The magnetic field acts to prevent the tachocline spreading out in radius, in effect forming a firm lower boundary. The pattern of rotation at low and high latitudes acts to keep field "bottled up" in the radiative interior. This magnetic field need only have a strength that is a tiny fraction of that at the surface in order to give the required effect. What is more, the magnetic field is then also a prime candidate for enforcing the solid-body-like rotation which is present in the radiative zone (see also Eggenberger, Maeder, and Meynet, 2005). Brun and Zahn (2006) have also recently looked at the problem of confinement of the tachocline by a magnetic field. However, their results imply that a fossil field in the radiative interior cannot prevent the radial spread of the tachocline, and that, furthermore, it also cannot prevent penetration of the differential rotation from the convection zone into the radiative interior.
Another mechanism that has received attention as a possible means to enforce solid-body rotation in the deep interior is angular momentum transport by internal gravity (buoyancy) waves, which are excited at the base of the convection zone.It has recently been demonstrated (Charbonnel and Talon, 2005) that such models can in principle redistribute angular momentum efficiently over time from the core to the outer envelope.In the presence of shear turbulence, the gravity waves also give rise to shear-layer oscillations that resemble the "quasi-biennial" oscillations observed in the Earth's atmosphere (Talon, 2006). The Changing Sun A rich, and diverse, body of observational data is now available on temporal variations of the properties of the global p modes.The signatures of these variations are correlated strongly with the well-known 11-year cycle of surface activity, and as such the accepted paradigm is that the "seismic" solar cycle is associated with changes taking place in the outer layers, not the deep radiative interior, of the Sun. Evolutionary changes to the equilibrium structure of the Sun will of course also leave their imprint on the p modes, by virtue of a very slow adjustment of the interior structure as the star ages sedately on the main sequence.The frequencies of the low-ℓ p modes are predicted by the standard solar models to decrease by ≈ 1 µHz every 6 × 10 6 years due to the evolutionary effects.If we alter the timescale to something more practical from an observer's point of view -say ten years -the evolutionary change is reduced to only ≈ 10 −6 µHz.Measurement of such tiny frequency changes, against the backdrop of variations due to the solar cycle, and instrumental noise properties, is beyond the current scope of the data.The observed variations of the p modes, due to changes in the outer layers, are some five orders of magnitude larger than the predicted evolutionary variations. The search for temporal variations of the properties of the p modes began in the early 1980s, following accumulation of several years of global seismic data.The first positive result was uncovered by Woodard and Noyes (1985), who found evidence in observations by ACRIM for a systematic decrease of the frequencies of low-ℓ p modes between 1980 and 1984.The first year coincided with high levels of global surface activity, while during the latter period activity levels were much lower.The modes appeared to be responding to the Sun's 11-year cycle of magnetic activity.The uncovered shifts were, on average, about 0.4 µHz.This meant that the frequencies of the most prominent modes had decreased by roughly 1 part in 10 000 between the activity maximum and minimum of the cycle.By the late 1980s, an in-depth study of frequency variations of global p modes, observed in the Big Bear data, had demonstrated that the agent of change was confined to the outer layers of the interior (Libbrecht and Woodard, 1990). 
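The relative sizes quoted above are easy to verify with a back-of-envelope calculation; the only added assumption is a typical frequency of about 3000 µHz for the most prominent five-minute modes, which is not stated explicitly in the text.

```python
# Back-of-envelope check of the frequency-change scales quoted in the text.
evolutionary_rate = 1.0 / 6e6        # ~1 microHz per 6 million years
per_decade = evolutionary_rate * 10  # evolutionary change over a 10-year baseline
print(f"evolutionary shift per decade: {per_decade:.1e} microHz")        # ~1.7e-06

cycle_shift = 0.4                    # microHz, mean low-l shift (Woodard and Noyes)
nu_typical = 3000.0                  # microHz; assumed frequency of prominent modes
print(f"fractional solar-cycle shift:  {cycle_shift / nu_typical:.1e}")  # ~1 part in 10^4
print(f"cycle / evolutionary ratio:    {cycle_shift / per_decade:.0e}")  # ~5 orders of magnitude
```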
The passage of time, and accumulation of data from the new networks and instruments, has allowed us to study the frequency variations in unprecedented detail, and to reveal signatures of subtle, structural change in the sub-surface layers. It has led to the discovery of solar-cycle variations in the mode parameters associated with the excitation and damping (e.g., power, damping rate, and peak asymmetry). Patterns of flow that penetrate a substantial fraction of the convection zone have been uncovered, as well as possibly (but controversially) signatures of changes in the rotation rate of the layers that straddle the tachocline. Let us say a little more about these observations, and what they might mean for our understanding of the solar variability.

Structural Changes

The modern seismic data give unprecedented precision on measurements of the p-mode frequency shifts. From observations of the medium-ℓ frequency shifts - for example in GONG and MDI data - it is possible to produce surface maps (Figure 3) showing the strength of the solar-cycle shifts as a function of latitude and time (Howe, Komm, and Hill, 2002). These maps bear a striking resemblance to the butterfly diagrams that show variations in the strength of the surface magnetic field over time. The implication is that the frequency shift of a given mode depends on the strength of that component of the surface magnetic field that has the same spherical harmonic projection on the surface. This dependence is also observed in studies of the frequency shifts of the less numerous low-ℓ modes (see Chaplin, 2004, and references therein). The precision in the medium-ℓ data is such that significant frequency changes can now be tracked on timescales as short as nine days (see Tripathy et al., 2007). Meanwhile current results on frequency shifts of high-ℓ modes (ℓ > 100) - extracted using global helioseismology techniques (Rabello-Soares, Korzennik, and Schou, 2007) - show trends that match to the medium-ℓ and low-ℓ shifts (e.g., Chaplin et al., 2001b). In particular, when the frequency shifts are multiplied by the mode inertia, and then normalized by the inertia of a radial mode of the same frequency, the modified shifts are found to be a function of frequency alone. The high-ℓ modes provide important information since they are confined in the layers close to the surface where the physical changes responsible for the frequency shifts are also located.

Detailed comparison of the low-ℓ frequency shifts with changes in various disc-averaged proxies of global surface activity provides further tangible input to the solar cycle studies. This is because different proxies show differing sensitivity to various components of the surface activity. Chaplin et al. (2007b) recently compared frequency changes in 30 years of BiSON data with variations in six well-known activity proxies. Interestingly, they found that only activity proxies having good sensitivity to the effects of weak-component magnetic flux - which is more widely distributed in latitude than the strong flux in the active regions - were able to follow the frequency shifts consistently over the three cycles.
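As an aside on the inertia scaling mentioned above, the normalization is straightforward to apply in practice. The sketch below assumes one has tabulated mode inertias and a set of radial (ℓ = 0) modes to interpolate against; the numerical values are placeholders chosen only to show the call signature, not solar measurements.

```python
import numpy as np

def inertia_scaled_shifts(nu, dnu, inertia, nu_l0, inertia_l0):
    """Scale raw frequency shifts dnu (measured at frequencies nu, for modes
    with inertias `inertia`) by I_nl / I_0(nu), where I_0 is the radial-mode
    inertia interpolated to the same frequency.  After this scaling the
    shifts are expected to collapse onto a single function of frequency."""
    i0_at_nu = np.interp(nu, nu_l0, inertia_l0)   # radial-mode inertia at each nu
    return dnu * inertia / i0_at_nu

# Placeholder numbers, purely to show how the function is called
nu      = np.array([2000.0, 2500.0, 3000.0])      # microHz
dnu     = np.array([0.10, 0.20, 0.35])            # microHz, raw shifts
inertia = np.array([3.1e-9, 2.0e-9, 1.4e-9])      # arbitrary units
nu_l0      = np.array([1500.0, 2000.0, 2500.0, 3000.0, 3500.0])
inertia_l0 = np.array([5.0e-9, 3.0e-9, 1.9e-9, 1.3e-9, 1.0e-9])

print(inertia_scaled_shifts(nu, dnu, inertia, nu_l0, inertia_l0))
```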
What is the physical mechanism behind the frequency shifts? Broadly speaking, the magnetic fields can affect the modes in two ways. They can do so directly, by the action of the Lorentz force on the plasma. This provides an additional restoring force, the result being an increase of frequency, and the appearance of new modes. Magnetic fields can also influence matters indirectly, by affecting the physical properties in the mode cavities and, as a result, the propagation of the acoustic waves within them. This indirect effect can act both ways, to either increase or decrease the frequencies. The exact nature of the physical changes is still somewhat controversial, although Dziembowski and Goode (2005) have recently made important headway on the problem. Their analysis of MDI p-mode frequency shifts suggests it is indirect effects that dominate, in particular changes to the near-surface stratification resulting from the suppression of convection by the magnetic field. They suggest that the magnetic fields are too weak in the near-surface layers where the p-mode shifts originate for the direct effect to contribute significantly.

It is also interesting to note that Dziembowski and Goode found small, but significant, departures for the lower-frequency p modes from a simple scaling of the frequency shifts with the inverse mode inertia. The nature of these small departures suggests that there is a contribution to the low-frequency shifts from deeper layers, due to the direct effect of the magnetic fields. Similar departures in behaviour had also been seen and noted by Chaplin et al. (2001b).

Variations of global f-mode frequencies reveal information on changes in a thin layer which extends some 15 Mm below the base of the photosphere. The most recent observations (e.g., Lefebvre and Kosovichev, 2005) suggest that as activity rises there is an expansion between r/R⊙ ∼ 0.97 and 0.99, and possibly a contraction above r/R⊙ ∼ 0.99. It is, however, not yet possible to reconcile the observations with theoretical predictions of the variations (see Lefebvre, Kosovichev, and Rozelot, 2007; also Sofia et al., 2005).

Variations very close to the surface, in the He II ionization zone at a depth ≈ 0.98 r/R⊙, have also been revealed by analysis of medium-ℓ p modes. From appropriate combinations of mode frequencies, Basu and Mandel (2004) uncovered apparent solar-cycle variations in the amplitude of the depression in the adiabatic index, Γ₁, in the He II zone. These variations presumably reflect the impact of the changing activity on the equation of state of the gas in the layer. These results have since been confirmed, using a different method to extract the acoustic signatures of the He II zone, and with only low-ℓ frequencies (Verner, Chaplin, and Elsworth, 2006).

The results discussed above pertain to changes taking place very close to the surface. What of possible changes deeper down? Chou and Serebryanskiy (2005), and Serebryanskiy and Chou (2005), have found intriguing evidence of signatures in the p-mode frequency shifts that may reflect changes taking place near the base of the convection zone. The authors suggest that the signatures they uncover are consistent with a fractional perturbation to the sound speed, at depth 0.65 to 0.67 r/R⊙, of size a few parts in 10⁵ (assuming the perturbation may be described as a Gaussian with a FWHM of 0.05 r/R⊙ in radius).
Surface maps, such as the frequency-shift map shown in Figure 3, may also be made for variations observed in the mode powers and damping rates (Komm, Howe, and Hill, 2002), which, like the frequency maps, show a close spatial and temporal correspondence with the evolution of active-region field (Figure 4). Meanwhile, peak asymmetry is the most recent addition to the list of parameters that show solar-cycle variations (Jiménez-Reyes et al., 2007). Careful measurement of variations in the powers, damping rates and peak asymmetries - all parameters associated with the excitation and damping - allows studies to be made of the impact of the solar cycle on the convection properties in the near-surface layers.

Torsional Oscillations

One of the most striking results from helioseismology has been the discovery that the so-called "torsional oscillations" - which modulate the observed pattern of surface differential rotation - penetrate a substantial fraction of the convection zone. The surface torsional oscillations were first observed by Howard and La Bonte (1980). The observations showed bands of plasma at particular latitudes rotating either slightly faster or slower (by a few per cent) than the level expected from the smooth, underlying differential rotation. What is more, the bands shifted position as the solar cycle progressed, tracking toward the equator on a timescale that suggested they carried the signature of the effects of the cycle.

The modern, high-quality seismic observations (Figure 5) reveal that these bands of flow are present within the convection zone (Howe et al., 2000a; Antia and Basu, 2000; 2001; Vorontsov et al., 2002). Furthermore, an additional strong, poleward branch has been revealed, which appears to penetrate the entire convection zone. The amplitudes and phases of the signals show a systematic variation with position in the convection zone (Howe et al., 2005). The observed behaviour of the flows is strongly suggestive of being a signature of magnetic effects from the solar cycle. An obvious candidate is the back-reaction of the magnetic field on the solar plasma, via the Lorentz force. Lorentz force feedback was proposed originally by Schüssler (1981) and Yoshimura (1981) as a means to explain the surface torsional oscillations. Later models looked at the effect of the Lorentz force from small-scale magnetic field on the turbulent Reynolds stresses (e.g., Küker, Rüdiger, and Pipin, 1996). A thermal mechanism was also proposed by Spruit (2003) to explain the low-latitude branch, having its origin in small gradients of temperature caused by the magnetic field. Incorporation of the Lorentz force feedback into dynamo models (which are then termed "dynamic") can reproduce observed features of the torsional oscillations (e.g., Covas, Moss, and Tavakol, 2005). Rempel (2007) has recently forced torsional oscillations in a mean-field differential rotation model, which includes the effect of the Lorentz force feedback in meridional planes. He found that while the poleward propagating high-latitude branch could be explained by Lorentz force feedback, or thermal driving, the low-latitude branch is most likely not due to the Lorentz feedback, and probably has a thermal origin. Howe et al.
(2006) conducted experiments using artificial data containing migrating flows like those seen in the real observations.Analysis of these artificial data suggests inferences made on the depth of penetration, and the amplitude and phase, of the solar torsional oscillations are likely to be real, and not artifacts of the analysis.With the collection of more data, coupled to a better understanding of the sub-surface torsional oscillations, it should be possible to constrain the perturbations driving the flows (e.g., Lanza, 2007).This might lead to the possibility of obtaining indirect measures of the strength of the magnetic field with depth in the convection zone (direct measurement of the field is not yet possible). The 1.3-yr Periodicities Near the Tachocline Claims that the rotation rate in the layers just above and below the tachocline varies on a timescale of ≈ 1.3 years (Howe et al., 2000b; see Figure 6) remain controversial.When they were first uncovered by the analysis, the changes appeared to be most prominent in the low-latitude regions just above the base of the convection zone.At the same time there were suggestions of variations in anti-phase some ≈60 000 km deeper down, in the outer parts of the radiative zone.The variations uncovered by the analysis then all but disappeared when mid-latitude regions were tested, while a periodic-looking signal of period closer to 1 year was found when attention was focused at latitude 60 o . The result is controversial principally for two reasons.First, independent analyses have failed to reveal the same quasi-periodic variation of the rotation rate (Antia and Basu, 2001;2004;Corbard et al., 2001).Second, the quasi-periodic signal appears to have all but disappeared in more recent data, collected from 2004 onwards, having also been absent over the period from ≈ 2000 to ≈ 2002 (Howe, 2006).The intermittancy need not necessarily imply the phenomenon is an artifact, and the claims continue to draw considerable interest.The fact that the signals uncovered above and below the tachocline are in anti-phase -meaning as one region speeds up the other slows down -suggests that, if real, they may be signatures of some form of angular momentum exchange between the interior and envelope, mediated by the tachocline.Intriguingly, there are also reports in the literature of quasi 1.3-year periodicities in observations of sunspots and geomagnetic indices (see Howe, 2006, and references therein). The Observer's Holy Grail: G Modes and Very Low-Frequency P Modes As the title of this section suggests, the drive to detect the gravity (g) modes (and also the very low-frequency p modes) has assumed an added significance as time has passed.Detection of the g modes presents a major observational challenge, because the amplitudes of the modes are predicted to be extremely weak at the photospheric level.Early claims of detections of low-ℓ g modes (e.g., Delache and Scherrer, 1983;Fröhlich and Delache, 1984) were to prove unfounded, and it has become increasingly apparent that unambiguous detection of the modes will demand very long datasets, excellent instrumental noise performance at low frequencies, and ingenuity in both the observations (e.g., possibly from new approaches to the observations) and the analysis. 
Upper limits on the amplitudes of individual g modes -as set from analysis of the long, high-quality modern datasets (e.g., Appourchaux et al., 2000b;Gabriel et al., 2002;Wachter et al., 2003;Turck-Chièze et al., 2004) -are far superior (i.e., much lower) than those set in the early analyses (where the datasets were much shorter, and usually the quality of the data was inferior, to contemporary levels).For some methods of analysis the limits are approaching the level of 1 mm s −1 per mode.While predictions of the g-mode amplitudes, based on the assumption of stochastic excitation in the convection zone (e.g., Gough, 1985;Andersen, 1996;Kumar, Quataert, and Bahcall, 1996), may be rather uncertain -predictions for the same range in frequency can differ by more than an order of magnitude -it is worth noting that some of the predictions are not too dissimilar from the current best observational upper limits (e.g., see Elsworth et al., 2006). Early attempts to detect low-ℓ g modes concentrated largely on the very lowfrequency asymptotic regime (e.g., see Provost and Berthomieu, 1986), where the near constant spacing in period offered potential advantages for detection algorithms.However, amplitudes are expected to be appreciably larger in the higher-frequency part of the g-mode spectrum.This is why the lack of any convincing detections lower down shifted the focus of attempts in the late 1990s toward the higher-frequency (non-asymptotic) range (e.g., Appourchaux, 2003), where modes with mixed g and p characteristics are also expected.Searches were made, unsuccessfully, for signatures of individual g modes (although a few potential candidates were claimed by Turck-Chièze et al., 2004).Now, things have gone full circle.Searches in the very low-frequency asymptotic regime are back in fashion.As noted above, analysis methods can take advantage of the near-regular spacing in period, and increase the effective signal-to-noise ratio by looking for the cumulative effect of several g-mode overtones.Searches by García et al. (2007) for the cumulative signature of ℓ = 1 g modes in almost 10 years of GOLF data have yielded what may be regarded as the first serious claim of a detection.The analysis technique has the advantage that it is both elegant, and simple.The low-frequency part of the frequency power spectrum is first pre-whitened.The prewhitened part is then presented in the form power spectral density against period, and its periodogram computed.The cumulative signature of the ℓ = 1 overtones is manifested as a peak in the periodogram, at a period corresponding to the period spacing between the overtones. García et al. claim a statistically significant peak in the GOLF periodogram, which lies at roughly the period predicted by the standard solar models.Is this really the signature of ℓ = 1 g modes?Experiments performed by the authors with artificial data suggest the individual mode peaks must have widths in the frequency power spectrum that are commensurate with damping times of several months.Excitation of gravity waves at the base of the convection zone might lead to the comparatively heavy damping implied by these values (Dintrans et al., 2005), although the theoretical work is not yet sufficiently developed to enable accurate damping-rate predictions to be made for the global low-ℓ g modes. 
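A minimal sketch of this style of search is given below. It follows the outline above (pre-whiten the low-frequency spectrum, re-express it against period, and look for a peak in the periodogram of that series near the near-constant ℓ = 1 period spacing, of order 20-25 minutes in standard models), but the frequency window, background estimate and smoothing length are illustrative assumptions rather than the published GOLF pipeline.

```python
import numpy as np

def period_spacing_search(freq_uHz, psd, f_lo=25.0, f_hi=140.0, smooth=101):
    """Look for a regular period spacing in the low-frequency power spectrum.

    freq_uHz, psd : frequency axis (microHz) and power spectral density.
    Returns a candidate spacing (seconds), the spacing axis, and the
    periodogram of the pre-whitened, period-resampled spectrum.
    """
    sel = (freq_uHz >= f_lo) & (freq_uHz <= f_hi)
    f, p = freq_uHz[sel], psd[sel]

    # Pre-whiten: divide by a running-median estimate of the noise background
    background = np.array([np.median(p[max(0, i - smooth // 2): i + smooth // 2 + 1])
                           for i in range(len(p))])
    white = p / background

    # Re-express as a function of period and resample onto a uniform period grid
    period = 1.0 / (f * 1e-6)                      # seconds
    order = np.argsort(period)
    grid = np.linspace(period[order][0], period[order][-1], 4096)
    white_p = np.interp(grid, period[order], white[order])

    # A comb of g modes equally spaced in period shows up as a peak in the
    # periodogram of this series, at the period spacing itself
    spec = np.abs(np.fft.rfft(white_p - white_p.mean())) ** 2
    spacing_axis = np.fft.rfftfreq(len(grid), d=grid[1] - grid[0])
    with np.errstate(divide="ignore"):
        spacings = np.where(spacing_axis > 0, 1.0 / spacing_axis, np.inf)

    peak = np.argmax(spec[1:]) + 1                 # skip the zero-frequency bin
    return spacings[peak], spacings, spec

# Usage (with real data): spacing, spacings, spec = period_spacing_search(freq, psd)
```

Whether any peak found this way is significant is, of course, a purely statistical question, which is taken up next.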
The validity of conclusions drawn on all searches for low-frequency modes rest on a robust and proper use of statistics.Some methods assess the likelihood that prominent features are part of a broad-band noise source, the so-called H0 hypothesis; while others test against the likelihood that signal (e.g., a sine wave or a damped wave) is buried in broad-band noise, the so-called H1 hypothesis (see Appourchaux, 2003, andChaplin et al., 2004, for brief discussions on low-frequency detection methods).Some of the tests bring more prior information to bear than others: for example, tests are often predicated on the assumption that the g modes are very lightly damped, like the low-frequency p modes, meaning individual components will appear as spikes in the frequency power spectrum.It is also important to recognise that quoted likelihoods depend on the question being asked of the data.For example, does one flag a prominent spike as a candidate mode if it has less than a 1% likelihood of appearing by chance in a range of 10 µHz of the spectrum?Or does one perhaps demand that it has less than a 1% chance of appearing in a range of 100 µHz of the spectrum?Quoting a 1% likelihood in these two cases means two different things (the second criterion being a more demanding limit).And should one fold in the fact that the number of bins in a fixed range of frequency increases as more data are collected on the Sun? The use of prior information, and the need to fix a priori choices for hypothesis testing (which has an element of subjectivity), suggests that a Bayesian (and not a frequentist) approach is the best route to assessing the likelihoods associated with searches for the g modes.This is the approach currently advocated by the Phoebus collaboration, which is leading the way on development of analysis techniques in this area (Appourchaux, 2008). Driving and Damping the Modes The first calculations of the excitation rates of the p modes suggested the modes were unstable (e.g., see the discussion in Christensen-Dalsgaard, 2004).We now know they are in fact stable, being stochastically excited and intrinsically damped by the convection (see Houdek, 2006, for a recent review).Theoretical modelling of global excitation and damping based on an analytical (or semi-analytical) approach (e.g., Houdek et al., 1999;Dupret et al., 2004;Samadi et al., 2005) can give predictions of two independent quantities: the damping rates (η), and acoustic powers (P ) (the latter corresponding to the rates at which energy is pumped into, and then dissipated by, the modes).It is therefore incumbent on the observers to provide as accurate and precise measures of these parameters as possible. 
The parameters that are usually extracted directly by the observers are the widths (∆) and heights (maximum power spectral densities) (H) of the peaks in the frequency power spectrum. The linear damping constants (η) are related to the peak widths via:

∆ = η/π. (2)

If observations are of sufficient length to resolve mode peaks in the frequency power spectrum, the observed heights of the mode peaks are given by:

H ∝ V²/∆, (3)

where the V are mode amplitudes (written here for Doppler velocity) and the I, which enter below, are the mode inertias. There has recently been a shift toward making comparisons of observational and theoretical mode amplitudes using the H (e.g., Chaplin et al., 2005; Belkacem et al., 2006a), as opposed to the V (or V²) as had previously been the practice. The H is after all what one "sees" for the vast majority of modes in the frequency power spectrum (i.e., provided they are well resolved). Measurement of the acoustic powers (P) from the peak-bagging estimates of H and ∆ is fraught with potential pitfalls. Since the rate at which energy is supplied to a mode scales as P ∝ ηIV², Equations (2) and (3) imply that:

P ∝ IH∆². (4)

There is a strong anti-correlation of the fitted H and ∆ in fits made to peaks in the frequency power spectrum. The effect cancels when estimates of V² are sought, since V² ∝ H∆; but estimates of P contain another factor of ∆. Some other means of estimating the damping, that is much less strongly correlated with H, would offer a way around the problem.

The appearance of the mode inertia (I) in Equation (4) gives rise to further complications. Because different instruments show different Doppler velocity responses with height in the photosphere, the I are instrument (i.e., observation) dependent. Baudin et al. (2005) have demonstrated the importance of attempting to correct for this effect. Without proper normalization between results of different instruments, differences in estimates of P arise.

The frequency dependence of the acoustic powers, P, is a particularly important diagnostic. The most recent comparisons of theory and observation have shown good agreement over the main part of the low-ℓ mode spectrum (e.g., Chaplin et al., 2005; Belkacem et al., 2006a, b). Samadi et al. (2007) have also found that absolute predictions of P by three-dimensional numerical simulations tend to be lower than the P given by the semi-analytical models. With regard to the damping, the comparison by Chaplin et al. (2005) of the observed and theoretical H of the radial modes has demonstrated much more clearly than before the shortcomings in the theoretical computations of the low-frequency damping rates.

An interesting question for the analytical models concerns the description of the temporal behaviour of the dynamics of the small-scale turbulence. A Gaussian or Lorentzian function is usually adopted. Chaplin et al. (2005) found that when they adopted a Lorentzian description the predicted H of the low-frequency modes severely overestimated observed values. They concluded that a Gaussian description gave a much better match to the observations. The results of Samadi et al. (2003, 2007) in contrast tend to favour the Lorentzian description. Results from numerical simulations indicate that neither description is strictly correct (Georgobiani, Stein, and Nordlund, 2006): variation in the behaviour of the small-scale turbulence is observed with both temporal frequency and depth in the convection zone.
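A short numerical example makes the point about error propagation in the relations above: because fitted H and ∆ are strongly anti-correlated, V² (one factor of ∆) is comparatively stable while P (two factors of ∆) is not. The numbers below are arbitrary placeholders, not solar values.

```python
# How the H-Delta anti-correlation propagates differently into V^2 and P.
fits = [
    # (height H, width Delta): two fits to the same peak in which H went up
    # while Delta went down by the same factor, as the anti-correlation tends to do
    (100.0, 1.00),
    (125.0, 0.80),
]
inertia = 1.0  # arbitrary units

for H, width in fits:
    v2 = H * width                      # V^2 ~ H * Delta
    P = inertia * H * width ** 2        # P   ~ I * H * Delta^2
    print(f"H={H:6.1f}  Delta={width:4.2f}  V^2={v2:6.1f}  P={P:6.1f}")

# V^2 is unchanged between the two fits, whereas P moves by 20 per cent,
# which is why estimates of the acoustic power are the more fragile quantity.
```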
The two sources of excitation are the fluctuating turbulent pressure (Reynolds stresses) and gas pressure.In the numerical simulations (e.g., Stein et al., 2004) there is some cancellation between the sources.This cancellation is not shown by the analytical models, which may explain why the analytical models overestimate the p-mode amplitudes of stars hotter than the Sun (see discussion in Houdek, 2006). The Road Ahead Global helioseismic studies continue to make great progress, but many outstanding questions and challenges remain.If there is one thing that 30 years of global helioseismology has taught us, it is that there are two highly desirable requirements on the observational data: i) that they should provide long-term, high-duty-cycle monitoring of modes from low to high ℓ; and ii) that they should offer observational redundancy, and complementary data. Requirement i enables us to use the global p modes to "sound" the solar activity cycle, i.e., to monitor in detail the temporal behaviour of the seismic Sun on timescales commensurate with the cycle period, and preferably longer to facilitate comparisons of one cycle with another.It also enables us to obtain extremely precise and accurate estimates of fundamental mode parameters -long datasets are vital for detection of the very low-frequency modes -and to measure other parameters that would not otherwise be determined robustly (e.g., multiplet frequency asymmetries in low-ℓ modes).We are therefore in a position to be able to make precise and accurate inference on the internal structure and dynamics (both the time-averaged properties, and the properties as a function of time). Requirement ii enables us to confirm the solar origins of subtle, but potentially important, phenomena in the data.For example, detection of weak very low-frequency modes in two or more contemporaneous datasets significantly lowers the probability of a false detection having been made.Complementary Doppler-velocity and intensity observations, and observations by Doppler velocity instruments in different atmospheric absorption lines, create opportunities for studies of the physics of the photosphere, studies which can in turn be used to obtain more accurate estimates of mode frequencies from better understanding and modelling of the peak asymmetry. To fully exploit the potential science benefits that global helioseismology has to offer, we need continuation of operations of the two main ground-based networks, GONG and BiSON.As new science results drive the need for different data products, the ground-based networks are in a position to implement "responsive" changes to their instrumentation (we come back to the issue of new observational requirements later).In the post-SOHO era, BiSON will continue to provide, and will then be the only bespoke source of, high-quality low-ℓ data from its Sun-as-a-star observations.Continuation of GONG, in its current multi-site configuration, would provide highquality, high-duty-cycle resolved-Sun products, particularly on higher-ℓ modes, to go alongside the HMI resolved-Sun data (due for launch on SDO in early 2009). 
It is important to remember that to make optimal use of the low-ℓ modes for probing the solar core we need contemporaneous medium-and high-ℓ data It is worth stressing the important rôle the high-ℓ modes can play in this regard, in that they can be used to constrain the hard-to-model near-surface layers, thereby cleaning things up for more accurate inference on the structure deeper down.However, reliable measurement of the high-ℓ frequencies presents something of a challenge, because of the sensitivity of the frequencies to instrumental effects (e.g., see Korzennik, Rabello-Soares, and Schou, 2004;Rabello-Soares, Korzennik, and Schou, 2007). In order to further improve the accuracy of the inversions we must continue studies into optimizing combinations of frequencies from different instruments (e.g., the lowℓ Sun-as-a-star BiSON and GOLF data with the resolved-Sun MDI and GONG, and in the future the HMI, data).As datasets get longer, and quality improves, so new subtle effects come to light that must be properly allowed for when the datasets are analyzed.In the last few years we have developed a much better understanding of the underlying frequency bias between resolved-Sun and Sun-as-a-star frequencies.But more work is needed.New instrument combinations inevitably present their own unique problems. Bias comes not only from instrumental effects, but also from the analysis pipelines (e.g., see Schou et al., 2002;Basu et al., 2003).Hare-and-hounds exercises on realistic artificial data are a valuable tool for uncovering, and understanding, such effects.The solarFLAG group is currently concluding a second round of hare-and-hounds exercises testing peak-bagging on low-ℓ modes in Sun-as-a-star data (see Chaplin et al., 2006 for results on the first round of exercises).Significant improvements to the peakbagging at medium and high ℓ are being made by Jefferies and Vorontsov (2004) and Korzennik (2005).The approach of Jefferies and Vorontsov -parametric modelling of the spectrum using a small number of free parameters -is novel.The approach has the potential (and indeed the ultimate aim) to "remove the intermediary" -by which here we mean estimation and subsequent use of individual mode parameters, like the frequencies -to instead give the desired constraints on models of the internal structure by maximizing the likelihood of the solar model parameters directly on the frequency spectra.The importance of detailed work on the peak-bagging codes and philosophies should never be overlooked. Before we finish, let us go back briefly to the observations.New observations on the modes in intensity (from low to medium ℓ) will be provided by the SODISM and PREMOS instruments on PICARD (due for launch in early 2009).While a prototype next-generation GOLF instrument (GOLF-NG) is about to begin ground-based trials in Tenerife.The SODISM and GOLF-NG instruments are testing new techniques in the observations, which will hopefully increase the likelihood of detecting the low-frequency g modes.SODISM will look to the solar limb to maximize the signalto-background ratio in the g modes.GOLF-NG will make simultaneous observations at different heights in the solar atmosphere.The aim will be to take advantage of changes in the coherence of the granulation signal with height to beat down the solar noise background.Extension of this capability to resolved-Sun observations is clearly a desirable goal (Hill, 2008). 
So, looking to the future, we must advocate strongly for the continuation of unbroken, high-duty-cycle "seismic monitoring" of the Sun, at low, medium and high ℓ. There are exciting challenges for the observations and analysis: for example, to detect and identify individual low-ℓ g modes, and to measure their properties, in particular the frequencies and frequency splittings; to use long-term monitoring of the global p modes to detect evidence of long-term secular change in the Sun's seismic properties; to use the long-term monitoring to enable comparisons of different 11-year activity and 22-year magnetic cycles, using low, medium and high-ℓ modes; and to be able to fully isolate, and then subtract from the mode frequencies, the influence of the near-surface layers, and to thereby reveal a "cleaner" picture of the structure of the deep interior.

Some key questions for global helioseismology to address include: what is the solar composition as a function of radius in the interior, and is the solar abundance problem a problem in the spectroscopic abundance determinations, or a problem in the standard solar models (e.g., is there something missing from the models)? What is the strength of the magnetic field at the tachocline, and in the convection zone, and what are the implications for solar dynamo models? What is the rotation profile as a function of radius in the solar core, and what are the implications for models of the dynamic evolution of Sun-like stars?

Figure 1. The relative sound-speed difference (panel a) and density difference (panel b) between the Sun and a standard solar model constructed with the GS98 metallicity [model BP04(Garching)], and also a standard solar model constructed with the AGS05 metallicity [model BS05(AGS,OPAL)] of Bahcall, Basu, and Serenelli, 2005. The model with GS98 Z/X has a CZ He abundance of Y_CZ = 0.243 and CZ base at R_CZ = 0.715 r/R⊙. The AGS05 model has Y_CZ = 0.230 and R_CZ = 0.729 r/R⊙.

Figure 2. The mean, time-averaged rotation profile in the convection zone and outer parts of the radiative zone. Left-hand panel: 2D cutaway, showing the mean profile obtained from GONG (upper half) and MDI (lower half) observations. Right-hand panel: mean rotation profile at different latitudes. (Courtesy of R. Howe.)

Figure 3. Mode frequency shifts (in µHz) as a function of time and latitude. The values come from analysis of GONG data. The contour lines indicate the surface magnetic activity. (Figure courtesy of R. Howe.)

Figure 5. Variations in the solar internal rotation, relative to the rotation at the epoch of solar minimum, as determined by analysis of MDI observations. (Courtesy of S. Vorontsov.)

Figure 6. The temporal variation of the rotation rate in the equatorial regions near the base of the convection zone at 0.72 r/R⊙ (top) and at 0.63 r/R⊙ (bottom) from GONG (circles) and MDI data (triangles). The average rotation rate has been subtracted. (Courtesy of R. Howe.)
FLInt: single shot safe harbor transgene integration via Fluorescent Landmark Interference Abstract The stable incorporation of transgenes and recombinant DNA material into the host genome is a bottleneck in many bioengineering applications. Due to the low efficiency, identifying the transgenic animals is often a needle in the haystack. Thus, optimal conditions require efficient screening procedures, but also known and safe landing sites that do not interfere with host expression, low input material and strong expression from the new locus. Here, we leverage an existing library of ≈300 different loci coding for fluorescent markers that are distributed over all 6 chromosomes in Caenorhabditis elegans as safe harbors for versatile transgene integration sites using CRISPR/Cas9. We demonstrated that a single crRNA was sufficient for cleavage of the target region and integration of the transgene of interest, which can be easily followed by loss of the fluorescent marker. The same loci can also be used for extrachromosomal landing sites and as co-CRISPR markers without affecting body morphology or animal behavior. Thus, our method overcomes the uncertainty of transgene location during random mutagenesis, facilitates easy screening through fluorescence interference and can be used as co-CRISPR markers without further influence in phenotypes. Introduction The ability to engineer transgenic and mutant animals has afforded one of the biggest revolutions in life sciences. Caenorhabditis elegans is a popular laboratory animal, with ten thousand strains carrying exogenous, recombinant DNA available. The first transgenic C. elegans animals were generated by microinjection into the worm's gonad to establish extrachromosomal arrays (Stinchcomb et al. 1985). These arrays are, however, unstable, do not follow Mendelian inheritance and get lost mitotically, leading to mosaic animals in which not all somatic cell express the transgene. Classical approaches rely on the use of genetic selection markers (Mello et al. 1991), however, when the ectopic DNA is not accompanied by a visible marker, this effect can be misinterpreted as a lack of phenotype. Several strategies have been proposed to circumvent this phenomenon, from the enrichment of the transgenic animals using antibiotic selection (Giordano-Santini et al. 2010;Semple et al. 2010;Radman et al. 2013) to rescue from strong phenotypes such as temperaturesensitive lethality (pha-1(ts)) (Granato et al. 1994) or paralysis (unc-119) (Maduro 2015), however, none of them succeeded in eliminating the mosaic expression. Furthermore, extrachromosomal arrays contain large copy numbers of the injected DNA, which often causes overexpression artifacts, but have the advantage that transgenes become visible even beyond their native levels. For example, many fluorescent tags to endogenous proteins are poorly visible due to their low expression levels and promoter activity (Walker 2000;Das et al. 2021). The problem of unstable inheritance can be mitigated by integrating the transgenic array. Traditional integration methods are based on random mutagenesis, either using a gene gun (Praitis et al. 2001), that allows integration at low frequencies, or chemicals like UV/TMP, X-ray irradiation (Mariol et al. 2013) or singlet oxygen generators (miniSOG) (Noma and Jin 2018). However, cumbersome and timeconsuming screening efforts are necessary to identify the integrants, and the locus of integration remains unknown unless subsequent mapping experiments are conducted. 
In addition, the mutagenesis causes extensive DNA double-strand breaks, and thus, the resultant animals needs to be backcrossed several times and verified to ensure minimal genetic variability. Even though targeted, MOS-transposase directed, single copy integrations (Frøkjaer-Jensen et al. 2008, recombination-mediated cassette exchange (Nonet 2020(Nonet , 2021, and CRISPR transgenesis (Friedland et al. 2013;Dickinson et al. 2015;Paix et al. 2017) are available, extrachromosomal arrays were and still are the standard in many laboratories for fast and efficient generation and screening of transgenic phenotypes. Over the last few years, many different methods have been proposed and demonstrated for site-directed CRISPR/Cas9 mediated locus-specific integration of ectopic DNA such as extrachromosomal arrays (Yoshina et al. 2016;El Mouridi et al. 2022) or single copy transgenes (Silva-García et al. 2019;El Mouridi et al. 2022) into safe habor integration sites. These methods rely on a crRNA that recognizes a single site in the genome and facilitates Cas9-mediated double-strand DNA breaks. The subsequent nonhomologous end joining (NHEJ) or homology-directed repair probabilistically integrates the co-delivered ectopic DNA. Even though these methods overcome many of the above-mentioned shortages of unstable transgenesis and variable expression, so far, there are only a limited number of target sites available (e.g. ben-1, dpy-3, MosSCI) (Frøkjaer-Jensen et al. 2008;Yoshina et al. 2016;El Mouridi et al. 2022). Recently, Frokjaer-Jensen and colleagues generated a library containing 147 strains carrying single copy loci expressing the red fluorophore tdTomato in somatic nuclei, in addition to 142 nuclearly localized GFP strains (Frøkjaer-Jensen et al. 2014), which have aided mapping and genetic experiments (Fay 2006;LaBella et al. 2020;Noble et al. 2020;Das et al. 2021). Originally, these strains were generated as dominant genetic markers and can also be used as landmarks to map genetic position of mutants and transgenes. Because the integrated transgenes of many of these strains locate to intergenic regions and are transcriptionally active, we reasoned that these loci would satisfy many if not all conditions as further safe-harbor integration sites. Here, we leverage these strains and demonstrate that a single crRNA can cut the tdTomato (or GFP) DNA sequence at high efficiency, affording a selection of 147 (142 for GFP) possible integration sites, 121 of which are intergenic (Frøkjaer-Jensen et al. 2014). Moreover, the loss of tdTomato fluorescence during the integration not only facilitates screening purposes, but can also be used as co-CRISPR marker during gene-editing at distant loci. Importantly, we show that the integration of a model transgene per se does not affect worm physiology, and even intragenic insertions appear to be phenotypically silent. This method has considerable advantages in multiplexed genome engineering, when the co-CRISPR locus cannot be unlinked easily from the editing site. Lastly, we propose future extensions of FLInt for the use of single copy GFP sequences as a dominant marker for homology-directed repair through genetic conversion of the GFP to BFP chromophore with a single nucleotide change. Animal maintenance Nematodes were cultivated on NGM plates seeded with E. coli OP50 bacteria using standard protocols (Stiernagle 2006;Porta-de-la Riva et al. 2012). The integration efficiency of all tested target strains is listed in Supplementary Table S1. 
All transgenic strains in this study are listed in Supplementary Table S2. The parental strains carrying eft-3p::tdTomato::H2B and eft-3p::gfp::NLS used as the identified landing sites from miniMos (Frøkjaer-Jensen et al. 2014) were maintained and cultured at 20 • C prior to injection. Molecular biology Gibson assembly was regularly used for plasmid construction. Briefly, specific primers were designed, and PCR was performed using KOD DNA polymerase (Sigma Aldrich). The amplification of DNA fragments was done following manufacturer's instructions into a Bioer GeneExplorer thermal Cycler. The visualization of DNA fragments was done using an Azure c600 (Azure Biosystems) gel imaging device. Gibson assembly was performed by mixing fragments of the different DNAs at a 3:1 ratio (insert: vector) and a 2X homemade Gibson Assembly Master Mix. The bacterial transformation was done using either NEB 5-alpha or 5-alpha F'Iq Competent E. coli. Off-target assessment of the crRNA We assessed off-target gene editing of the loci mentioned in the previous section (see Supplemental Data File 1). With the offtarget analysis using CRISPOR (Concordet and Haeussler 2018), we selected a candidate gene, C55B7.3 (I:1.17 +/− 0.000 cM), for verifying whether it could be recognized and edited while integrating the transgenes on the tdTomato locus. The C55B7.3 gene was amplified from the integrated strains generated by tdTomato excision. Ten animals were pooled from 15 strains (MSB1110, MSB1111, MSB1112, MSB1113, MSB1115, MSB1116, MSB1117, MSB1118, MSB1119, MSB1120, MSB1121, MSB1122, MSB1123, MSB1124, and MSB1125). The lysates were prepared using a variation of the single worm DNA extraction described in Williams et al. (1992). Briefly, 10× PCR buffer from BIOTAQ DNA Polymerase (Bioline, Cat. No. BIO-21040) was diluted to 1× and supplemented with proteinase K (Fisher Scientific, Cat. No. 10181030) at 0.1 μg/μL final concentration. Each worm was lysed in 10 μL lysis buffer and incubated at 65 • C for 10 min and 95 • C for 2 min in a thermal cycler. 90 μL of milliQ water were added to the lysis reaction and 1 μL used as template for PCR. The PCR primers were designed by CRISPOR; forward primer (5 ′ -TCGTCGGCAGCGTCCTTCCCGAGCAAGAAGGGTG-3 ′ ) and reverse primer (5 ′ -GTCTCGTGGGCTCGGTGGAACTTACCGTCACCG AAG-3 ′ ). The PCR amplicons were sequenced using the 5 ′ -CTTCCCGAGCAAGAAGGGTG-3 ′ primer. The off-target effect was assessed by comparing the sequencing data to the wild-type nucleotide sequence. Microinjection Similar to the preparation of the conventional injection mix (transgene DNA + co-injection markers) (Rieckher and Tavernarakis 2017), this method requires an additional portion of CRISPR reagents. The CRISPR mix was prepared by mixing 14 μM of crRNA, 14 μM of Alt-R CRISPR-Cas9 tracrRNA (IDT), and milliQ water. The crRNA-tracrRNA dimer was induced by incubating the mix at 95 • C for 5 min and RT for 5 min. Then, Streptococus pyogenes Cas9 nuclease (IDT) was added to form the ribonucleoprotein complex. The CRISPR mix was aliquoted into PCR tubes (2 μL each) and stored at −20 • C for further use. The injection mix was prepared by mixing the purified plasmid DNA (Zymo D4016 PLASMID MINIPREP-CLASSIC) with DNA ladder (1 kb Plus DNA Ladder, Invitrogen), 100 ng/μL DNA in total (see Supplementary Table S7). We added the 2 μL of CRISPR mix (mentioned above) into the 8 μL injection solution to make a total of 10 μL. The mix was centrifuged at the highest speed for 8-10 min before injecting. 
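For bookkeeping, the DNA portion of the mix can be assembled with a short helper like the sketch below. It assumes the 100 ng/µL total-DNA target refers to the 8 µL DNA solution before the 2 µL CRISPR mix is added, and the plasmid names, target concentrations and stock concentrations are hypothetical placeholders rather than values from Supplementary Table S7.

```python
def injection_mix(plasmids, dna_volume_uL=8.0, total_dna_ng_per_uL=100.0):
    """Rough volumes for the DNA portion of a FLInt injection mix.

    plasmids: dict name -> (target ng/uL in the mix, stock ng/uL).
    The balance of the DNA mass is made up with 1 kb Plus DNA Ladder, and
    the balance of the volume with ladder solution and water as needed.
    Assumes the 100 ng/uL target applies to the 8 uL DNA solution before
    the 2 uL CRISPR mix is added (an interpretation, not a stated value).
    """
    rows, dna_ng, vol = [], 0.0, 0.0
    for name, (target, stock) in plasmids.items():
        v = target * dna_volume_uL / stock      # uL of stock to hit target conc.
        rows.append((name, v))
        dna_ng += target * dna_volume_uL
        vol += v
    ladder_ng = total_dna_ng_per_uL * dna_volume_uL - dna_ng
    return rows, ladder_ng, dna_volume_uL - vol

# Hypothetical example: one transgene plasmid plus a co-injection marker
plasmids = {
    "transgene_plasmid": (50.0, 400.0),   # (target ng/uL, stock ng/uL), placeholders
    "myo-3p::mCherry":   (10.0, 250.0),
}
rows, ladder_ng, remaining_uL = injection_mix(plasmids)
for name, v in rows:
    print(f"{name:>20}: {v:.2f} uL")
print(f"DNA ladder to add: {ladder_ng:.0f} ng; ladder/water volume left: {remaining_uL:.2f} uL")
```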
The transgenic strains used as the P0 animals were established by miniMos technique (Frøkjaer-Jensen et al. 2014) expressing tdTomato and GFP in all cellular nuclei. We selected the following transgenic landing sites (see also Supplementary Table S1; note, all oxTi transgenes carry the Cbr-unc-119(+) rescue construct, in All transgenic animals that we used as background strains are available in CGC. Using FLInt with an germline competent Cas9: Besides the use of recombinant, purified Cas9 protein, we leveraged an integrated, germline expressing Cas9 to perform ectopic transgene integration with the FLInt method. To do so, we used the transgenic strain EG9615 carrying the optimized Cas9 gene which is expressed in (Schwartz et al. 2021). We generated MSB1247 carrying the integrated Cas9 and tdTomato landing site by crossing EG9615 (other strains with fluorescently labeled Cas9 locus are available to guide transgene selection) with EG7944 (oxTi553 V). Then, we followed the method for FLInt integration (described earlier), without the addition of Cas9 protein in CRISPR mix. We injected 30 P0 animals with myo-3p::mCherry marker, isolated 80 positive F1(s), and eventually obtained 5 integrated lines (integration efficiency = 6.25%). We noticed that the animals carried extrachromosomal array mostly showed tdTomato excision, indicative of RNP formation from the co-injected crRNA-tracrRNA. We found that the integration efficiency was not different from the previous trial using RNP, suggesting the similar probability of arrays to be integrated. This result demonstrated that the integrated Cas9 can be an alternative option for gene editing in C. elegans at reduced cost. To further optimize this technique, it might be possible to co-integrated sgRNA (tdTomato) with Cas9 gene in order to reduce the FLInt reagents. However, the only concern of this technique is the landing site of Cas9 on chromosome II that need to be outcrossed. Visual screening of transgenic animals The screening of the fluorescent progenies from P0 was performed using a fluorescent stereomicroscope (SMZ25, Nikon Instruments) equipped with a white-light LED light source (Lumencor, Sola S2). We searched for the nonred animals with co-injection marker expression, called positive F1, 3-day postinjection. Then, we singled them out into new NGM/OP50 plates. The individual positive F1 were cultured for 3 days at 25 • C, and plates were searched for F2 progenies with high transmission frequency (approx. 75%). Six F2(s) of each of those plates were singled out. After 3 days, the F3 progenies were checked for homozygous expression of the co-injection marker and, if integration had taken place, the integrated lines were characterized. The F3 progenies from the same F1 are determined as identical transgenic line. We calculated the integration efficiency by (no. of integrated line/no. of positive F1) × 100. Integrated copy number analysis with qPCR qPCR was used for detecting and measuring the copy number of the integrated pCFJ90 (myo-2p::mCherry) of 9 integrated strains (MSB884, MSB886, MSB898, MSB905, MSB911, MSB912, MSB913, MSB914, and MSB915). Sample preparation was done by culturing worms in peptone-enriched plates with NA22 as food source. When plates were full of adult worms, they were washed off the plates with M9 buffer, excess bacteria eliminated by successive washes and lysed in 500 μL lysis buffer supplemented with proteinase K (see Off-target assessment of the crRNA section above). 
The genomic DNA was purified using the Zymoclean Gel DNA Recovery Kit (Zymo Research). qPCR analyses were carried out by AllGenetics & Biology SL (www.allgenetics.eu). Briefly, absolute qPCR was performed with primers indicated in Supplementary Table S4. The qPCR experiment was performed in triplicate for each sample and controls. The qPCRs reactions were carried out in a final volume of 20 μL, containing 10 μL of NZY qPCR Green Master Mix ROX plus (NZYTech), 0.4 μM of the amplification primers, 2 μL of template cDNA, and ultrapure water up to 20 μL. The reaction mixture was incubated as follows: an initial incubation at 95 • C for 10 min, followed by 40 cycles of denaturation at 95 • C for 15 s, annealing/extension at 65 • C for 1 min. A 5 point 10-fold serial dilution of a known number of copies of the genes under study was used to establish the standard curve and evaluate the reaction efficiency. These dilutions were also performed in triplicate. The Y-intercept and slope were also obtained from the standard curve. Copy number was calculated by the formula: copy number = 10 (Cq -Yintercept)/(slope) . Copy number of integrated transgenes was obtained by normalizing with rps-25. Screening for loss of tdTomato fluorescence as a 'co-injection' marker Having multiple transgenes or multicolor phenotype could negatively affect animal health as it constitutes a metabolic burden and limits the degrees of experimental freedom during microscopy experiments (e.g. multicolor imaging acquisitions). Importantly, the above-mentioned integration protocol and simplicity of the screening procedure also facilitates the integration of transgenes without the use of visible markers, e.g. such as the myo-2p::mCherry. To demonstrate this, we generated a dualfluorescence CRE/lox reporter strain (based on SV2049) with constitutive BFP expression and conditional, CRE-dependent mCherry expression, with the ubiquitous tdTomato expression from the landing site in the background (MSB934). After injecting this strain with a plasmid encoding for an intestinal CRE (ges-1p::CRE) together with tdTomato CRISPR mix, we confirmed loss of tdTomato and a BFP/mCherry color switch in intestinal nuclei in the F1. Importantly, the intestinal red fluorescence is indicative for the tissue specific CRE-recombination, that would otherwise be obscured had the tdTomato cleavage not taken place. To isolate homozygous integrants, we followed the CRE-dependent BFP/mCherry color switch during the F3 (Supplementary Figure S3). We also demonstrated the co-injection marker free integration using the binary UAS/GAL4 expression system (Wang et al. 2017), and integrated a panneuronal rab-3p::GAL4 driver construct in the background of a silent UAS::GFP effector strain carrying the tdTomato landing site. Following our experimental pipeline, we obtained positive F1 that panneuronally expressed GFP signal with the loss of tdTomato (Supplementary Figure S3). Our results demonstrate that the negative selection due to fluorescence interference of the tdTomato landing site facilitates the screening step in C. elegans transgenesis and serves as a safe harbor for transgene expression. Integration of extrachromosomal array using FLInt The integration of the existing extrachromosomal array was done first by crossing the strain of interest to the desired tdTomato marker strain. 
A CRISPR injection mix containing 14 μM of crRNA against tdTomato, 14 μM crRNA against Ampicilin resistance gene (AmpR), 28 μM of tracrRNA and Cas9 endonuclease was injected in the resulting strain and the progeny scored for loss of tdTomato expression. One hundred percent transmission of the extrachromosomal marker was used as an indicator for integration. Screening integrations with PCR To follow the double-strand break, excision and integration efficiency at the tdTomato site, we designed PCR primers that bind to several regions along the tdTomato gene (Fig. 1a, Supplementary Table S4); (1) A forward primer that binds to the region upstream the tdTomato gene, in the eft-3 promoter (FWD1: 5 ′ -TTTATAATGAGGTCAAACATTCAGTCCCAGCGTTTT-3 ′ ) (2) another forward primer that binds to the middle of the gene in both tandem repeats, downstream the excision sites (FWD2: 5 ′ -GACCCAGGACTCCTCCCT-3 ′ ), (3) the reverse primer, that binds at the end of tdTomato ORF (REV: 5 ′ -TTACTTGTACAGCTCG TCCATGC-3 ′ ). This strategy gives rise to 3 bands when genotyping tdTomato (Fig. 1a,c). We utilized this technique for investigating the tdTomato gene before and after being excised by CRISPR/ Cas9. The full-length tdTomato is recognized by the 4 binding sites of the 3 primers amplifying 3 different band sizes: 1.7 kb, 1.1 kb, and 0.4 kb (Fig. 1c, lane 1). The excised tdTomato splits the middle chunk of gene, losing one primer binding site. Only 2 PCR bands (1.1 kb and 0.4 kb) were detected (Fig. 1c, lane 2). Lastly, in integrated strains only the smallest band (0.4 kb), outside of the integration region is amplified (Fig. 1c, lanes 3-5). To avoid competition between the 2 different FWD primers, the following PCR conditions proved optimal: FWD primer (1) = 2 mM; FWD primer (2) = 0.2 mM; REV primer = 2 mM; Tm = 55 • C; extension time = 1 min. Screening for lat-1::loxP::ΔmCherry insertions using tdTomato as Co-CRISPR The insertion of a loxP site into lat-1 locus was done using CRISPR/ Cas9. To excise lat-1 gene, we introduced the crRNA (5 ′ -ATGTACACGCATCAAAGATA-3 ′ ) (IDT), tracrRNA (IDT), and Cas9 (IDT). The loxP site and additional sequence (ΔmCherry) insertion and PAM mutation was induced by the HR template (Supplementary Table S6) with 35-nt homology arms (IDT). The CRISPR mix was prepared followed the details above and injected into the gonad of the background strain EG7944 (oxTi553 V [eft-3p:: tdTomato::H2B]). The concentration of the homology repair template was 167 ng/μL. The screening of F1 was done after 3 days using the fluorescent microscope. The candidate F1(s) were selected from the jackpot plates based on the loss of tdTomato fluorescent signals among the F1 population. The candidates were singled out onto new NGM/OP50 plates before genotyping. To genotype the loxP insertion, worms were lysed and genotyped as detailed in the Off-target assessment of the crRNA section with primers 5 ′ -CGATGTTGACAACTGAAGTGA-3 ′ and 5 ′ -GGTAATTTC TGACATGGCTCA-3 ′ . The edits were observed in an electrophoresis gel by the shift of the edited DNA band (417 bp) compared to the wild-type (291 bp). The efficiency of lat-1::loxP::ΔmCherry insertion from each jackpot plate was calculated by (no. of edits/no. of candidate F1) × 100. Screening of GFP color switch as the HDR-mediated co-CRISPR marker The HDR-mediated fluorescent conversion from GFP to BFP (P4) was done with the eft-3p::GFP::NLS background strains, EG8888 [oxTi936 X] and EG8958 [oxTi1022 I]. 
The single point mutation of the gfp gene was introduced through a CRISPR/Cas9-mediated DNA double-strand break followed by HDR, which changes the encoded amino acid relative to the background (Y66H). To do this, the crRNA against gfp (5′-CTTGTCACTACTTTCTGTTA-3′), tracrRNA, Cas9 nuclease, and the HR template (5′-TTAAATTTTCAGC CAACACTTGTCACTACTTTCTGTTATGGT GTTCAATGCTTCTCG AGATACCCAGATCATAT-3′; see Supplementary Table S6), purchased from IDT, were injected into the P0 animals. Three days post injection, the F1 progeny were screened for the loss of the GFP signal, which is replaced by BFP expression in the nuclei. The candidates were then singled out and screened for a few generations to obtain the homozygous genotype.

Fluorescence microscopy

The fluorescence signal of the worms was observed using a confocal microscope (Andor DragonFly 502, Oxford Instruments) attached to a Nikon Eclipse Ti2 inverted microscope body through either a 20× 0.75 oil or a 60× 1.3 oil immersion lens and a back-illuminated sCMOS camera (Sona, Andor). The tdTomato fluorescence signal was excited with a 561 nm laser beam (power intensity 30%, exposure time = 200 ms) and the emitted signal transmitted using a 594 nm filter. The GFP fluorescence signal was excited with a 488 nm laser beam (power intensity 60%, exposure time = 100 ms) and transmitted using a 521 nm filter. The mCherry fluorescence signal was excited with a 514 nm laser beam (power intensity 40%, exposure time = 300 ms) and transmitted through a 594 nm filter. The P4 and BFP fluorescence signals were excited with a 405 nm laser beam (power intensity 40% and 20%, respectively; exposure time = 400 ms and 200 ms, respectively) and transmitted using a 445 nm filter. The fluorescence signal was captured using a Z-scan protocol (0.7 step size) through the confocal apparatus (Andor DragonFly).

Healthspan assessment

The wt (N2), full-length tdTomato (EG7944), excised tdTomato (MSB910), 3 myo-2p::mCherry integrated lines (MSB1115, MSB1118, and MSB1122) and myo-2p::mCherry (extrachromosomal array) animals were cultured and their development, locomotion, body length, and lifespan compared (Supplementary Figure S2). The fluorescence intensity and development measurements were done in N2, EG7944, MSB910, and MSB1115. Development was assessed based on worm size over time from L1 to the egg-laying adult stage. Synchronized L1 (Porta-de-la Riva et al. 2012) were seeded onto NGM/OP50 plates and incubated at 20 °C. We captured the worms at the L1 stage (prior to seeding), L3 stage (24 h after seeding), L4 stage (40-48 h after seeding), and egg-laying stage (72 h after seeding) based on the developmental timeline of N2 (Porta-de-la Riva et al. 2012). We imaged tdTomato fluorescence intensity in young adult worms using the Z-scan protocol (step size = 1.7 μm) with 20× magnification (20×/0.75 MImm objective lens). The maximum Z-projection was performed using ImageJ (Fiji). Then, the ROI was drawn using a segmented line across the body edge. The average intensity was measured and collected from individual worms. On the last day of the developmental assessment, the adult animals were placed on a new plate and the moving trace on the bacterial lawn captured. The locomotion behavior was observed under a lab-built microscope (WormTracker, Das et al. 2021). We took the sinusoidal wave appearing in the bacterial lawn after the worm passed as a reference for the body angle during locomotion. Body length was captured in a lab-built microscope (WormTracker, Das et al.
2021) using 2x magnification and measured in ImageJ. By using the segmented line tool, the body length was measured from the nose tip to the tail tip. The lifespan assay was conducted by counting number of dead and alive worms in FUDR plates until the whole population diminished. The decrease of viability of each strain were plotted as the survival curve. The mean of lifespan was calculated from the average of age from individual animals in each population. Then, the mean of lifespan was compared with wt strain. Statistical analysis No statistical method was applied to predetermine sample size based on data variability. All data sets were first tested for normality using KS test or Tukey adjusted ANOVA for multiple comparisons as indicated in the Figure legends. Single tdTomato transgenes as safe harbor landing pads for exogenous transgenes To demonstrate that the single-copy tdTomato loci can function as versatile sites to integrate transgenes into the genome of C. elegans, we designed a single crRNA against the tdTomato ORF (Fig. 1a, see Methods) that is not predicted to have a full length off-target binding probability. Because tdTomato is a tandem-dimer gene of a single fluorophore, the successful Cas9 cleavage will cut the DNA twice, excising a large portion of the gene. The concomitant loss of fluorescence should, in principle, facilitate the screening process, and therefore speed up the identification of successful integrations. The tdTomato site serves a dual function: as a landing site and a visual marker for transgenesis. We therefore coined the method "FLInt", for Fluorescent Landmark Interference. We first sought to test whether the selected crRNA cleaves the tdTomato sequence. We reasoned that successful double-strand DNA break results in a loss in tdTomato fluorescence in the filial generations. Indeed, many animals in the F1 of an injected P0 have already lost their tdTomato fluorescence, which is readily identifiable in a normal fluorescence stereomicroscope (Fig. 1b). Some animals, however, showed a considerably lower fluorescence, indicative for a single edit on one parental chromosome. We also frequently observed a mosaic pattern in the somatic cells of the F1s, possibly due to cleavage after the first cell division. These animals would eventually give rise to nonred animals in the F2 generation according to Mendelian segregation. In jackpot broods, we frequently observe 25% of nonred animals from a single injection. We benchmarked the DNA cleavage efficiency for the tdTomato against the widely used, highly efficient dpy-10 protospacer (Arribere et al. 2014) and coinjected 2 μM for both crRNAs together with recombinant Cas9 (Paix et al. 2017). We then screened for nonred and Dpy animals as a readout for simultaneous cleavage of both DNA strands at the dpy-10 and tdTomato locus. From the total 13 jackpot broods we screened, we found 34% red, wildtype animals, 56% nonred, Dpy animals and 0% red, Dpy worms. Since all Dpy animals had also lost the tdTomato fluorescence in the F1, we reasoned that the crRNA for tdTomato is, at least, as efficient as the highly efficient dpy-10 protospacer (Arribere et al. 2014;El Mouridi et al. 2017). In addition, we found 10% nonred, wild-type worms (Table 1), suggesting a slightly higher efficiency of the tdTomato protospacer and making FLInt an extremely well suited candidate method for transgene integration at many potential sites across the genome. 
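As a back-of-the-envelope illustration of this comparison (our own arithmetic over the percentages reported above, not an analysis performed in the original screen), the apparent per-locus disruption frequencies can be recovered from the four phenotype classes:

```python
# Hedged illustration: apparent per-locus disruption frequencies, as scored by
# phenotype, from the F1 classes reported above (percent of screened animals).
red_wt     = 34.0   # tdTomato intact, dpy-10 intact
nonred_dpy = 56.0   # both loci disrupted
red_dpy    = 0.0    # dpy-10 disrupted, tdTomato intact
nonred_wt  = 10.0   # tdTomato disrupted, dpy-10 intact

tdtomato_cut = nonred_dpy + nonred_wt   # any nonred animal lost tdTomato
dpy10_cut    = nonred_dpy + red_dpy     # any Dpy animal was edited at dpy-10

print(f"tdTomato disruption: {tdtomato_cut:.0f}%")  # 66%
print(f"dpy-10 disruption:   {dpy10_cut:.0f}%")     # 56%
# tdTomato >= dpy-10, consistent with the conclusion that the tdTomato
# protospacer is at least as efficient as the dpy-10 protospacer.
```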
Together, these results not only indicate that the selected crRNA for tdTomato efficiently guides Cas9 for subsequent DNA cutting, but also that it does so at a high efficiency, allowing identification of events already in the F1. As a last test for the suitability of FLInt as safe habor sites, we assessed if the tdTomato crRNA causes any unwanted offtarget effects. The genotyping of 9 different strains at the most likely predicted off-target site (containing 4 mismatches), however, did not identify any further edits (Supplementary Data File 1). Likewise, we also did not detect gross defects in healthspan and locomotion or any other behavioral phenotypes compared with N2 wild-type animals (Supplementary Figure S2). We also observed that the strains generated on the oxTi553 allele reverted the parental phenotype at 25 • C (Frøkjaer-Jensen et al. 2014). This suggests that the edits are not interfering with the normal physiology of the animal and have nearly wild-type behavior (Supplementary Figure S2). Having established highly efficient DNA cleavage using the tdTomato crRNA, we proceeded to inject 20 P0 animals with the CRISPR mix and a myo-2p::mCherry plasmid as transgene-ofinterest (TOI) into the eft-3p::tdTomato::H2B V strain (EG7944) (Supplementary Figure S1a), following loss of red nuclear fluorescence from the tdTomato and gain of mCherry expression in the pharynx during the filial generations (Supplementary Figure S1a). Consistent with our prior observations, we found that some F1 had already lost the strong tdTomato nuclear fluorescence displayed by the P0, an indication of the successful disruption of both homologous chromosomes in the first generation after injection. We singled out animals positive for red pharynx (Supplementary Figure S1b), noticing that most of the transgenic animals that expressed mCherry had also lost nuclear tdTomato expression. To distinguish between expression from the extrachromosomal array and integrants, we selected 6 F2 animals from high transmission plates (myo-2p::mCherry, loss of nuclear tdTomato) and eventually obtained 1-3 integrated lines based on 100% transmission frequency in the F3 from one injection (see also Supplementary Table S1). Often, the TOI does not lead to a visible phenotype, for example effector or driver strains in bipartite expression systems (Wang et al. 2017;Das et al. 2021;Porta-de-la Riva et al. 2021). To follow the integration of such transgenes, we developed a PCR genotyping strategy (Fig. 1a, d, see Methods) using 3 primers that target the region around tdTomato for amplification, with different amplicon sizes according to the genetic recombination occurred. We selected animals from the 3 different populations: tdTomato::H2B (no excision), nonfluorescent (loss of tdTomato) and nonfluorescent/myo-2p::mCherry (expectedly tdTomato inserted) to isolate their DNA for genotyping. In the parental strain with tdTomato expression, the 3 primers would anneal (one of them twice) and 3 bands of different sizes would be amplified. Expectedly, we found that loss of tdTomato signal in absence of the transgenic marker was genomically accompanied by the loss of the longest DNA band, indicative for a successful Cas9 activity, and repair through NHEJ. However, in myo-2p::mCherry homozygous animals carrying the successful integration, we were unable to amplify the region flanked by the 2 crRNA target sites. 
We reasoned that the region with the inserted transgene could not be amplified due to the large size of the multicopy transgene, which could be up to millions of bases in length (El Mouridi et al. 2022). However, the small band corresponding to the end of the tdTomato gene and downstream of the expected integration site (0.4 kb) was amplified, serving as a positive control for the PCR (Fig. 1c, lanes 3-5). Taken together, these results established that ectopic transgenes can be integrated by CRISPR using site-specific crRNAs into the tdTomato landing sites as multicopy transgenes with very high efficiency and reliability. During the expansion of the injected animals, we consistently observed different integration efficiency based on the culturing conditions. Similarly to what had been previously described for integrations through the miniMos technique (Frøkjaer-Jensen et al. 2014), we hypothesized that the temperature at which the P0 is grown after injection might affect the integration efficiency. To investigate this, we reared the injected P0 at 16 °C or 25 °C for 2 days until we screened F1 for positive transgenesis events. The F2 and F3 progenies, however, were invariably raised at 25 °C. After obtaining the integrated lines, we found that culturing the P0 at 25 °C promoted higher integration efficiency compared with 16 °C (Table 2). At this temperature, we obtained a 100% success rate, with integrated lines from every injection round (3/3). Conversely, only one integrated line was obtained from the P0 incubated at 16 °C (1/4, Table 2). This result is in agreement with previous reports in vertebrates and plants showing that Streptococcus pyogenes Cas9 efficiency is higher at elevated temperatures (Moreno-Mateos et al. 2017; LeBlanc et al. 2018) and suggests that transgene integration is temperature-dependent.

Table 1. crRNA efficiency of tdTomato and dpy-10. Table with the crRNA cleavage efficiency at the tdTomato locus at oxTi553 compared with the well-characterized dpy-10 locus. Similar efficiencies have been found for tdTomato sites distributed over all 6 linkage groups (see also Fig. 3). (Columns: Phenotypes, No. of worms, Percentage.)

We were then curious to understand how many copies of the coinjected plasmid were integrated into the safe harbor locus and how this related to the relative amount of DNA injected. Previous integration methods suggested a large variability of integrated copies, ranging from a few copies (derived from biolistic transformations (Sarov et al. 2012)) to hundreds after integrating traditional extrachromosomal arrays with random mutagenesis (Noma and Jin 2018). We thus injected varying ratios of co-injection marker/transgene together with the tdTomato CRISPR mix into EG7944 oxTi553 V or EG7846 oxTi700 I and quantified their integrated copy number using quantitative PCR. We found that a higher plasmid ratio led to a higher copy number, which in turn led to higher transgene expression from the co-injection marker (myo-2p::mCherry) (Fig. 2). Thus, a careful titration of the injected plasmid would facilitate a balanced expression (in our hands ranging from 20 to 150 copies of the transgene) of the desired transgenes in a known safe harbor locus. Taken together, highly efficient integration methods reduce the time-consuming screening required in traditional transgene integration procedures. Compared with the conventional method using UV/TMP, in which worms are propagated for several generations during 3-5 weeks before the screening (Mariol et al.
2013) and which requires a subsequent outcross to non-mutagenized worms, our method establishes integrated lines within 9 days post injection, essentially bypassing the formation of an extrachromosomal array. In addition, the loss of fluorescence provides a visual, dominant marker for screening, allowing fast identification of positive F1 among the background phenotype. Moreover, since the loss of fluorescence only takes place after cleavage of both chromosomes, rapid screening of homozygous edits is facilitated, even permitting the omission of any injection marker other than the loss of tdTomato itself. For example, we successfully integrated ges-1p::CRE and rab-3p::GAL4 into their effector strains and recovered transgenics after a single injection (Supplementary Figure S3 and Methods). Thanks to previous work in the generation of the many miniMos strains (Frøkjaer-Jensen et al. 2014), a single crRNA can be used on the tdTomato present in single copy in 147 different loci from strains that are available in the CGC, providing high flexibility in designing transgenic animals and downstream experiments.

Table 2. The effect of temperature on FLInt-mediated integration. EG7944 animals carrying oxTi553 were injected with myo-2p::mCherry and DNA ladder and followed for integration.

Fig. 2. Transgene copy number for 9 different transgenes. Different relative concentrations of myo-2p::mCherry were injected together with the CRISPR mix and a target plasmid (ratio, left bars) and integrated into the same tdTomato locus (oxTi). The resulting homozygous transgene copy numbers were quantified by qPCR (right bars). The middle plot shows the pharynx fluorescence and the inset shows the integrated copy number as a function of the injected plasmid ratio.

GFP as an alternative FLInt landing site

Often, the choice of the co-injection marker is guided by the TOI. Thus, the tdTomato sites are incompatible if the TOI already contains a tdTomato fluorophore. Likewise, if the transgene encodes a nuclear-localized mCherry, downstream analysis can be confusing. With the aim of posing an alternative for those cases, we approached the single-copy GFP marker strains described in Frøkjaer-Jensen et al. (2014) to assess if they could serve as a convenient alternative. We designed a pair of crRNAs to disrupt the GFP ORF (Supplementary Figure S4b), generating a deletion, and verified that this event led both to a potent loss of GFP expression and to the generation of integrants (Supplementary Figure S4b). We then compared the GFP protospacer against the tdTomato protospacer by injecting a CRISPR mix that contained both crRNAs into a transgenic animal that had both landing sites. In the screening, we found around 34% nonred/nongreen animals in the F1 and similar frequencies of animals with either loss of green or loss of red (Supplementary Figure S4b,c), suggesting that both crRNAs have comparable cutting efficiencies at their cognate loci. Together, these results demonstrate that the single-copy GFP loci serve as good alternative targets for FLInt. However, due to abundant gut autofluorescence and the generally weaker fluorescent signal, transgene screening is more difficult than in the tdTomato strains.

The efficiency of transgene integration varies with chromosomal position

Having shown that the single-copy tdTomato loci can be used as FLInt landing sites, we wondered if the entire zoo of 147 possible landing sites would accept exogenously delivered transgenes with similar efficiencies.
We thus selected a random set of landing sites distributed over all linkage groups, tested the integration potential of myo-2p::mCherry into 6 different background strains, one on each chromosome, and calculated the integration efficiency from a standardized experiment (same injection mix, different landing sites, see Methods). We found that the most successful and efficient were on chromosomes I and II (oxTi556 I, 6.49%; oxTi564 II, 6.46%), followed by the landing sites on chromosomes X and III (oxTi668 X, oxTi619 III), 6.2% and 5.8%, respectively (Fig. 3a, Supplementary Table S1). These results showcase that, even when integration is possible on all linkage groups, it may be more probable in some than in others, for reasons that are external to the transgene but intrinsic to the landing site of the TOI. In addition to being on different chromosomes, landing sites are located at diverse genetic positions within each linkage group (LG1: 1.23 and 22.30, LG2: −0.38 and −12.17, LG3: 1.23 and 11.8, LG4: 0.09 and −26.93, LG5: 0.29, and LGX: 1.29 and −4.88). In general, we found that the tdTomato landing sites in the center of the chromosome have higher efficiency compared with the more distal tdTomato landing sites (Fig. 3b). (From the Fig. 3 caption: c: Fluorescence intensity for the single-copy tdTomato transgenes at the indicated sites. d: Plot of the tdTomato fluorescence intensity vs chromosomal position; see also Supplementary Table S1.) This higher tendency for integrations at central sites also reflects the fact that more tdTomato landing sites are found closer to the center in the genetic map of each chromosome (compare Supplementary Figure S4b in Frøkjaer-Jensen et al. (2014)). In our hands, only one landing site (LG4: −26.93) did not accept any transgene after many trials, even though cutting efficiency was comparable to other loci. One explanation for the difference in integration efficiency at the chromosome arms may be the accessibility of the cellular machinery to the locus of interest. It has been previously described that C. elegans chromosome arms are regions rich in chromatin silencing marks (roughly 4 Mb from chromosome ends; Liu et al. 2011), anchored to the nuclear envelope as part of the peripheral heterochromatin, and with reduced accessibility for the DNA repair machinery (Ikegami et al. 2010). Along the same lines, the safe-harbor insertion sites described in the recently published MosTI work (El Mouridi et al. 2022) are also located in the permissive chromatin environment provided by the chromosome centers (Ho et al. 2014). Even though we developed the integration procedure with a standardized injection mix to ensure that the results are independent of the DNA content, we also noted that the integration efficiency was superficially unchanged with the different target plasmids we used (see Supplementary Table S3). When the linkage group does not matter, we would recommend getting started with LG5, for which we obtained up to 23% integration efficiency in one of our trials. We also observed a high efficiency of integration (6.25% on oxTi553) with the use of animals that constitutively express Cas9 in the germline (Schwartz et al. 2021), but here the Cas9 needs to be crossed into the different strains carrying the desired landing site before the injection. Lastly, we asked if the different locations of the insertion sites could possibly lead to differences in downstream transgene expression. Because the integrated DNA exists as a multicopy transgene with varying copy number (e.g. Fig.
2), we compared the fluorescence intensity of the original tdTomato loci (eft-3p::tdTomato::H2B) among the 6 linkage groups used in our experiments (EG7846 I, EG7860 II, EG7900 III, EG7905 IV, EG7944 V, and EG7985 X) and found that the tdTomato intensity from each strain is different (Fig. 3c) but uncorrelated with the genetic position of their insertion sites. It should be noted that in this work we only assessed a small data set, with the aim of gathering information that can help us infer the expression levels of our TOI at our candidate sites. However, a previous and exhaustive study of more than a hundred strains found that somatic transgene silencing is encountered with a higher probability at sites closer to the chromosome arms than in the center (Frøkjaer-Jensen et al. 2016). Together, even though transgene integration efficiency at different loci varied, we concluded that transgene expression was unaffected (Fig. 3d) for the loci tested. Together, our experiments revealed that the integration efficiency varies among the tdTomato insertion sites. The landing sites closer to the center appear to have higher efficiencies compared with the chromosome arms. Thus, in order to obtain the optimal integration efficiency, we suggest using target loci closer to the center and in intergenic regions.

Integrating existing extrachromosomal arrays into fluorescent safe harbor loci

Lastly, we were interested in the targeted integration of existing extrachromosomal arrays into the tdTomato site without the use of mutagens such as UV or TMP, which cause pleiotropic DNA defects and require subsequent outcrosses. To do so, we first crossed the target strain bearing the extrachromosomal array with the desired tdTomato marker strain, which we then injected with the tdTomato CRISPR ingredients. We introduced, though, a slight variation following the observations in Yoshina et al. (2016), in which a correlation between integration frequency and fragmentation of the extrachromosomal array was observed. This variation consisted of adding a crRNA that targets the Ampicillin resistance gene (which is present on the integrated vector plasmid), thus cutting the array into several pieces. Using the standard screening procedure (loss of NLS::tdTomato), we were able to recover 1 integrated line from 28 P0 (19 nonred F1) within 2-3 generations. The difference in the need for DNA cleavage between existing and de novo arrays probably lies in the fact that during the formation of the array, it already undergoes cleavage and assembly processes (Mello et al. 1991) that allow integration in one step. However, preexisting arrays are circular chromosomes (Woglar et al. 2020) and thus have no free ends. Hence, for NHEJ with the chromosomal cut site, the existing array was linearized in situ with a co-injected gRNA targeting the Ampicillin resistance gene. With the present method, we demonstrated that a previously generated extrachromosomal array can be integrated into the tdTomato cleavage site without the drawbacks of random mutagenesis.

Cas9-mediated disruption of tdTomato serves as a Co-CRISPR marker

A common bottleneck in the generation of CRISPR mutants is the efficient identification of successful gene edits. Without visible markers, PCR-based genotyping remains the ultimate option: a lengthy, tedious and potentially expensive process. Often, CRISPR-mediated genome editing in C. elegans is guided by a phenotypic conversion of an easily screenable co-CRISPR marker (Kim et al. 2014) that is eliminated after successful edits are isolated.
In a successful edit, the mutated co-CRISPR locus results in an obvious phenotype which can be easily screened and distinguished from wild-type animals that were not edited. The marker phenotype thus provides a visual representation of CRISPR efficiency and potentially reduces the number of progeny that eventually need to be sequenced to identify the desired edit. A large number of co-CRISPR-marked progeny is indicative of putative edits at the gene of interest (GOI), always depending on the efficiency of the crRNA used for that locus. In C. elegans, many co-CRISPR genes have been proposed. Among those, pha-1, unc-22, sqt-1, unc-58, ben-1, zen-4, and dpy-10 (Arribere et al. 2014; Kim et al. 2014; Ward 2014; Dickinson et al. 2015; El Mouridi et al. 2017) are popular, but they may be problematic if their associated phenotypes interfere with the GOI or are close to the target locus. Segregating alleles of genes that are in close proximity (e.g. dpy-10 from other LGII genes) becomes problematic, since it depends on the genetic distance between the 2 genes. Likewise, co-CRISPR methods can result in subtle mutations at the co-CRISPR locus that are not phenotypically associated with it and that can be confounded with the edit at the GOI (Rawsthorne-Manning et al. 2022). We have already shown that the efficiency of the tdTomato crRNA in inducing Cas9-mediated double-strand breaks is comparable, if not superior, to that of the widely used dpy-10 crRNA (Table 1). Thus, a tdTomato locus could pose an attractive alternative co-CRISPR locus, as its conversion does not result in any morphological or locomotion phenotype and is "silent." In addition, this could be beneficial when the co-CRISPR marker needs to be combined with a sublethal edit in essential genes that could lead to a synthetic lethal phenotype (e.g. when combined with a dpy-10 or pha-1 co-CRISPR). Moreover, some phenotypic conversions (to a roller or a paralyzed animal) often preclude other phenotypic effects or can, in the worst cases, have a synthetic adverse effect with the desired gene modification. We specifically ran into that problem when we designed a CRISPR edit for the GPCR lat-1, located physically close to dpy-10. We thus inserted a loxP::ΔmCherry site at the lat-1 3′ end and used the tdTomato in oxTi390 IV as a co-CRISPR marker. After injection into P0 animals, the jackpot plates contained nonred worms (Fig. 4a) as well as some dimmer/mosaic red F1 progeny, which we interpreted as excised in only one chromosome (see above). We only selected the nonred F1 for PCR screening of the loxP::ΔmCherry insertion at the lat-1 locus (Fig. 4b) and successfully identified several candidates (edit efficiency = 28.38 ± 13.25%, n = 7). Together, this result demonstrates that tdTomato can be used as a co-CRISPR locus that can be easily screened without causing a confounding phenotype, and it can be chosen such that it is not linked to the gene of interest and thus can be efficiently eliminated by outcrossing. Compared with dpy-10(cn64), in which potentially successful edits can be identified as heterozygous repairs in dpy-10 and the GOI can be segregated from the dpy-10 locus, the excised tdTomato site is identified as homozygous. If elimination of the remains of tdTomato from the background is desired, the only possibility is outcrossing. However, the use of FLInt as a co-CRISPR marker may involve more possibilities than its use for integration.
Because only excision, and not repair with the exogenous array, is needed to successfully identify the candidates, the number of strains that can be used increases compared with integration, where we also need to account for a higher probability of NHEJ repair with the array. In addition, there is the possibility of using a tdTomato close to the GOI if the edit is difficult to screen in subsequent steps (e.g. a point mutation without a visible phenotype in crosses). In those cases, PCR for the remaining tdTomato could be used in the screening process. An important issue to consider is the choice of the tdTomato strain to use. Even though some of the 147 tdTomato target sites are mapped to genes, they do not result in visible phenotypes at 20 °C. However, whenever possible, intergenic safe harbor sites should be used before starting an integration to avoid possible synthetic effects in downstream analyses.

Future extensions of FLInt: single nucleotide conversion of GFP to BFP as a marker for HR-directed repair

Dominant co-CRISPR markers, such as the widely employed dpy-10(cn64), have the advantage that homology-directed repair can be visualized and distinguished from non-homologous end joining repair directly in the F1 (Arribere et al. 2014). The use of tdTomato as a co-CRISPR marker, however, does not allow for such a distinction in repair. To generate a co-CRISPR alternative for those cases, we propose the possibility of changing the emission spectrum of GFP from green to blue through a single nucleotide change (P4, Fig. 4c) (Heim et al. 1994). Towards this goal, we designed a crRNA that cleaves the GFP sequence at the presumptive chromophore region together with an HR template that introduces the single point mutation converting the tyrosine at position 66 to a histidine. This genetic intervention switched the green emission spectrum of GFP (508 nm) into a blue emission spectrum (448 nm). This simple modification can be made visible on a standard fluorescence microscope with an appropriate filter set (Fig. 4c (i, ii, iii)). After confirming that the crRNA efficiently cleaved the GFP sequence and led to a loss of GFP fluorescence in F1 animals, we added the HR template for the conversion and the corresponding mix for the GOI. We selected those F1 animals that showed a loss of GFP and the emergence of blue fluorescence (Fig. 4d), which we used to screen for the edit at the GOI. However, because the P4/BFP fluorescence is rather weak in heterozygous animals and might be difficult to see on a standard epifluorescence stereomicroscope, similar strategies might provide larger contrast and easier screening. For example, the opposite conversion (from P4/BFP to GFP) yielded bright green fluorescence. Alternatively, a set of residues centered around the tdTomato chromophore could potentially be mutated and the bright red signal turned into a bright green one (Wiens et al. 2016).

Fig. 4. FLInt as co-CRISPR marker. a: Representative images of a cohort of co-CRISPR'ed animals showing animals with the P0 phenotype and candidate F1. b: Screening PCR for lat-1::loxP. c: GFP to P4/BFP conversion as a homology-directed CRISPR marker. i) PAM sequence highlighted in bold, vertical line indicates Cas9 cut site. ii) A single amino acid change in the GFP protein switches the absorption and emission wavelength to the blue. iii) Change in the emission spectrum from GFP to P4/BFP. X-axis indicates emission wavelength. d: Representative images of the co-converted GFP locus imaged for the GFP and BFP filter sets.
Together, these improvements might facilitate the use of GFP as a co-CRISPR marker, also in cases where the GOI is closely linked to the traditional co-CRISPR locus and thus phenotypically interferes with the edits and/or cannot easily be unlinked through genetic breeding.

Conclusion

In summary, we leveraged a library consisting of 147 marker strains that carry a single copy of a histone-tagged tdTomato and 142 strains with a nuclear-localized GFP (Frøkjaer-Jensen et al. 2014) as safe harbor landing sites for ectopic transgene integration. Importantly, all of these integrations reside at different locations on the 6 chromosomes of C. elegans, providing previously unmatched flexibility in the genetic design and follow-up experiments. We demonstrated that these synthetic landing sites, encoding a ubiquitously expressed fluorescent protein, aided the identification of successful edits during transgene integration with several significant advantages: first, the integration site is known and precisely mapped; second, screening is facilitated through interference with the bright fluorescent signal indicating successful integration; third, a single crRNA can be used for all tdTomato landing sites; and, finally, because the intergenic landing site is known, the transgene integration does not cause any inadvertent phenotypes or defects. Together, these improvements in single-shot transgenesis greatly reduce the time needed to screen for stable mutants, are flexible and cost-effective, and have the potential to greatly accelerate research in C. elegans. In principle, this method can be extended to other invertebrate, vertebrate and mammalian model systems in which a single-copy fluorescent gene is available as a gene-editing target site.

Data availability

All strains and plasmids generated during this study are available upon request to the corresponding author. Strains harboring the safe landing site, including MSB1247, are available through the CGC and their information is accessible on wormbuilder.org. Supplementary material is available at G3 online.
Plankton: Scalable network configuration verification through model checking

Network configuration verification enables operators to ensure that the network will behave as intended, prior to deployment of their configurations. Although techniques ranging from graph algorithms to SMT solvers have been proposed, scalable configuration verification with sufficient protocol support continues to be a challenge. In this paper, we show that by combining equivalence partitioning with explicit-state model checking, network configuration verification can be scaled significantly better than the state of the art, while still supporting a rich set of protocol features. We propose Plankton, which uses symbolic partitioning to manage large header spaces and efficient model checking to exhaustively explore protocol behavior. Thanks to a highly effective suite of optimizations including state hashing, partial order reduction, and policy-based pruning, Plankton successfully verifies policies in industrial-scale networks quickly and compactly, at times reaching a 10,000× speedup compared to the state of the art.

Introduction

Ensuring correctness of networks is a difficult and critical task. A growing number of network verification tools are targeted towards automating this process as much as possible, thereby reducing the burden on the network operator. Verification platforms have improved steadily in recent years, both in terms of scope and scale. Starting from offline data plane verification tools like Anteater [19] and HSA [13], the state of the art has evolved to support real-time data plane verification [15,12], and more recently, analysis of configurations [6,5,7,1,25]. Configuration analysis tools such as Batfish [6], ERA [5], ARC [7] and Minesweeper [1] are designed to take as input a given network configuration, a correctness specification, and possibly an "environment" specification, such as the maximum expected number of failures, external route advertisements, etc. Their task is to determine whether, under the given environment specification, the network configuration will always meet the correctness specification. As with most formal verification domains, the biggest challenge in configuration analysis is scalability. Being able to analyze the behavior of multiple protocols executing together is a nontrivial task. Past verifiers have used a variety of techniques to try to surmount this scalability challenge. While many of them sacrifice their correctness or expressiveness in the process, Minesweeper maintains both by modeling the network using SMT constraints and performing the verification using an SMT solver. However, we observe that this approach scales poorly with increasing problem size (4+ hours to check a 245-device network for loops, in our experiments). So, this paper addresses the following question: Can a configuration verification tool have broad protocol support, and also scale well? We begin our work by observing that scalability challenges in configuration verification stem from two factors: the large space of possible packet headers, and the possibly diverse outcomes of control plane execution, particularly in the presence of failures. We believe that general-purpose SAT/SMT techniques are not as well equipped to tackle these challenges as domain-specific techniques specifically designed to address them. In fact, these challenges have been studied extensively, in the domains of data plane verification and software verification.
Data plane verification tools analyze the large header space to determine all possible data plane behaviors and check their correctness. Software verification techniques explore the execution paths of software, including distributed software, and identify undesirable executions that often elude testing. Motivated by the success of the analysis algorithms in these domains, we attempted to combine the two into a scalable configuration verification platform. The result -Plankton -is a configuration verifier that uses equivalence partitioning to manage the large header space, and explicit-state model checking to explore protocol execution. Thanks to these efficient analysis techniques, and an extensive suite of domain-specific optimizations, Plankton delivers consistently high verification performance. Our key contributions are as follows: • We define a configuration verification paradigm that combines packet equivalence computation and explicit-state model checking. • We develop optimizations that make the design feasible for networks of practical scale, including optimizations to reduce the space of event exploration, and techniques to improve efficiency of exploration. • We implement a Plankton prototype with support for OSPF, BGP and static routing, and show experimentally that it can verify policies at practical scale (less than a second for networks with over 300 devices). Plankton outperforms the state of the art in all our tests, in some cases by as much as 4 orders of magnitude. Motivation and Design Principles Configuration verifiers take as input the configuration files for network devices, and combine them with an abstraction of the lower layers -the distributed control plane, which executes to produce data plane state, and the forwarding hardware that will use that state to process traffic. They may additionally take an environment specification, which describes interactions of entities external to the network, such as possible link failures, route advertisements, etc. Their task is to verify that under executions enabled by the supplied configuration and environment, correctness requirements are never violated. Configuration verifiers thus help ensure correctness of proposed configurations prior to deployment. Figure 1 illustrates the current state of the art in configuration verification. As the figure shows, only Minesweeper [1] can reason about multiple possible converged data plane states of the network (e.g., due to topology changes or control plane non-determinism), while also having the ability to support more than just a specific protocol, and maintaining soundness of analysis. All other tools compromise on one or more key features. ARC [7], for example, uses graph algorithms to compute the multiple converged states enabled by failures, but only for shortest-path routing. As a result it cannot handle common network configurations such as BGP configurations that use LocalPref, any form of recursive routing, etc. The reason for the mismatch in Minesweeper's functionality in contrast to others is that it makes a different compromise. By using an SMT-based formulation, Minesweeper is able to achieve good data plane coverage and feature coverage, but pays the price in performance. As experiments show [2], Minesweeper scales poorly with network size, and is unable to handle networks larger than a few hundred devices in reasonable time. Our motivation for Plankton is simple -can we design a configuration verification tool without compromising scale or functionality? 
Achieving our goal requires tackling two challenges: packet diversity and data plane diversity. Packet diversity, which refers to the large space of packets that needs to be checked, is easier to handle. We leverage the notion of Packet Equivalence Classes (PECs), which are sets of packets that behave identically. Using a trie-based technique similar to VeriFlow [15], we compute PECs as a partitioning of the packet header space such that the behavior of all packets in a PEC remains the same throughout Plankton's exploration. A more interesting aspect of PECs is how to handle dependencies across PECs without compromising performance. In Plankton, this is done by a dependency-aware scheduler designed to maximize independent analysis ( § 3.2). Data plane diversity refers to the complexity of checking every possible converged data plane that an input network configuration may produce. It is the task of the control plane model to ensure coverage of these possible outcomes. Simulation-based tools (the best example being Batfish [6]) execute the system only along a single non-deterministic path, and can hence miss violations in networks that have multiple stable convergences, such as certain BGP configurations. ARC's graph-based approach accounts for possible failures, but can support only shortest-path routing. In order to overcome these shortcomings, Minesweeper, the current state of the art in terms of functionality, uses an SMT solver to search through possible failures and non-deterministic protocol convergence, to find any converged state that represents a violation of network correctness. A key intuition behind our approach is that the generic search technique employed by SMT solvers makes the core of the configuration verification problem much more difficult than it has to be. Network control planes operate using simple algorithms which can not only be easily modeled in software, but can also find a protocol's outcome much more quickly than general-purpose SMT solving. In fact, the common case is that the control plane computes some variant of shortest or least-cost paths. To illustrate this point, we implemented simple single-source shortest path solvers in SMT (Z3) and a model checker (SPIN). The SMT formulation is implemented as constraints on the solution, while the model checker explores execution paths of the Bellman-Ford algorithm; and in this simplistic case the software has deterministic execution. The result is that the model checker approach is similar to a normal execution of software, and is around 12,000× faster even in a moderate-sized fat tree network of N = 180 nodes ( Figure 2). Of course, this is intentionally a simplistic, fully- deterministic case with simple implementations. The point is that the model checking approach begins with a huge advantage -so huge that it could explore many non-deterministic execution paths and still outperform SMT. This leads to our next key intuition: the effect of non-determinism is important, but the amount of "relevant" non-determinism is limited. Networks can experience "relevant" non-determinism like the choice of what failures occur, and protocol execution; as well as "irrelevant" non-determinism like message timing that ultimately doesn't affect the outcome. Configurations and protocols are usually designed to keep the outcome mostly deterministic, with non-deterministic branch points ultimately leading to one or a small number of different converged states. 
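To make the contrast above concrete, the following is a minimal sketch (our own illustration, not the benchmark code behind Figure 2) of the kind of deterministic shortest-path computation that a model checker can simply execute to a fixed point, rather than encode as SMT constraints:

```python
# Minimal Bellman-Ford style single-source shortest paths: the deterministic
# control-plane computation that an explicit-state model checker can simply
# run step by step. (Illustrative only; not the code used for Figure 2.)
def shortest_paths(nodes, links, source):
    # links: dict mapping directed edge (u, v) -> cost
    INF = float("inf")
    dist = {n: INF for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):           # at most |V| - 1 relaxation rounds
        changed = False
        for (u, v), cost in links.items():
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                changed = True
        if not changed:
            break                              # converged: a unique fixed point
    return dist
```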
Motivated by these intuitions, we create a control plane model that incorporates the possible non-deterministic behaviors, but we also implement optimizations so that when the outcome of the executions is actually deterministic, the "irrelevant" non-determinism is pruned enough that performance is comparable to simulation. This model is exhaustively explored by SPIN, a model checker designed for Promela programs. SPIN performs a depth-first search on the state space of the program, looking for states that violate the policy being checked. We further assist SPIN through optimizations that minimize the size of individual states, thus making the traversal process more efficient. Thanks to these two types of optimizations, Plankton achieves our goal of scalable, general-purpose configuration verification.

Plankton Design

We now present Plankton's design, illustrated in Figure 3.

Packet Equivalence Classes

The first phase in Plankton's analysis is the computation of Packet Equivalence Classes. As we discussed in § 2, Plankton uses a trie-based technique inspired by VeriFlow. The trie in Plankton holds prefixes obtained from the configuration, including any prefixes that are advertised (explicitly or automatically), any prefixes appearing in route maps, any static routes, etc. Each prefix in the trie is associated with a config object that describes any configuration information that is specific to that prefix. For example, consider Figure 4, which illustrates a highly simplified example where the prefixes 128.0.0.0/1 and 192.0.0.0/2 are being advertised over OSPF in a topology with 3 devices. The trie holds three config objects: the default, and one for each advertised prefix. Once the trie is populated, Plankton performs a recursive traversal, simultaneously keeping track of where the prefix boundaries define divisions of the header space. For each known partition, it stores the most up-to-date network-wide config known. When the end of any new prefix is reached, the config object that is associated with it is merged with the network-wide config for the partition that it covers. In the example, this traversal yields the PECs [128.0.0.0, 191.255.255.255] and [192.0.0.0, 255.255.255.255], each with the merged configuration of the prefixes covering it, and [0.0.0.0, 127.255.255.255] without any node originating any prefix. As the example shows, each PEC-specific configuration computed this way will still include information about the original prefixes contributing to the PEC. Storing these prefixes may seem redundant. However, note that even within a PEC, the lengths of the prefixes that get advertised or get matched in route filters play a role in decision making.
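The following is a hedged sketch of this partitioning step (our own simplification: it sweeps sorted prefix boundaries rather than recursing over a trie, but produces the same partitions for the example above; it is not Plankton's actual data structure):

```python
# Illustrative PEC computation: every configured prefix contributes its start
# and end as partition boundaries; each resulting partition gets the merged
# configs of all prefixes covering it, and remembers those prefixes.
from ipaddress import ip_network

def compute_pecs(prefix_configs):
    # prefix_configs: dict mapping prefix string (e.g. "128.0.0.0/1") -> config dict
    nets = {p: ip_network(p) for p in prefix_configs}
    cuts = {0, 2**32}
    for net in nets.values():
        cuts.add(int(net.network_address))
        cuts.add(int(net.broadcast_address) + 1)
    bounds = sorted(cuts)
    pecs = []
    for lo, hi in zip(bounds, bounds[1:]):
        covering = [p for p, net in nets.items()
                    if int(net.network_address) <= lo
                    and hi - 1 <= int(net.broadcast_address)]
        merged = {"prefixes": covering}   # prefix lengths still matter within a PEC
        for p in covering:
            merged.update(prefix_configs[p])
        pecs.append(((lo, hi - 1), merged))
    return pecs

# For the example above, compute_pecs({"128.0.0.0/1": cfg1, "192.0.0.0/2": cfg2})
# yields three PECs: [0.0.0.0, 127.255.255.255], [128.0.0.0, 191.255.255.255],
# and [192.0.0.0, 255.255.255.255].
```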
Dependency-aware Scheduling

It is tempting to believe that Packet Equivalence Classes could be analyzed fully independently of each other, and that an embarrassingly parallel scheme could be used in the verification process. While this is indeed true sometimes, there can often be dependencies between various PECs. For example, consider a network that is running iBGP. For the various peering nodes to be able to communicate with each other, an underlying routing protocol such as OSPF should first establish a data plane state that forwards packets destined to the devices involved in BGP. In such a network, the manner in which OSPF determines the forwarding behavior for the device addresses will influence the routing decisions made in BGP. In other words, the PECs that are handled by BGP depend on the PECs handled by OSPF. In the past, configuration verification tools have either ignored such cases altogether, or, in the case of Minesweeper, modeled these classes simultaneously. Specifically, for a network with n routers running iBGP, Minesweeper creates n + 1 copies of the network, searching for the converged solution for the n loopback addresses and also BGP. Effectively, this turns the verification problem into one quadratically larger than the original. Given that configuration verification scales significantly worse than linearly in input size, such a quadratic increase in input size often makes the problem intractable. Plankton goes for a more surgical approach. Once the PECs are calculated, Plankton identifies dependencies between the Packet Equivalence Classes, based on recursive routing entries, BGP sessions, etc. The dependencies are stored as a directed graph, whose nodes are the PECs, and whose directed edges indicate which PECs depend on which others. In order to maximize parallelism in verification runs across PECs, a dependency-aware scheduler first identifies strongly connected components in the graph. These SCCs represent groups of PECs that are mutually dependent, and hence need to be analyzed simultaneously through a single verification run. In addition, if an SCC is reachable from another, it indicates that the upstream SCC can be scheduled for analysis only after the one downstream has finished. Each verification run is a separate process. For an SCC S, if there is another SCC S′ that depends on it, Plankton forces all possible outcomes of S to be written to an in-memory filesystem (S will always get scheduled first). Outcomes refer to every possible converged state for S, together with the non-deterministic choices made in the process of arriving at them. When the verification of S′ gets scheduled, it reads these converged states, and uses them when necessary. Minesweeper's technique of replicating the network roughly corresponds to the case where all PECs fall into a single strongly connected component. We expect this to almost never be the case. In fact, in typical cases, the SCCs are likely to be of size 1, meaning that every PEC can be analyzed in isolation, with ordering of the runs being the only constraint (an example where the SCCs are bigger than one PEC is the contrived case where there exists a static route for destination IP A whose next hop is IP B, but another static route for destination IP B whose next hop is IP A). For example, Figure 5 illustrates the dependency graph for a typical case where two PECs are being handled in iBGP on a network with 4 different routers. The only constraint in this case is that the loopback PECs should be analyzed before the iBGP PECs can start. In such cases, Plankton keeps the problem size significantly smaller, and maximizes the degree of parallelism that can be achieved. When a PEC needs the relevant converged states of past PECs for its exploration, the non-deterministic choices may need to be coordinated across all these PECs. In particular, consider the choice of link failures: if we hypothetically executed one PEC assuming link L has failed and another PEC assuming L is working, the result represents an invalid execution of the overall network. Therefore, our current prototype matches topology changes across explorations. A second class of non-deterministic choices is protocol non-determinism. In our experiments, we have not seen cases of protocol non-determinism that require matching across PECs. OSPF by its nature has deterministic outcomes, but on networks which have non-determinism in their internal routing (e.g., non-deterministically configured BGP for internal routing) and where message timing is correlated across PECs (e.g., via route aggregation), the system would need to coordinate this non-determinism to avoid false positives.
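A minimal sketch of this scheduling idea (our own code using networkx; Plankton's actual scheduler additionally runs each SCC's verification as a separate process and can run independent SCCs in parallel):

```python
# Illustrative dependency-aware scheduling: condense the PEC dependency graph
# into SCCs and verify SCCs in reverse topological order, so that each SCC is
# analyzed only after everything it depends on has finished.
import networkx as nx

def schedule_pecs(pecs, depends_on):
    # depends_on: iterable of (pec, pec_it_depends_on) edges
    g = nx.DiGraph()
    g.add_nodes_from(pecs)
    g.add_edges_from(depends_on)
    cond = nx.condensation(g)                         # DAG whose nodes are SCCs
    order = reversed(list(nx.topological_sort(cond)))  # dependencies first
    return [cond.nodes[c]["members"] for c in order]   # one verification run per SCC

# Typical iBGP case (as in Figure 5): loopback PECs first, then the BGP PEC.
batches = schedule_pecs(
    pecs=["lo:r1", "lo:r2", "bgp:10.0.0.0/8"],
    depends_on=[("bgp:10.0.0.0/8", "lo:r1"), ("bgp:10.0.0.0/8", "lo:r2")],
)
# batches -> the two loopback SCCs (in either order), then {"bgp:10.0.0.0/8"};
# all SCCs here have size 1, so each PEC gets its own run.
```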
Explicit-state Model Checker

The explicit-state model checker SPIN [9] provides Plankton with its exhaustive exploration ability. SPIN verifies models written in the Promela modeling language, which has constructs to describe possible non-deterministic behavior. SPIN's exploration is essentially a depth-first search over the possible states of the supplied model. Points in the execution where non-deterministic choices are made represent branching in the state space graph. Plankton's network model is essentially an implementation of the control plane in Promela. Our current implementation supports OSPF, BGP and static routing. Recall from § 3.1 that Plankton partitions the header space into Packet Equivalence Classes. For each SCC, Plankton uses SPIN to exhaustively explore control plane behavior. In order to improve scalability, Plankton also performs another round of partitioning by executing the control plane for each prefix in the PEC separately. This separation of prefixes is helpful in simplifying the protocol model. However, it does limit Plankton's support for route aggregation. While features such as a change in the routing metric can be supported, if there is a route map that performs an exact match on the aggregated prefix, it will not apply to the more specific routes, which Plankton models. Once the converged states of all relevant prefixes are computed, a model of the FIB combines the results from the various prefixes and protocols into a single network-wide data plane for the PEC. In what follows, we present Plankton's network model that will be executed by SPIN. We will initially present a simple, unoptimized model, which is functionally correct but has significant non-determinism that is irrelevant to finding different converged states. Subsequently, in § 4, we discuss how Plankton attempts to minimize irrelevant non-determinism, making the execution of the deterministic fragments of the control plane comparable to simulation.

Abstract Protocol Model

To define a control plane suitable for modeling real-world protocols such as BGP and OSPF, we look to the technique used by Minesweeper wherein the protocols were modeled as instances of the stable paths problem. Along similar lines, we consider the Simple Path Vector Protocol [8], which was originally proposed to solve the stable paths problem. We first extend SPVP to support some additional features that we wish to model. Based on this, we construct a protocol we call the Reduced Path Vector Protocol, which we show to be sufficient to correctly perform model checking, if we are interested only in the converged states of the network. We use RPVP as the common control plane protocol for Plankton. We begin with a brief overview of SPVP, highlighting our extensions to the protocol. Appendix A contains the full details of the protocol and our extensions.

SPVP and its extension

SPVP is an abstract model of real-world BGP, replacing the details of BGP configurations with abstract notions of import/export filters and ranking functions. For each node n and peer n′, the import and export filters dictate which advertisements (i.e. the advertiser's best path to the origin) n can receive from and send to n′, respectively. The ranking function for n dictates the preference among all received advertisements. These notions can be inferred from real-world configurations.
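As a concrete illustration of these three ingredients (a toy encoding of our own, not the formalism of [8] or Plankton's Promela model), the per-node abstractions can be written as three functions; the RPVP sketch later in this section reuses callables of this shape:

```python
# Toy encoding of the SPVP notions described above. A path is a tuple of node
# names leading towards an origin; () is the origin's own (empty) path.

def export(advertiser, receiver, path):
    # What `advertiser` is willing to announce to `receiver`; None suppresses it.
    return None if len(path) > 8 else path            # e.g. drop overly long paths

def import_(receiver, advertiser, path):
    # What `receiver` is willing to accept from `advertiser`; None rejects it.
    return None if receiver in path else path          # e.g. basic loop avoidance

def better(node, p, q):
    # Ranking of `node`: is path p strictly preferred over path q? (None = no path)
    if q is None:
        return p is not None
    return p is not None and len(p) < len(q)           # e.g. prefer shorter paths
```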
We slightly extend the original SPVP [8] to support more features of BGP. The extensions are as follows: we allow for multiple origins instead of a single one; the ranking function can be a partial order instead of a total one, to allow for time-based tie breaking; and, to be able to model iBGP, we allow the ranking function of any node to change at any time during the execution of the protocol. It is well known that there are configurations which can make SPVP diverge in some or all execution paths. However, our goal is only to check the forwarding behavior in the converged states, through explicit-state model checking. So, we define a much simpler model that can be used, without compromising the soundness or completeness of the analysis (compared to SPVP).

Reduced Path Vector Protocol (RPVP)

We now describe RPVP, which is specifically designed for explicit-state model checking of the converged states of the extended SPVP protocol. In RPVP, the message passing model of SPVP is replaced with a shared memory model. The network state only consists of the values of the best known path of each node at each moment (best-path). In the initial state, the best path of all nodes is ⊥, except origins, whose best path is ε. At each step, the set of all enabled nodes (E) is determined (Algorithm 1, line 5). A node n is considered enabled if either i) the current best path p of n is invalid, meaning that the next hop in p has a best path that is not a prefix of p:

invalid(n) ⟺ best-path(best-path(n).head) ≠ best-path(n).rest

or ii) there is a node n′ among the peers of n that can produce an advertisement which will change the current best path of n. In other words, n′ has a path better than the current best path of n, and the path is acceptable according to the export and import policies of n′ and n respectively:

can-update_n(n′) ⟺ better_n(import_{n,n′}(export_{n′,n}(best-path(n′))), best-path(n))

where better_n(p, p′) is true when path p is preferred over p′ according to the ranking function of n. At any step of the execution, if there is no enabled node, RPVP has reached a converged state. Otherwise a node n is non-deterministically picked from the enabled set (line 9). If the current best path of n is invalid, the best path is set to ⊥. Among all peers of n that can produce advertisements that can update the best path of n, the neighbors that produce the highest-ranking advertisements are selected (line 13). Note that in our model we allow multiple paths to have the same rank, so there may be more than one element in the set U. Among the updates, one peer n′ is non-deterministically selected and the best path of n is updated according to the advertisement of n′ (lines 14-16). By the end of line 16, an iteration of RPVP is finished. Note that there are no explicit advertisements propagated; instead nodes are polled for the advertisement that they would generate based on their current best path when needed. The protocol terminates once a converged state for the target equivalence class is reached. RPVP does not define the semantics of failure or any change to the ranking functions. Any topology changes to be verified are made before the protocol starts its execution, and the latest versions of the ranking functions are considered.
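Before turning to soundness and completeness, here is a compact sketch of one RPVP step in Python rather than Promela (our own simplified data structures, not the actual model: it reuses export/import/better callables of the shape sketched earlier, with best_path[n] a tuple of nodes, () at origins, and None standing for ⊥):

```python
import random

def invalid(n, best_path):
    p = best_path[n]
    if p is None or p == ():
        return False
    return best_path.get(p[0]) != p[1:]       # next hop no longer carries the rest of p

def advert(src, dst, best_path, export, import_):
    # dst polls src for the advertisement src would currently produce
    p = best_path[src]
    if p is None:
        return None
    exported = export(src, dst, (src,) + p)
    return None if exported is None else import_(dst, src, exported)

def rpvp_step(nodes, peers, best_path, export, import_, better):
    def candidates(n):                         # adverts that would improve n's path
        return [a for m in peers[n]
                for a in [advert(m, n, best_path, export, import_)]
                if a is not None and better(n, a, best_path[n])]
    enabled = [n for n in nodes if invalid(n, best_path) or candidates(n)]
    if not enabled:
        return False                           # converged state reached
    n = random.choice(enabled)                 # the model checker explores every choice
    if invalid(n, best_path):
        best_path[n] = None                    # set to bottom
    adverts = candidates(n)
    if adverts:
        top = [a for a in adverts if not any(better(n, b, a) for b in adverts)]
        best_path[n] = random.choice(top)      # non-deterministic pick from the set U
    return True
```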
If we are only concerned about the converged states, RPVP is complete as well: Theorem 1. For any converged state reachable from the initial state of the network with a particular set of links L failing at certain steps during the execution of SPVP, there is an execution of RPVP with the same import/export filters and ranking functions equal to the latest version of the ranking functions in the execution of SPVP, which starts from the initial state in which all links in L have failed before the protocol execution starts, and reaches the same converged state. In particular, there is such an execution in which at each step, each node takes a best path that does not change during the rest of the execution. Proof. The proof can be found in the Appendix. Theorem 1 implies that performing model checking using the RPVP model is complete. Note that RPVP does not preserve all the transient states and the divergent behaviors of SPVP. This frees us from checking unnecessary states, as we are only interested in the converged states. Yet, even the reduced state space has a significant amount of irrelevant non-determinism. Consequently, we rely on a suite of other domain-specific optimizations (§ 4) to eliminate much of this non-determinism and make model checking practical. Note that our presentation of RPVP has assumed that a single best path is picked by each node. This matches our current implementation in that we do not support multipath in all protocols. In a special-case deviation from RPVP, our implementation allows a node running OSPF to maintain multiple best paths, chosen based on multiple neighbors. While we could extend our protocol abstraction to allow multiple best paths at each node, it wouldn't reflect the real-world behavior of BGP which, even when multipath is configured, makes routing decisions based on a single best path. However, such an extension is valid under the constrained filtering and ranking techniques of shortest path routing. Our theorems can be extended to incorporate multipath in such protocols. We omit that to preserve clarity. Policies We primarily target verification of data plane policies over converged states of the network. Similar to VeriFlow [15], we don't implement a special declarative language for policies; a policy is simply an arbitrary function computed over a data plane state and returning a Boolean value. Plankton implements a Policy API where a policy supplies a callback which will be invoked each time the model checker generates a converged state. Plankton gives the callback the newly computed converged data plane for a particular PEC, as well as the relevant converged states of any other PEC that the current PEC depends on. Plankton checks the callback's return value, and if the policy has failed, it writes a trail file describing the execution path taken to reach the particular converged state. Our API allows a policy to give additional information to help optimize Plankton's search: source nodes and interesting nodes. We define two converged data plane states for a PEC to be equivalent if their paths from the source nodes have the same length and the same interesting nodes are in the same position on the path. Plankton may suppress checking a converged state if an equivalent one (from the perspective of that policy) has already been checked (§ 4.2 and § 4.3 describe how Plankton does this). If source and interesting nodes are not supplied, Plankton by default assumes that all nodes might be sources and might be interesting.
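To make the callback interface concrete, the following is a hypothetical C++ sketch of a policy in this style; the type and member names are ours, not Plankton's actual API, and the instance shown anticipates the waypoint example discussed next.

// Hypothetical sketch of a Plankton-style policy callback (names are illustrative).
#include <map>
#include <set>
#include <vector>

using Node = int;
using DataPlane = std::map<Node, std::vector<Node>>;   // converged next hop(s) per node for one PEC

struct WaypointPolicy {
    std::set<Node> sources;      // the policy's source nodes S
    std::set<Node> waypoints;    // the policy's interesting nodes F
    // Callback invoked on each converged data plane; returns false on a violation.
    bool operator()(const DataPlane& fib) const {
        for (Node s : sources) {
            Node cur = s;
            bool hit = waypoints.count(cur) > 0;
            std::set<Node> seen;
            while (!hit) {                               // follow one forwarding path from s
                auto it = fib.find(cur);
                if (it == fib.end() || it->second.empty()) break;   // path ends before a waypoint
                cur = it->second.front();                // single path; a real check also branches on multipath
                if (!seen.insert(cur).second) break;     // loop guard
                hit = waypoints.count(cur) > 0;
            }
            if (!hit) return false;                      // a path from S missing all of F: violation
        }
        return true;
    }
};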
As an example, consider a waypoint policy: traffic from a set S of sources must pass through firewalls F . The policy specifies sources S, interesting nodes F , and the callback function steps through each path starting at S and fails when it finds a path that does not contain a member of F . As another example, a loop policy can't optimize as aggressively: it has to consider all sources. In general, this API enables any policy that is a function of a single PEC's converged data plane state. We have found it simple to add new policies, currently including: Reachability, Waypointing, Loop Freedom, BlackHole Freedom, Bounded Path Length and Multipath Consistency [1]. We highlight several classes of policies that fall outside this API: (i) Policies that inspect the converged control plane state, as opposed to the data plane: while not yet strictly in the clean API, this information is easily available and we implemented a representative example, Path Consistency ( §5), which asserts that the control plane state as well as the data plane paths for a set of devices should be identical in the converged state (similar to Local Equivalence in Minesweeper [1]). (ii) Policies that require multiple PECs, e.g., "packets to two destinations use the same firewall". This would be an easy extension, leveraging Plankton's PEC-dependency mechanism, but we have not performed a performance evaluation. (iii) Policies that inspect dynamic behavior, e.g., "no transient loops prior to convergence", are out of scope just as they are for all current configuration verification tools. Optimizations Although Plankton's RPVP-based control plane greatly reduces the state space, naive model checking is still not efficient enough to scale to large networks. We address this challenge through optimizations that fall into two major categories -reducing the search space of the model checker, and making the search more efficient. Partial Order Reduction A well-known optimization technique in explicit-state model checking, Partial Order Reduction (POR) involves exploring a set of events only in one order, if the various orderings will result in the same outcome. In general, precisely answering whether the order of execution affects the outcome can be as hard as model checking itself. Model checkers such as SPIN provide conservative heuristics to achieve some reduction. However, in our experiments, this feature did not yield any reduction. We believe this is because our model of the network has only a single process, and SPIN's heuristics are designed to apply only in a multiprocess environment. Even if we could restructure the model to use SPIN's heuristics, we do not expect significant reductions, as evidenced in past work [4]. Instead, we implement POR heuristics, based on our knowledge of the RPVP control plane † . Explore consistent executions only To describe this optimization, we first introduce the notion of a consistent execution: For a converged state S and a partial execution of RPVP π , we say that π is consistent with S iff at each step of the execution, a node picks a path that is equal to it its best path in S and never changes it. Readers may notice that Theorem 1 asserts the existence of a consistent execution leading to each converged state of the network, once any failures have happened. 
This implies that if the model checker were to explore only executions that are consistent with some converged state, completeness of the search would not be compromised (soundness is not violated since every such execution is a valid execution). Of course, when we start the exploration, we cannot know exactly which executions are consistent with some converged state and hence need to be checked. So, we conservatively assume that every execution we explore is relevant, and if we get evidence to the contrary (like a device having to change a selected best path), we stop exploring that execution. Deterministic nodes Even when there is the possibility of non-deterministic convergence, the "relevant" non-determinism is typically localized. In other words, after each non-deterministic choice, there is a significant amount of essentially deterministic behavior before the opportunity for another non-deterministic choice, if any, arises. (We consider it analogous to endgames in chess, but applicable at any point in the protocol execution.) However, this is obscured by "irrelevant" non-determinism - particularly, ordering between node executions that doesn't impact the converged state. Our goal is to prune the irrelevant non-determinism to reduce the search space for Plankton's model checker. For an enabled node n in state S with a single best update u, we say n is deterministic if in all possible converged states reachable from S, n will have the path selected after n processes u. Of course, with the model checker having only partially executed the protocol, it is highly non-obvious which nodes are deterministic! Nevertheless, suppose for a moment we have a way to identify at least some deterministic nodes. How could we use this information? At each step of RPVP, after finding the set of enabled nodes, if we can identify at least one deterministic enabled node, we choose one of these nodes and instruct SPIN to process its update (more specifically, we pick one arbitrarily). († Since we wish to check all converged states of the network, it can be argued that any reduction in search space is essentially POR. But here, we are referring to optimizations that have a localized scope.) This avoids the costly non-deterministic branching caused by having to explore the numerous execution paths where each one of the enabled nodes is the one executed next. The following theorem shows this is safe. Theorem 2. Any partial execution of RPVP consistent with a converged state can be extended to a complete execution consistent with that state. Proof. The proof can be found in the Appendix. By definition, choosing any deterministic node as the single node to execute next produces a new state that remains consistent with all possible converged states reachable from the prior state. Thus, Theorem 2 implies this deterministic choice does not eliminate any converged states from being reachable, preserving completeness. Note that this optimization does not require the entire network to have one converged state; it can apply at every step of the execution, possibly between non-deterministic choices. What remains is the key: how can we identify deterministic nodes? Perfect identification is too much to hope for, and we allow our heuristics to return fewer deterministic nodes than actually exist. We build heuristics that are specific to each routing protocol, prioritizing speed and simplicity above handling atypical cases like circular route redistribution.
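The consistency check of § 4.1.1 amounts to simple bookkeeping: remember the first path each node commits to and abandon the search branch if the model would later replace it. The following small C++ class is our illustrative sketch of that check, not Plankton's code.

// Our sketch of the consistency check from § 4.1.1 (not Plankton's implementation):
// once a node has committed to a best path, any execution that would change it
// cannot be consistent with a converged state and can be pruned.
#include <map>
#include <vector>

using Node = int;
using Path = std::vector<Node>;

class ConsistencyGuard {
    std::map<Node, Path> committed;                 // first path selected by each node
public:
    // Record a selection; returns false if the branch should be abandoned.
    bool onSelect(Node n, const Path& p) {
        auto it = committed.find(n);
        if (it == committed.end()) { committed.emplace(n, p); return true; }
        return it->second == p;                     // changing a committed path => prune
    }
};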
For OSPF, our detection algorithm runs a network-wide shortest path computation, and picks each node only after all nodes with shorter paths have executed. We cache this computation so it is only run once for a given topology, set of failures, and set of sources. For BGP, the detection algorithm performs the following computation: For each node which is enabled to update its best path, it checks whether there exists a pending update that would never get replaced, because the update would definitely be preferred over other updates that are enabled now or may be in the future. To check this, we follow the node's BGP decision process, so if the update is tied for most-preferred in one step it moves to the next. For each step of the decision process, the preference calculation is quite conservative. For local pref, it marks an update as the winner if it matches an import filter that explicitly gives it the highest local pref among all import filters. For AS Path, the path length of the current update must be the minimum possible in the topology. For IGP cost, the update must be from the peer with minimum cost. If at any node, any update is found to be a clear winner, the node is picked as a deterministic node, and is allowed to process the update. If no node is found that has a clear winner but there is a node that has ≥ 2 updates tied for the most preferred, then we deterministically pick any one such node and have SPIN non-deterministically choose which of the multiple updates to process. Figure 6 illustrates these scenarios on a BGP network, highlighting one sequence of node selections (out of many possible). [Figure 6: Step-by-step choice of deterministic nodes; each node has a different AS number.] The detection algorithm may fail to detect some deterministic nodes. For instance, suppose node N is deterministic but its import filter from neighbor M sets the highest local pref for updates with a particular community attribute, and M can never assign that attribute. Then the detection algorithm will fail to mark N as deterministic. But successfully identifying at least one deterministic node in a step will avoid non-deterministic branching at that step. As long as this happens frequently, the optimization will be helpful. Even if the decision on a node is ambiguous in a particular state, the system will often make progress to a state where ambiguities can be resolved. In the example above, once M selects a path (and therefore will never change its path as described in § 4.1.1), the detection algorithm no longer needs to account for a possible more-preferred update from it, and may then be able to conclude that N is deterministic. Decision independence If node A's actions are independent of any future decisions of node B and vice versa, then the execution ordering between A and B does not matter. We check a sufficient condition for independence: any route advertisements passed between these nodes, in either direction, must pass through a node that has already made its best path decision (and therefore will not transmit any further updates). In this case, we pick a single arbitrary execution order between A and B. Failure ordering As stated in § 3.4.2, the model checker performs all topology changes before the protocol starts execution. We also enforce a strict ordering of link failures, reducing branching even further.
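The OSPF detection heuristic described at the start of this subsection reduces, in the simplest setting, to executing nodes in non-decreasing order of their shortest-path distance to the origin. The following C++ sketch is our simplification of that idea (ignoring sources, caching and failures), not Plankton's code.

// Our simplified sketch of the OSPF deterministic-node ordering (not Plankton's code).
#include <algorithm>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

using Node = int;
struct Edge { Node to; int cost; };

// Standard Dijkstra: distance from every node to the origin (run once per topology).
std::vector<int> distances(const std::vector<std::vector<Edge>>& g, Node origin) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> d(g.size(), INF);
    using QE = std::pair<int, Node>;
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> q;
    d[origin] = 0;
    q.push({0, origin});
    while (!q.empty()) {
        auto [dist, u] = q.top();
        q.pop();
        if (dist > d[u]) continue;
        for (const Edge& e : g[u])
            if (d[u] + e.cost < d[e.to]) {
                d[e.to] = d[u] + e.cost;
                q.push({d[e.to], e.to});
            }
    }
    return d;
}

// A node is picked only after all strictly closer nodes have executed, which this
// ordering guarantees, so each pick is deterministic.
std::vector<Node> executionOrder(const std::vector<std::vector<Edge>>& g, Node origin) {
    std::vector<int> d = distances(g, origin);
    std::vector<Node> order;
    for (Node u = 0; u < (int)g.size(); ++u)
        if (d[u] != std::numeric_limits<int>::max()) order.push_back(u);
    std::sort(order.begin(), order.end(), [&d](Node a, Node b) { return d[a] < d[b]; });
    return order;
}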
Policy-based Pruning Policy-based pruning limits protocol execution to those parts of the network that are relevant to the satisfaction/failure of the policy. When a policy defines a set of source nodes ( § 3.5), it indicates that the policy can be checked by analyzing the forwarding from those nodes only. The best example for this is reachability, which is checked from a particular set of starting nodes. When an execution reaches a state where all source nodes have made their best-path decision, Plankton considers the execution, which is assumed to be consistent, to have finished. In the cases where only a single prefix is defined in a PEC, Plankton performs a more aggressive pruning, based on the influence relation. Any device that cannot influence a source node is not allowed to execute. With some additional bookkeeping, the optimization can be extended to cases where multiple prefixes contribute to a PEC, but our current implementation does not support this. The optimization is also not sound when applied to PECs on which other PECs depend. A router that does not speak BGP may not directly influence a source node, but it may influence the routing for the router IP addresses, which in turn may affect the chosen path of the source node. So, the optimization is not applied in such cases. Choice of Failures In addition to the total ordering of failures described in § 4.1.4, Plankton also attempts to reduce the number of failures that are explored, using equivalence partitioning of devices as proposed by Bonsai [2]. Bonsai groups devices in the network into abstract nodes, creating a smaller topology overall for verification. Plankton computes Device Equivalence Classes (DECs) similarly, and defines a Link Equivalence Class (LEC) as the set of links between two DECs. For each link failure, Plankton then explores only one representative from each LEC. When exploring multiple failures, we refine the DECs and LECs after each selection. Note that this optimization limits the choice of failed links, but the verification happens on the original input network. In order to avoid remapping interesting nodes ( § 3.5), they are each assigned to a separate DEC. Since the computed DECs can be different for each PEC, this optimization is done only when there are no cross-PEC dependencies. State Hashing During the exhaustive exploration of the state space, the explicit state model checker needs to track a large number of states simultaneously. A single network state consists of a copy of all the protocol-specific state variables at all the devices. Maintaining millions of copies of these variables is naturally expensive, and in fact, unnecessary. A routing decision at one device doesn't immediately affect the variables at any of the other devices. Plankton leverages this property to the reduce memory footprint, storing routing table entries as 64-bit pointers to the actual entry, with each entry stored once and indexed in a hash table. We believe this optimization can be applied to other variables in the network state too, as long as they are not updated frequently. Picking the right variables to optimize this way, and developing more advanced hash-based schemes, can be explored in the future. Evaluation We prototyped Plankton including the equivalence class computation, control plane model, policy API and optimizations in 373 lines of Promela and 4870 lines of C++, excluding the SPIN model checker. 
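The entry-interning idea behind the state hashing optimization described above can be sketched as follows; this is our illustration (using indices as handles rather than raw 64-bit pointers), not Plankton's implementation.

// Sketch of entry interning for state hashing (ours, not Plankton's code): each
// distinct routing-table entry is stored once, and network states hold only
// small handles to it, which is what the model checker copies and compares.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct RouteEntry {
    std::string prefix;
    int nextHop;
    bool operator==(const RouteEntry& o) const {
        return prefix == o.prefix && nextHop == o.nextHop;
    }
};

struct EntryHash {
    size_t operator()(const RouteEntry& e) const {
        return std::hash<std::string>()(e.prefix) ^ (std::hash<int>()(e.nextHop) << 1);
    }
};

class EntryPool {
    std::unordered_map<RouteEntry, uint64_t, EntryHash> index;   // entry -> handle
    std::vector<RouteEntry> entries;                             // handle -> entry
public:
    uint64_t intern(const RouteEntry& e) {            // store once, return handle
        auto it = index.find(e);
        if (it != index.end()) return it->second;
        uint64_t h = entries.size();
        entries.push_back(e);
        index.emplace(e, h);
        return h;
    }
    const RouteEntry& get(uint64_t h) const { return entries[h]; }
};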
We experimented with our prototype on Ubuntu 16.04 running on a 3.4 GHz Intel Xeon processor with 32 hardware threads and 188 GB RAM. We begin our evaluation with simple hand-created topologies incorporating protocol characteristics such as shortest path routing, non-deterministic protocol convergence, redistribution, recursive routing, etc. Among these tests, we incorporated examples of non-deterministic protocol execution from [8], as well as BGP wedgies, which are policy violations which can occur only under some non-deterministic execution paths. In each of these cases, Plankton correctly finds any violations that are present. Having tested basic correctness, we next evaluate performance and compare to past verifiers. What sets Plankton apart from tools other than Minesweeper is its higher data plane coverage and ability to handle multiple protocols (Figure 1). We therefore compare primarily with Minesweeper but also include ARC in some tests. We also evaluated Bonsai [2], a preprocessing technique that helps improve the scalability of configuration verification, for specific policies. Bonsai could assist any configuration verifier. We integrated Bonsai with Plankton, and experimentally compare the performance of Bonsai+Minesweeper and Bonsai+Plankton. However, it is important to study the performance of these tools without Bonsai too: Bonsai's network compression cannot be applied if the correctness is to be evaluated under link failures, or if the policy being evaluated is not preserved by Bonsai. Experiments with synthetic configurations Our first set of performance tests uses fat trees. We construct fat trees of increasing sizes, with each edge switch originating a prefix into OSPF. Link weights are identical. We check these networks for routing loops. In order to cause violations, we install static routes at the core routers. In our first set of experiments, the static routes match the routes that OSPF would eventually compute, so there are no loops. Then, we change the static routes such that some of the traffic falls into a routing loop. Figure 7(a) illustrates the time and memory consumed, using Plankton running on various numbers of cores, and using Minesweeper. We observed that under default settings, Minesweeper's CPU utilization keeps changing, ranging from 100% to 1, 600%. In this experiment and all others where we run both Minesweeper and Plankton, the two tools produced the same policy verification results. This serves as an additional correctness check for Plankton. Bonsai is not used, because its currently available implementation does not appear to support loop policies. As the results show, Plankton scales well with input network size. The speed and memory consumption varies as expected with the degree of parallelism. Even on a single core, Plankton is quicker than Minesweeper for all topologies. For larger networks, Plankton is several orders of magnitude quicker. On the memory front, even on 16 cores, Plankton's footprint is smaller than Minesweeper's. Encouraged by the good performance numbers, we scale up to very large fat trees (Figure 7(b)). Here, Minesweeper doesn't finish even in 4 hours, even with a 500-device network (in the case of passing loop check, even in a 245-device network). So, we did not run Minesweeper on the larger networks. 
We run Plankton with a single CPU core only, to illustrate its time-memory tradeoff: since the analyses of individual PECs are fully independent and of identical computational effort, running with n cores would reduce the time by n×, and increase memory by n×. For example, in the 2,205-device network, Plankton uses about 170 GB per process. Policies that check a single equivalence class are much cheaper: for example, single-IP reachability finishes in seconds or minutes even on the largest networks (Figure 7(b)). Next, we test Plankton with a very high degree of non-determinism. We evaluated a data center setting with BGP, which is often employed to provide layer 3 routing down to the rack level in modern data centers [17]. We configure BGP as described in RFC 7938 [17] on fat trees of various sizes. Furthermore, we suppose that the network operator intends traffic to pass through any of a set of acceptable waypoints on the aggregation layer switches (e.g., imagine the switches implement a certain monitoring function). We pick a random subset of aggregation switches as the waypoints in each experiment. However, we create a "misconfiguration" that prevents multipath and fails to steer routes through the waypoints.‡ (‡ This setup is convenient for practical reasons, as our current Plankton prototype implementation does not support BGP multipath.) Thus, in this scenario, whether the selected path passes through a waypoint depends on the order in which updates are received at various nodes, due to age-based tie breaking [16]. We check waypoint policies which state that the path between two edge switches should pass through one of the waypoints. Plankton evaluates various non-deterministic convergence paths in the network, and determines a violating sequence of events. Time and memory both depend somewhat on the chosen set of aggregation switches, but even the worst-case times are less than 2 seconds (Figure 7(c)). We consider this a success of our policy-based pruning optimization: the network has too many converged states to be verified in reasonable time, but many have equivalent results in terms of the policy. Experiments with semi-synthetic configurations We use real-world AS topologies and associated OSPF link weights obtained from RocketFuel [24]. We pick a random ingress point that has more than one link incident on it. We verify that with any single link failure, all destination prefixes are reachable from that ingress. Here, Minesweeper's SMT-based search algorithm could be beneficial, due to the large search space created by failures. Nevertheless, Plankton performs consistently better in both time and memory (Figure 7(d)). Both tools find a violation in each case. The times taken by Plankton with 16 and 32 cores are often identical, since a violation is found in the first set of PECs. Note that in this experiment and the next, we did not use Bonsai, because (i) it cannot be used for checks involving link failures, and (ii) the topology has hardly any symmetry that Bonsai could exploit. To evaluate our handling of PEC dependencies, we configure iBGP over OSPF on the AS topologies. The iBGP prefixes rely on the underlying OSPF routing process to reach the next hop. We check that packets destined to the iBGP-announced prefixes are correctly delivered. It is worth noting that this test evaluates a feature that, to the best of our knowledge, is provided only by Plankton and Minesweeper.
Thanks to the dependency-aware scheduler, Plankton performs multiple orders of magnitude better (Figure 7(e)). This is unsurprising: Minesweeper's approach of duplicating the network forces it to solve a much harder problem here, sometimes over 300× larger. Integration with Bonsai We integrated Plankton with Bonsai to take advantage of control plane compression when permitted by the specific verification task at hand. We test this integration experimentally by checking Bounded Path Length and Reachability policies on fat trees running OSPF. The symmetric nature of fat trees is key for Bonsai to have a significant impact. We measure the time taken by Plankton and Minesweeper, after Bonsai preprocesses the network. Plankton still outperforms Minesweeper by multiple orders of magnitude (Figure 7(f)). Comparison with ARC Having evaluated Plankton's performance in comparison with Minesweeper, we move on to comparing the performance of Plankton and ARC. ARC is specifically designed to check shortest-path routing under failures, so we expected the performance to be much better than the more general-purpose Plankton, when checking networks compatible with ARC. We check all-to-all reachability in fat trees and AS topologies running OSPF, under a maximum of 0, 1 and 2 link failures. Similar to Minesweeper, ARC's CPU utilization ranges from 100% to 600% under default settings. We allocate 8 cores to Plankton. Plankton is multiple orders of magnitude faster in most cases (Figure 7(g)).§ (§ Our numbers for ARC are similar to those reported by its authors for similarly sized networks, so we believe we have not misconfigured ARC.) This is genuinely surprising; one reason that may explain the observation is that ARC always computes separate models for each source-destination pair, whereas Plankton computes them based only on the destination, when verifying destination address routing. Nevertheless, we do not believe that there is a fundamental limitation in ARC's design that would prevent it from outperforming Plankton on the networks that can be checked by either tool. Interestingly, while ARC's resiliency-focused algorithm doesn't scale as easily as Plankton for larger networks, its performance actually sometimes slightly improves when the number of failures to be checked increases. Plankton on the other hand scales poorly when checking increasing levels of resiliency. We do not find this concerning, since most interesting checks in the real world involve only a small number of failures. When we performed these experiments with Minesweeper, no check involving 2 failures ran to completion except on the smallest fat tree. Testing with real configurations We used Plankton to verify 10 different real-world configurations from 3 different organizations, including the publicly available Stanford dataset. We first check reachability, waypointing and bounded path length policies on these networks, with and without failures. All except one of these networks use some form of recursive routing, such as indirect static routes or iBGP. We feel that this highlights the significance of Plankton's and Minesweeper's support for such configurations. Moreover, the PEC dependency graph for these networks did not have any strongly connected components larger than a single PEC, which matches yet another of our expectations. Interestingly, we did find that the PEC dependency graph had self loops, with a static route pointing to a next hop IP within the prefix being matched.
It is also noteworthy that in these experiments, the only non-determinism was in the choice of links that failed, which substantiates our argument that network configurations in the real world are largely deterministic. Figure 7(h) illustrates the results, which indicate that Plankton can handle the complexity of real-world configuration verification. In our next experiment with real-world configs, we identify three networks where Loop, Multipath Consistency and Path Consistency policies are meaningful and non-trivial to check. We check these policies with and without link failures. Figure 7(i) illustrates the results of this experiment. The results indicate that the breadth of Plankton's policies scales well on real-world networks. The Batfish parser, which is used by Minesweeper, was incompatible with the configurations, so we could not check these configs on Minesweeper (checking the Stanford dataset failed after parsing). However, the numbers we observe are significantly better than those reported for Minesweeper on similar-sized networks, for similar policies. Optimization Cost/Effectiveness To determine the effectiveness of Plankton's optimizations, we perform experiments with some optimizations disabled or limited. Figure 8 illustrates the results from these experiments. When all optimizations are turned off, naive model checking fails to scale beyond the most trivial of networks. The optimizations reduce the state space by 4.95× in smaller networks and by as much as 24,968× in larger ones. To evaluate the device-equivalence-based optimization in picking failed links, we perform the loop check on fat trees running OSPF under single link failure with the optimization turned off. We observed a 15× slowdown, and a 38× increase in memory overhead, indicating the effectiveness of the optimization in networks with high symmetry. In the next set of experiments, we measure the impact of our partial order reduction technique of prioritizing deterministic nodes (§ 4.1.2). We first try the iBGP reachability experiment with the AS 1221 topology, with the detection of deterministic nodes in BGP disabled. We notice that in this case the decision independence partial order reduction produces reductions identical to the disabled optimization, keeping the overall performance unaffected. In fact, the time improves by a small percentage, since there is no detection algorithm that runs at every step. We see similar results when we disable the optimization on the edge switches in our BGP data center example. However, this does not mean that the deterministic node detection can be discarded - in the BGP data center example, when the optimization is disabled altogether, the performance falls dramatically. The next optimization that we study is policy-based pruning. On the BGP data center example, we attempt to check a waypoint policy, with policy-based optimizations turned off. The check times out, since it is forced to generate every converged data plane, not just the ones relevant to the policy. SPIN provides a built-in optimization called bitstate hashing that uses a Bloom filter to keep track of explored states, rather than storing them explicitly. This can cause some false negatives due to reduced coverage of execution paths. We find that bitstate hashing provides a significant reduction in memory in a variety of our test cases (Figure 9). [Figure 9: The effect of bitstate hashing on memory usage.] According to SPIN's statistics, our coverage would be over 99.9%.
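For readers unfamiliar with bitstate hashing, the idea can be pictured as a Bloom-filter-style visited set; the following C++ sketch is our illustration of the concept, not SPIN's implementation.

// Sketch of the bitstate-hashing idea (ours, not SPIN's code): visited states are
// recorded only as bits selected by k hash functions, trading a small chance of
// missed exploration for a large memory reduction.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

class BitstateFilter {
    std::vector<bool> bits;
    size_t k;
    size_t probe(const std::string& state, size_t i) const {
        // Simple double hashing; SPIN uses its own hash functions.
        size_t h1 = std::hash<std::string>()(state);
        size_t h2 = std::hash<std::string>()(state + "#") | 1;
        return (h1 + i * h2) % bits.size();
    }
public:
    BitstateFilter(size_t nbits, size_t hashes) : bits(nbits, false), k(hashes) {}
    // Returns true if the state was possibly seen before (a false "seen" answer is
    // what causes the small loss of coverage); also marks the state as seen.
    bool checkAndInsert(const std::string& state) {
        bool seen = true;
        for (size_t i = 0; i < k; ++i) {
            size_t p = probe(state, i);
            if (!bits[p]) { seen = false; bits[p] = true; }
        }
        return seen;
    }
};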
Nevertheless, we have not turned on bitstate hashing in our other experiments in favor of full correctness. Limitations Some of the limitations of Plankton, such as the lack of support for BGP multipath and limited support for route aggregation, have been mentioned in previous sections. As discussed in § 3.2, Plankton may also produce false positives when checking networks with cross-PEC dependencies, because it expects that every converged state of a PEC may coexist with every converged state of other PECs that depend on it. However, such false positives are unlikely to happen in practice, since real-world cases of cross-PEC dependencies (such as iBGP) usually involve only a single converged state for the recursive PECs. Our current implementation of Plankton assumes full visibility of the system to be verified, and that any dynamics will originate from inside the system. So, influences such as external advertisements need to be modeled using stubs that denote entities which originate them. Plankton's technique is not suited for detecting issues in vendor-specific protocol implementations, a limitation that all existing formal configuration analysis tools share. As with most formal verification tools, one needs to assume that Plankton itself is correct, both in terms of the theoretical foundations as well as the implementation. Correct-by-construction program synthesis could help in this regard. Configuration verification: We discussed the state of the art of configuration verification in § 2, and how Plankton improves upon the various tools in existence. Crystal-Net [18] emulates actual device VMs, and its results could be fed to a data plane verifier. However, this would not verify non-deterministic control plane dynamics. Simultaneously improving the fidelity of configuration verifiers in both dimensions (capturing dynamics as in Plankton and implementation-specific behavior as in CrystalNet) appears to be a difficult open problem. Optimizing network verification: Libra [27] is a divideand-conquer data plane verifier, which is related to our equivalence class-based partitioning of possible packets. The use of symmetry to scale verification has been studied in the data plane [22] and control plane (Bonsai [2]). We have discussed how Plankton uses ideas similar to Bonsai, as well as integrates with Bonsai itself. Model checking in the networking domain: Past approaches that used model checking in the networking domain have focused almost exclusively on the network software itself, either as SDN controllers, or protocol implementations [4,23,20]. Plankton uses model checking not to verify software, but to verify configurations. Conclusion and Future Work We described Plankton, a formal network configuration verification tool that combines equivalence partitioning of the header space with explicit state model checking of protocol execution. Thanks to pragmatic optimizations such as partial order reduction and state hashing, Plankton produces significant performance gains over the state of the art in configuration verification. Improvements such as checking transient states, incorporating real software, partial order reduction heuristics that guarantee reduction, etc. are interesting avenues of future work. Corollary 1. For any complete execution π of SPVP , for any node n, any node along the best path of n in the converged state converges before n. Now consider a complete execution π = S 0 , S 1 , ..., S c of SPVP. 
We will construct a complete execution of RPVP with |N| steps (where N is the set of all nodes) resulting in the same converged state as S_c. We start with a topology in which all the links that have failed during the execution of SPVP are already failed. For any node n, we define C_π(n) = min{ i | converged_π(n, S_i) }. Consider the sequence n_1, n_2, ..., n_|N| of all nodes sorted in the increasing order of C_π. Now consider the execution of RPVP π' = S'_0, S'_1, ..., S'_|N| which starts from the initial state of RPVP and in each state S'_i, (a) either node n_{i+1} is the picked enabled node and the node p_i = best-path_{S_c}(n_{i+1})[0] is the picked best peer, or (b) in case p_i = ⊥, nothing happens and S'_{i+1} = S'_i. First, note that (modulo the repeated states in case b), π' is a valid execution of RPVP: at each state S'_i (in case a), n_{i+1} is indeed an enabled node since its best path at that state is ⊥ and, according to Corollary 1, p_i has already picked its path. Also, p_i will be in the set of best peers of n_{i+1} (line 13 in RPVP). Assume this is not the case, i.e., there exists another peer p that can advertise a better path. This means that in S_c of SPVP, p can send an advertisement that is better (according to the version of the ranking functions in S_c) than the converged path of n_{i+1}. This contradicts the fact that S_c is a converged state. Second, note that S'_|N| is a converged state for RPVP, because otherwise, using similar reasoning as above, S_c cannot be converged. Also, it is easy to see that best-path_{S'_|N|} = best-path_{S_c}. Finally, note that in π', once a node changes its best path from ⊥, it does not change its best path again. Proof of Theorem 2. We begin by making two observations about RPVP that are key to the proof: • RPVP for a prefix can never converge to a state having looping paths. • If a node u adopts the best path of a neighbor v, v will be the next hop of u. Consider any converged state S. The theorem states that any partial execution that is consistent with S can be extended to a full execution that leads to S. We prove the theorem by induction on the length of the longest best path in S. Base case: If in a network a converged state exists where the best path at each node is of length 0, that means that each node is either an origin or doesn't have a best path for the prefix. Since any execution apart from the empty execution (where no protocol event happens) is not consistent with this state, the theorem holds. Induction hypothesis: If a converged state exists in a network such that all best paths are of length k or less, then any partial execution that is consistent with the converged state can be extended to a full execution that reaches the converged state. Induction step: Consider a network with a converged state S such that the longest best path is of length k + 1. We first divide the nodes in the network into two classes: N, the nodes with best paths of length k or less, and N', the nodes with best paths of length k + 1. Consider a partial execution π that is consistent with S. We identify two possibilities for π: Case 1: Every node that has executed in π falls into N. In this case, we define a smaller network which is the subgraph of the original network, induced by the nodes in N. In this network, the path selections made in S will constitute a converged state. This is because in the original network, in the state S, the nodes in N are not enabled to make state changes.
So, we can extend π such that we get an execution π' where the nodes in N match the path selections in S. Now, we further extend π' with steps where each node in N' reads the best path from the node that is its next hop in S and updates its best path. When every node in N' has done this, the overall system state will reach S. Case 2: At least one node in N' has executed in π. In this case, we observe that since π is consistent with S, by the definition of a consistent execution, no node in the network has read the state of any node in N'. So, we can construct an execution π' which has the same steps as π, except that any step taken by a node in N' is skipped. As in the previous case, π' can be extended to reach a converged state in the subgraph induced by N. We extend π, first by using the steps that extend π', and if necessary, taking additional steps at nodes from N' to reach S.
Viable-but-Nonculturable Listeria monocytogenes and Salmonella enterica Serovar Thompson Induced by Chlorine Stress Remain Infectious ABSTRACT The microbiological safety of fresh produce is monitored almost exclusively by culture-based detection methods. However, bacterial food-borne pathogens are known to enter a viable-but-nonculturable (VBNC) state in response to environmental stresses such as chlorine, which is commonly used for fresh produce decontamination. Here, complete VBNC induction of green fluorescent protein-tagged Listeria monocytogenes and Salmonella enterica serovar Thompson was achieved by exposure to 12 and 3 ppm chlorine, respectively. The pathogens were subjected to chlorine washing following incubation on spinach leaves. Culture data revealed that total viable L. monocytogenes and Salmonella Thompson populations became VBNC by 50 and 100 ppm chlorine, respectively, while enumeration by direct viable counting found that chlorine caused a <1-log reduction in viability. The pathogenicity of chlorine-induced VBNC L. monocytogenes and Salmonella Thompson was assessed by using Caenorhabditis elegans. Ingestion of VBNC pathogens by C. elegans resulted in a significant life span reduction (P = 0.0064 and P < 0.0001), and no significant difference between the life span reductions caused by the VBNC and culturable L. monocytogenes treatments was observed. L. monocytogenes was visualized beyond the nematode intestinal lumen, indicating resuscitation and cell invasion. These data emphasize the risk that VBNC food-borne pathogens could pose to public health should they continue to go undetected. counterparts, including antibiotic tolerance and high temperatures (4). Despite the protection that the state provides for many bacterial pathogens, there are crucial gaps in the understanding of its underlying mechanisms and uncertainty regarding the infective potential of VBNC pathogens. This is particularly relevant to food-borne pathogens, where the industry relies almost exclusively on the use of culture recovery techniques to detect microbial contamination. Food-borne disease presents a consistent but frequently preventable threat to public health and is responsible for an estimated 2.2 million deaths worldwide annually. In the United Kingdom, it is estimated that each year one million people suffer a food-borne illness, resulting in 500 deaths. In 2010, the bacterial food-borne pathogens Listeria monocytogenes and Salmonella spp. were responsible for more than half of these deaths following gastrointestinal infection (5). Another United Kingdom study spanning 17 years determined that in food-borne outbreaks, Salmonella spp. were responsible for the highest number of disease cases and the greatest proportion of deaths was caused by L. monocytogenes (6). Fresh produce such as lettuce and spinach provides an effective vehicle for these pathogens, as they are often sold as ready-to-eat foods. As consumer habits are tending toward healthier eating with more fresh produce, the risk of disease outbreaks is increasing (7). In 2016, an outbreak of L. monocytogenes associated with packaged salads caused 19 cases, each resulting in hospitalization, across nine states in the United States (8). In the United Kingdom, an outbreak was caused by L. monocytogenes contaminating sandwiches sold at a hospital, affecting five pregnant women (9). Although Salmonella species outbreaks are proportionally less severe, they are farther reaching. 
One produce-associated outbreak of Salmonella enterica serovar Saintpaul resulted in 1,500 disease cases across 43 U.S. states, which hospitalized 21% of those affected and may have caused two deaths (10). Despite their nonculturability, VBNC food-borne pathogens still pose a risk to consumers. While there is conflicting data on the pathogenicity of VBNC cells, there is evidence for their resuscitation under more favorable conditions, potentially allowing pathogens to cause disease prior to or even following ingestion by humans. Research carried out with L. monocytogenes has found that VBNC cells induced by starvation were avirulent when exposed to human adenocarcinoma cells but were resuscitated when inoculated into embryonated chicken eggs and regained virulence (11,12). Similar results have been observed with S. enterica serovar Typhimurium, where VBNC cells induced by UV irradiation were unable to cause infection in a mouse model (13); however, another study using S. enterica serovar Oranienburg induced into the VBNC state by osmotic stress found that resuscitation could be achieved following injection into a mouse model (14). Other pathogens have been shown to retain aspects of their virulence while VBNC; the toxin genes of Shigella dysenteriae and Escherichia coli O157 have been detected while the bacteria are nonculturable (15,16). The parameters of the VBNC state and the infectivity of VBNC pathogens have been explored with a focus on VBNC induction via harsh conditions that bacteria are likely to encounter in a natural environment, but food production provides alternate stressors for food-borne pathogens. Chlorine is widely used to decontaminate fresh produce of both food-borne pathogens and spoilage bacteria. Previously, the efficacy of chlorine against L. monocytogenes has been measured by using culture techniques, reporting that there were no viable cells recovered after using 50 ppm chlorine (17). The presence of VBNC cells was not measured. Chlorine has been shown to induce the VBNC state in Salmonella Typhimurium biofilms (18). Further work concentrating on chlorinated drinking water and wastewater found that chlorine induces the VBNC state in a range of pathogens, including E. coli, Salmonella Typhimurium, and Helicobacter pylori (19,20). The relevance of the VBNC state to food safety has recently been reviewed (21). However, it has yet to be shown whether chlorine-stressed pathogens remain infective in animals. The mechanisms responsible for the antimicrobial activity of chlorine are not fully understood, though studies indicate that reactive chlorine species attack the bacterial inner membrane, where the dose of HOCl required for cell killing is similar to the dose required for ATP loss, loss of DNA replication, and prevention of protein transport across the inner membrane (22,23). This study simulated the passage of spinach contaminated with L. monocytogenes and S. enterica serovar Thompson from farm and processing to ingestion. In this way, VBNC induction of the pathogens by chlorine was assessed in situ on the spinach leaf phylloplane, comparing culture techniques to direct viable counts (with enumeration of both culturable and VBNC cells). The potential for infection by VBNC pathogens was then determined by using the animal model Caenorhabditis elegans. RESULTS Visualization of pathogen adherence to spinach phylloplane. L. 
monocytogenes and Salmonella Thompson were visualized by episcopic differential interference contrast (EDIC)-epifluorescence (EF) microscopy following 24 h of incubation on the spinach phylloplane. Green fluorescence indicated that the pathogens were localized primarily inside the spinach stomata and at cell junctions. Compared with uninoculated control spinach leaves, both inoculated spinach samples possess a rough, uneven surface indicative of biofilm growth (Fig. 1). Induction of VBNC L. monocytogenes and Salmonella Thompson in chlorinated water. L. monocytogenes became fully VBNC after 2 min of exposure to 12 ppm chlorine, with a just-under-1-log reduction of culturability at 3 ppm (P < 0.0001) and a >4-log reduction by 6 ppm (Fig. 2). Between 0 and 15 ppm, 47.64% of the viable cells counted by direct viable counting (DVC) were lost (P = 0.0075). Salmonella Thompson became fully VBNC after 2 min of exposure to 3 ppm chlorine (P < 0.0001). Each increase in the chlorine concentration was met with a loss of Salmonella Thompson cells, with a 49% reduction between 0 and 15 ppm chlorine (P < 0.0001). There was also a 1.4-log difference between culturable cells and those enumerated by DVC (P < 0.0001) at 0 ppm chlorine (Fig. 3). Induction of VBNC L. monocytogenes and Salmonella Thompson adhering to the spinach phylloplane. Spinach-adherent L. monocytogenes became fully VBNC after 2 min of exposure to 50 ppm chlorine, with a culturability reduction of 96.5% at 20 ppm. Direct viable counts declined with each increase in the chlorine concentration, where only the decrease between 20 and 50 ppm was not statistically significant. Despite this, there was a <1-log reduction between 0 and 100 ppm. There was also a 1.7-log discrepancy between culture data and DVC data at 0 ppm (Fig. 4). Salmonella Thompson adhering to spinach leaves became fully VBNC after a 2-min exposure to 100 ppm chlorine, with a mean density of 207 CFU/ml at 50 ppm and 18 CFU/ml at 80 ppm (Fig. 5). Consistent with L. monocytogenes, a DVC reduction was observed with each increase in the chlorine concentration until a plateau was reached at 100 ppm. Again, there was a <1-log DVC reduction between 0 and 100 ppm (Fig. 5). Virulence of VBNC L. monocytogenes and Salmonella Thompson ingested by C. elegans. C. elegans that had only ingested E. coli Op50 survived for a maximum of 22 days. All of the worms exposed to culturable and VBNC L. monocytogenes died by day 16, with no statistically significant difference between the two conditions. C. elegans exposed to culturable Salmonella Thompson died by day 13, and worms exposed to VBNC Salmonella Thompson died by day 15. Significantly different nematode life span reductions were caused by E. coli Op50 and culturable L. monocytogenes (P = 0.0012) and by E. coli Op50 and VBNC L. monocytogenes (P = 0.0064), where the median life span of C. elegans feeding on E. coli Op50 was 12 days and only 9 days for both L. monocytogenes treatments. Similarly, ingestion of culturable (P < 0.0001) or VBNC (P < 0.0001) Salmonella Thompson significantly reduced the C. elegans life span compared with ingestion of the E. coli Op50 control. The median life spans of C. elegans worms that fed on culturable and VBNC Salmonella Thompson were 6 and 7 days, respectively, with a statistically significant difference observed between the two treatments (P = 0.0322) (Fig. 6). Green fluorescent protein (GFP) fluorescence from each pathogen assessed was observed filling the intestinal lumen of C. elegans (Fig.
7) and, in the case of L. monocytogenes, permeating the surrounding tissues (Fig. 7A). Pathogen cells were still visible when nematodes were returned to E. coli Op50 plates. DISCUSSION As chlorine is commonly used in the agricultural industry to decontaminate fresh produce, food-borne pathogens will be exposed to the sanitizer during food production, both adhering to the phylloplane and detached in suspension. Here we show that in both cases, exposure to chlorine can induce the VBNC state in L. monocytogenes and Salmonella Thompson (Fig. 2 to 5). In water, L. monocytogenes becomes fully VBNC when exposed to 12 ppm chlorine, although 50 ppm is required following incubation on the spinach phylloplane (Fig. 2 and 4). Similarly, Salmonella Thompson becomes fully VBNC following exposure to 100 ppm chlorine on the phylloplane but only 3 ppm is required in chlorinated water (Fig. 3 and 5). This could largely be explained by the bacterial colonization of the phylloplane. Both pathogens are localized primarily in and around stomata and at cell junctions, thus potentially physically protected from the sanitizer. A further benefit to phylloplane adherence is the facilitation of biofilm formation, where the production of an extracellular polysaccharide matrix presents a barrier to chlorine molecules. Previous studies have shown chlorine and hypochlorite to have limited penetrative ability in Pseudomonas aeruginosa and Klebsiella pneumoniae biofilms (24,25), as well as in Salmonella biofilms (26). This effect could be supplemented by the autochthonous bacterial species present on the phylloplane. Nonfluorescent bacterial growth observed on the spinach cell surface indicates biofilm formation by indigenous species (Fig. 1), where an agonistic interaction with the inoculated food-borne pathogen may serve to reduce chlorine efficacy. These interactions could account for the relative decrease in sensitivity to chlorine observed in Salmonella Thompson on the phylloplane, where in double-distilled H₂O (ddH₂O) the pathogen lost culturability more easily than L. monocytogenes (Fig. 2 to 5). It was postulated in one study that when the food-borne pathogen E. coli O157 is attached to the spinach phylloplane, its biofilm-forming capability may be augmented by the presence of indigenous epiphytic bacteria (27). Despite the protective effect of biofilm, exposure to 5.5 ppm chlorine has previously been shown to induce the VBNC state in Salmonella biofilm (18). This corroborates the findings of this study. The total population of L. monocytogenes and Salmonella Thompson lost culturability following exposure to 100 ppm chlorine (Fig. 4 and 5), where the approximately 1-log reduction in bacteria determined by DVC can be attributed to cell death by chlorine exposure. [FIG 3 legend: Salmonella Thompson exposed to chlorinated water, cultured on selective media (black), and quantified by DVC (gray). Error bars indicate the SEM of two replicates.] Here, that reduction resulted in 1.6 × 10⁶ CFU/ml VBNC L. monocytogenes and 1.4 × 10⁶ CFU/ml VBNC Salmonella Thompson. Typically in the agricultural industry, 90 ppm chlorine is used to wash fresh produce and is assumed to sanitize the food and the surrounding water. While these data show that an increase in the chlorine concentration does result in a loss of viable bacteria, the use of chlorine in industry is limited by the damage it causes to the food product, particularly leafy vegetables.
Decontamination of food products by chlorination may be ubiquitous across food production; however, a wealth of research has shown chlorine to be ineffective at killing food-borne pathogens, including L. monocytogenes and E. coli O157 inoculated onto lettuce (28,29). The initial bacterial inoculum concentrations reflect both previous research assessing contamination of crop plants by food-borne pathogens (30,31) and the level of contamination previously detected in vegetables affected by bacterial soft rot collected from a marketplace in the United States (32). From contaminated spinach, 3 ϫ 10 5 suspected Salmonella colonies/ml of wash water were detected, and using enrichment broth, 1.7 ϫ 10 7 and 8.6 ϫ 10 8 CFU/ml were detected in healthy and rotting spinach, respectively. In this study, biofilms were grown on the spinach phylloplane for 24 h at room temperature, so the resulting bacterial population is indicative of the level of contamination that would be seen in the field. In water, the relatively greater sensitivity to chlorine observed in Salmonella Thompson ( Fig. 2 and 3) could be due to the nature of the damage caused by reactive chlorine species in bacteria. Chlorine is thought to cause bacterial cell death by impeding the functions of the inner membrane (22). Salmonella Thompson is Gram-negative, whereas L. monocytogenes is Gram-positive and the Gram-positive thick peptidoglycan layer could influence susceptibility to chlorine stress. Previously, it has been shown that inactivation by exposure to singlet oxygen is affected by the presence of the peptidoglycan layer (33). The data obtained show a pronounced difference between untreated cells quantified by culture and by DVC, particularly in Fig. 4. In this case, it could be that the osmotic stress placed upon L. monocytogenes in ddH 2 O resulted in some loss of culturability without exposure to chlorine. It is also possible that the discrepancy is a consequence of the assumption that cells are evenly distributed across each microscope slide. The data obtained in this study suggest that the chlorine-mediated killing of bacteria observed in previous research can be attributed, in part, to VBNC induction by chlorine. In the food industry, the use of chlorine to decontaminate minimally processed food results in the inability of "gold standard" culture techniques to detect food-borne pathogens, which may then go on to cause disease outbreaks. As similar work has not yet been carried out with alternative methods of fresh produce decontamination, their efficacies may also be reduced by VBNC induction. Studies assessing the efficacy of sanitizers such as ozone (34,35), gamma (36) or UV (37) irradiation, and ultrasound (38-40) routinely use culture-based bacterial enumeration exclusively, so the contribution of VBNC bacteria has not been explored. However, previous studies have observed that these exposures to UV irradiation and ultrasound can also result in VBNC induction in different pathogens (41,42). In finding alternative decontamination treatments, industry is further restricted as it must effectively kill bacteria without inducing the VBNC state and without compromising the quality of the food product. The nematode killing assay revealed that there is no difference in the virulence of L. monocytogenes in the culturable and VBNC states and that both cause a reduction in the C. elegans life span (Fig. 6). Previous work with L. monocytogenes has provided evidence that the pathogen is avirulent in the VBNC state (11). 
The results of this study could contradict this for several reasons; this study focused on VBNC induction by chlorine exposure, whereas Cappelier et al. (11) generated VBNC cells via starvation. In that previous work, virulence was measured by assessing the invasive properties of L. monocytogenes in human cell lines, and the pathogen was injected into the bloodstream in a mouse model. In this study, infection was modeled in C. elegans by ingestion and infection of the gastrointestinal tract. It has been shown that VBNC E. coli O157 maintains the expression of its Shiga-like toxin genes when it is VBNC (15), so while there is limited research on L. monocytogenes, it is possible that toxin expression causes disease in the digestive tract while cell invasion in the VBNC state is impaired. The suggestion that there are differences in the VBNC states of the same pathogen dependent on the method of VBNC induction has not been explored but could present further challenges for the food industry. Prior to harvest, the phylloplane is a harsh environment for bacteria, with exposure to UV radiation and limited moisture providing conditions that could induce the VBNC survival state in food-borne pathogens before exposure to chlorination. There is evidence of this, as VBNC induction has been shown to occur in E. coli O157 on the lettuce phylloplane in response to low temperatures (2). While these data show that VBNC L. monocytogenes induced by chlorine can cause disease, VBNC pathogens induced by physical stimuli on the phylloplane may require a separate assessment comparing VBNC expression profiles, where the fundamental mechanisms of the state have yet to be fully understood. Corroborating previous studies (43), feeding on Salmonella Thompson was also found to significantly reduce the C. elegans life span, where worms fed on culturable Salmonella Thompson died within 13 days and those fed on VBNC Salmonella Thompson died within 15 days (Fig. 6). By comparing the two treatments to one another, it was determined that a significantly greater reduction in the C. elegans life span is achieved by using culturable Salmonella Thompson (P = 0.0322). This indicates that while the pathogen is still virulent in the animal model, it does lose some infectivity in the VBNC state. Research on the cell invasion ability of VBNC Salmonella Typhimurium has indicated that VBNC cells have an impaired ability to invade epithelia (44) and those induced by antibiotic pressure are unable to cause disease in mice (45). Conversely, immunocompromised mice that ingested VBNC Salmonella Oranienburg were affected by the pathogen, suggesting that there is still a risk of infection by VBNC Salmonella under certain conditions (14). The relative success of VBNC L. monocytogenes in reducing the C. elegans life span to a degree similar to that of its culturable counterpart could be due to the ability of the pathogen to grow at lower temperatures (46). VBNC Salmonella Thompson may require a higher temperature, such as the mammalian core temperature of 37°C, to more effectively resuscitate and establish infection. Both pathogens in the VBNC state could be seen fluorescing inside the intestinal lumen of C. elegans (Fig. 7). L. monocytogenes completely fills the intestinal tract and has invaded the surrounding tissues, with the ovary of the nematode masking the terminal end of the tract (Fig. 7A).
The high level of fluorescence observed, even when nematodes are removed from the pathogen food source, provides evidence that the bacteria have colonized the gut, which may suggest resuscitation once inside a host. This is supported by the fluorescence extending beyond the intestine, which is consistent with the cell invasion that occurs upon L. monocytogenes infection (47). A similar phenomenon has been observed in L. monocytogenes, where resuscitation occurred following introduction into embryonated eggs but not following introduction into nonembryonated eggs (12). The differences observed between C. elegans infections by S. enterica and L. monocytogenes have also been observed in Tetrahymena (48). Salmonella Thompson was released in vesicles from the protozoan, while L. monocytogenes was digested. In this case, the authors observed that ingestion by Tetrahymena protects Salmonella Thompson from environmental stresses. In this study, Salmonella Thompson accumulated in the intestine at the pharyngeal-intestinal valve (Fig. 7B), resembling Salmonella infection in vertebrate hosts, where attachment to the apical surface of epithelial cells takes place (49). The different interactions of both food-borne pathogens with the C. elegans host may indicate that resuscitation has also taken place in VBNC Salmonella Thompson, resulting in its virulence in the nematode.

These data support the use of the C. elegans invertebrate model for the study of VBNC food-borne pathogens; it is more cost and space efficient than vertebrate models and is free from ethical constraints. In addition, the presence of a well-defined nervous system and digestive tract, with a mouth, a pharynx that pumps the food into the intestines, a digestive system that enables the worm to process the food, and an excretory system, makes this animal model more applicable to higher organisms than others such as the unicellular amoebal and wax moth larva infectivity models. Preliminary work conducted in this study is consistent with resuscitation of VBNC pathogens inside the host; when assessed using a nematode killing assay, GFP-tagged Salmonella Thompson strain RM2311 was not found to reduce the C. elegans life span. However, C. elegans worms that fed on Salmonella Thompson died rapidly from day 12, which could be a result of colonization or, in the case of VBNC cells, resuscitation (data not shown). Conversely, Salmonella Thompson strain NCTC 2252 was shown to reduce the C. elegans life span (Fig. 6), where the difference in infectivity may be a result of the fitness cost of GFP expression by the pathogen (50).

The data obtained in this study do not discern whether VBNC L. monocytogenes and Salmonella Thompson cause disease by resuscitation stimulated by ingestion by a host or by continued expression of virulence factors while in the VBNC state. However, they do provide evidence that the use of chlorine to decontaminate fresh produce is not only ineffective but permits virulent food-borne pathogens to reach the public undetected by standard methods. Outbreaks of food-borne disease where no food vehicle can be identified do occur (51), and it is possible that the VBNC state plays an important role. Consequently, new methods are required to rapidly detect VBNC pathogens, which are still capable of causing disease despite accepted sanitization procedures, to protect public health.
Indeed, it may be better not to sanitize foodstuffs and rely instead on rapid pathogen detection methods and positive release of those foodstuffs deemed safe for human consumption.

MATERIALS AND METHODS

Bacterial strains. The bacteria used in this study were L. monocytogenes Scott A expressing GFP on plasmid pPL3-GFP and S. enterica serovar Thompson strains NCTC 2252 and RM2311. Salmonella Thompson RM2311 expresses GFP on plasmid pWM1007, which also contains a kanamycin resistance gene (52, 53). Both were cultured for 18 h at 37°C in brain heart infusion broth (BHIB; Oxoid, United Kingdom). L. monocytogenes was cultured on agar by using the selective medium PALCAM (Oxoid, United Kingdom) with Listeria selective supplement (Sigma-Aldrich, United States), and S. enterica was cultured on agar by using CHROMagar Salmonella Plus with its cognate supplement (CHROMagar, France). E. coli Op50 was used as a nonpathogenic control in the nematode killing assay. It was cultured in Luria-Bertani broth (Formedium, United Kingdom) for 18 h at 37°C prior to use.

Leaf samples. The leaf samples used were raw, unwashed spinach leaves supplied by Vitacress Salads Ltd., United Kingdom. Leaves were inoculated within 48 h of delivery. Twenty-five-gram leaf samples were placed in a Stomacher bag (Interscience, France) and inoculated with 1 ml of bacteria at a concentration of 5 × 10⁷ CFU/ml in BHIB. Inoculated samples were shaken vigorously and incubated at 22°C for 24 h prior to being washed with chlorine.

Chlorinated washing water samples. A stock solution of 2,500 ppm free chlorine was produced by dissolving one Haz-Tab (Guest Medical, United Kingdom) in 1 liter of ddH2O, which was further diluted in ddH2O to generate working solutions. Bacterial suspensions of 10⁸ CFU in phosphate-buffered saline (PBS; Oxoid, United Kingdom) were inoculated into 50 ml of ddH2O in a Stomacher bag, to which 50 ml of the appropriate chlorine dilution was added. The sample was shaken vigorously for 2 min and then filtered through a 0.22-µm-pore-size mixed cellulose ester membrane (Millipore, USA) by vacuum filtration. Bacteria were removed from the membrane by placement in another Stomacher bag with 100 ml of PBS and shaken with a Pulsifier (Microgen, United Kingdom) for 30 s, producing a final concentration of 10⁶ CFU/ml. Samples were then taken for culture and DVC.

Spinach. Following 24 h of incubation, 225 ml of ddH2O containing the appropriate volume of chlorine solution was added to inoculated spinach samples. Samples were vigorously shaken for 2 min, and the liquid was discarded, retaining the leaf samples; 225 ml of PBS was then added, and the bag was shaken in the Pulsifier for 30 s. Samples of the resulting bacterial suspension were then taken for culture and DVC.

DVC and visualization of samples. Samples taken for DVC were concentrated by centrifuging a 10-ml sample for 15 min at 4,000 rpm with a Heraeus Megafuge 1.0. The sample was then resuspended in 1 ml of PBS. To aid visualization, samples were subjected to cell elongation by a modification of the method of Juhna et al. (54). The 1-ml sample was added to 4 ml of ddH2O and 10 µl of pipemidic acid at a concentration of 10 µg/ml. The suspension was incubated for 18 h at 22°C in darkness. The suspension was concentrated prior to DVC in the same manner as before. All samples were imaged by using EDIC-EF microscopy (55) and a QImaging Retiga EXi camera. Bacteria were quantified by counting visible cells across at least 30 fields of view per sample. Images were merged with ImageJ.
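As a side note on the dosing arithmetic in the "Chlorinated washing water samples" section above, the sketch below shows how working chlorine solutions and the nominal wash concentration follow from the 2,500 ppm stock by simple C1V1 = C2V2 dilution. It assumes conservative mixing with no chlorine demand from the diluent or leaf material, and the 400 ppm working-solution figure is an illustrative value rather than one taken from the protocol.

```python
# Minimal sketch of the free-chlorine dilution arithmetic (C1*V1 = C2*V2),
# assuming conservative mixing and no chlorine demand from the diluent.

STOCK_PPM = 2500.0  # one Haz-Tab dissolved in 1 litre of ddH2O (per the protocol)

def stock_volume_needed(target_ppm, final_volume_ml):
    """Volume of stock (ml) to dilute up to final_volume_ml at target_ppm."""
    return target_ppm * final_volume_ml / STOCK_PPM

def wash_concentration(working_ppm, working_ml, suspension_ml):
    """Nominal free chlorine (ppm) after adding the working solution to the
    bacterial suspension in ddH2O, assuming simple volumetric mixing."""
    return working_ppm * working_ml / (working_ml + suspension_ml)

if __name__ == "__main__":
    # Example only: a 400 ppm working solution, 50 ml of which mixed with
    # 50 ml of suspension gives a nominal 200 ppm wash (the dose later used
    # to produce VBNC cells for the killing assay).
    working_ppm = 400.0
    print(f"Stock needed for 50 ml at {working_ppm:.0f} ppm: "
          f"{stock_volume_needed(working_ppm, 50):.1f} ml")
    print(f"Nominal wash concentration: "
          f"{wash_concentration(working_ppm, 50, 50):.0f} ppm free chlorine")
```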
C. elegans killing assay. C. elegans worms were maintained on 5-cm nematode growth medium (NGM) agar plates prepared in accordance with standard methods (56) with a lawn of E. coli Op50. To prepare an experimental plate, 50 µl of E. coli Op50, L. monocytogenes, or Salmonella Thompson culture was added to the center of the plate, and it was incubated at 22°C for 24 h. To produce VBNC cells, cultures of L. monocytogenes and Salmonella Thompson were pelleted by centrifugation and resuspended in 10 ml of a 200 ppm chlorine solution for 30 min. Chlorinated water was removed by vacuum filtration as described above, and bacteria were removed from the membrane by vortexing in 1 ml of PBS for 2 min (57), concentrating the sample to compensate for the growth of the culturable counterparts on the NGM plate. Plates were then inoculated with 50 µl of VBNC cells and incubated at 22°C for 24 h. VBNC cells were plated on selective media to verify the VBNC state. C. elegans worms were transferred to experimental plates at the L4 stage. Twenty worms were used per plate, and each condition was tested with at least four replicates. Nematodes were counted daily and transferred to fresh plates every other day. Nematodes that did not respond when prodded with a pick were considered dead.

Statistical analyses. Culture data and DVC were separately subjected to one-way analysis of variance with Tukey's multiple-comparison test. Comparisons of culture and DVC data were done with multiple t tests. Nematode killing assay data were analyzed by using the survival curve comparison Mantel-Cox test. All statistical analyses were done with GraphPad Prism 7.
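The survival comparisons above were run in GraphPad Prism 7; for readers without Prism, a minimal sketch of an equivalent Mantel-Cox (log-rank) comparison using the Python lifelines package is shown below. The worm survival times in the example are placeholder values, not data from this study.

```python
# Sketch of a Mantel-Cox (log-rank) comparison of two C. elegans survival
# curves, analogous to the GraphPad Prism analysis described above.
# Requires: pip install lifelines. All durations below are placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Day of death for each worm on each food source (illustrative values only).
days_culturable = np.array([8, 9, 9, 10, 10, 11, 11, 12, 12, 13])
days_vbnc       = np.array([10, 11, 11, 12, 13, 13, 14, 14, 15, 15])

# All deaths observed (event_observed = 1); censored worms would be 0.
events_culturable = np.ones_like(days_culturable)
events_vbnc       = np.ones_like(days_vbnc)

# Kaplan-Meier fit for one group, e.g. to report a median survival time.
kmf = KaplanMeierFitter()
kmf.fit(days_culturable, event_observed=events_culturable, label="culturable")
print("Median survival (culturable):", kmf.median_survival_time_)

# Log-rank (Mantel-Cox) test between the two groups.
result = logrank_test(days_culturable, days_vbnc,
                      event_observed_A=events_culturable,
                      event_observed_B=events_vbnc)
print(f"Mantel-Cox P value: {result.p_value:.4f}")
```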
2018-04-27T03:46:33.690Z
2018-04-17T00:00:00.000
{ "year": 2018, "sha1": "81c1affef29d18a4994cde9f6126ea141acab25b", "oa_license": "CCBY", "oa_url": "https://mbio.asm.org/content/mbio/9/2/e00540-18.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "81c1affef29d18a4994cde9f6126ea141acab25b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
261329759
pes2o/s2orc
v3-fos-license
SOIL CONTAMINATION STATUS USING CONTAMINATION INDICATORS AND THE HEALTH RISK

using the atomic spectrometer, was used to process the results of analyses of heavy elements in soils and represent them graphically and statistically, and then write the research in its final form. Finding. To find out the source of soil pollution, whether it is a natural source or human-induced, in addition to the application of two models of environmental risk indicators (environmental risk factor and potential environmental risk index) to find out how dangerous the elements are to the plant or animal environment. Originality. In this study, soil pollution is measured using the Contamination Factor, Pollution Load Index, Degree of Contamination, Ecological Risk Factor, and Potential Ecological Risk Index. Practical value. The study area (1M on the right side, 2M on the side behind the SDI Factory, 3M inside the SDI Factory, and 4M on the left of the SDI Factory) primarily shows an increase in the concentrations of the elements cadmium and mercury in all areas of the study area when compared with the concentrations of the same elements in the earth's crust.

Introduction. Heavy elements, or what are known as heavy metals, are defined as those whose density is more than five times the density of water, i.e. 5 g/cm³, and they have negative effects on human, animal, and plant health. Among the heavy elements are lead (Pb), mercury (Hg), copper (Cu), zinc (Zn), arsenic (As), nickel (Ni), and other elements, which are some of the most dangerous toxic substances that pollute the soil, water, and air. Agricultural soil exposed to contamination with heavy elements loses its fertility, as contamination causes a decrease in the bacteria responsible for the decomposition of the organic matter present in the soil and for nitrogen fixation [1]. Also, all heavy elements are considered toxic if they are present in high concentrations, as they can interact with all components of cells and disrupt their functions, whether in plants, animals, or humans [2]. Plants absorb these elements if they are present in soil or water, and they then reach humans through the food chain. Thus, preserving the soil from pollution or deterioration is an inevitable necessity of the era because it is related to human health [2].

The toxicity of heavy elements is due to two reasons [3]. First: heavy metals are linked with functional groups in enzymes by stable bonds and in the form of complexes, which leads to the disruption of the molecules that drive metabolic reactions. Second: heavy elements act on the cell membrane, changing its structural composition and thus impeding the exchange of ions and organic substances necessary for life, such as proteins and sugars, or completely preventing them from being transported. The heavy elements accumulate in human organs.

Soil is an important element for life if we take into consideration the fact that it embraces the roots of plants, and, unfortunately, it has become seriously exposed to pollution due to human activities [2]. Researchers have linked these environmental problems to the industrial progress that began in the middle of the last century, as well as to various agricultural activities and other causes [3]. Preserving the soil intact and clean is the basis for preserving the organisms that live on it [4]. Heavy metal pollution is one of the forms of environmental pollution resulting from human industrial or agricultural activity.
In recent years, scientists have been interested in studying heavy metals in terms of their presence in the environment, their biological effects, and their relationship to human health [5]. The disposal of these wastes, such as dumping them randomly in rivers, valleys, and elsewhere, leads to negative consequences for the aquatic environment, especially for organisms that depend directly on the water [6].

Pollutants are divided into two main groups: organic pollutants and inorganic pollutants. Among the most important inorganic pollutants are heavy elements such as lead, cadmium, arsenic, mercury, zinc, and other elements [7]. Heavy metals are naturally present in the soil in low quantities, but their quantities increase as a result of human activities [8]. As a result of the development and prosperity of the industrial sector, pollutants have spread widely in the environmental system, and heavy metals have had a major role in environmental pollution [9].

Heavy metals can exist in the soil within natural values, and their presence is important in giving the soil some beneficial properties, but an increase in the rates of heavy metals in the soil has a negative effect. Heavy metals may change the general soil properties, especially the biological properties of the soil, including the number, diversity and activities of microorganisms, and thus change the soil temperature, pH, clay minerals, organic and inorganic materials, as well as the chemical forms of minerals. The following approaches are therefore used to identify and evaluate pollutants as well as potential environmental risks [10].

Several studies have been conducted to investigate and assess the presence of pollutants, particularly heavy metals, in soil, and to evaluate their potential environmental risks. These investigations are crucial for understanding the extent of contamination and developing strategies for remediation and prevention [11]. The effects of heavy metal contamination on soil properties, especially biological properties, have been widely studied. High concentrations of heavy metals can have toxic effects on soil microorganisms, reducing their abundance, diversity, and activities. This, in turn, can disrupt important soil processes such as nutrient cycling, organic matter decomposition, and plant-microbe interactions. Changes in soil temperature, pH, clay minerals, and the availability of organic and inorganic materials can also occur as a result of heavy metal pollution [8, 12].

To evaluate the potential environmental risks associated with heavy metal contamination in soil, different risk assessment approaches are employed. These approaches consider factors such as the toxicity of the heavy metals, their mobility and bioavailability, and the vulnerability of the surrounding ecosystems. Environmental risk assessments help in identifying the areas at high risk of contamination, prioritizing remediation efforts, and implementing measures to prevent further pollution [13]. Understanding and evaluating the presence of pollutants, particularly heavy metals, in the soil is crucial for maintaining soil health and preventing environmental degradation. Through thorough investigations, risk assessments, and appropriate remediation measures, it is possible to minimize the adverse effects of heavy metal contamination and promote sustainable land use practices [12, 14].
It is everything that is thrown into the environment and leads to a change in the environmental characteristics as a result of human and natural activities through their direct or indirect effects. Environmental pollution is the process of disturbing the natural balance of the environment, which affects the life of living organisms [15].

Natural pollution means that man has no interference with it, i.e. it results from the soil itself, as the soil is a mixture of minerals produced by the physical, chemical, and biological weathering processes of the rocks of the earth's crust, composed of the parent material; heavy elements are therefore naturally present in the soil because they are part of its components, together with inputs from volcanoes, earthquakes, and hurricanes [9, 16].

Anthropogenic pollution is due to the pollutants released into the environment by human activities, including wastewater and sewage from residential areas, the extraction of mines and the resulting waste, which becomes a source of pollution in the surrounding lands, as well as sewage and industrial waste and the use of pesticides, which become pollutants if they accumulate in large quantities [17]. Many compounds, various chemicals, and gases, including carbon monoxide and chlorine, are produced by industry and wars, for example, and cause damage to living organisms on the surface of the earth [18, 19].

Water pollution is the pollution that makes water unsuitable for human, animal, and plant use, and because water is the lifeblood of all living creatures and organisms, its pollution and corruption cause the most severe health crises and problems ever [20, 21]. Soil contamination is the change in the physical, chemical, and biological characteristics of the soil through the addition and removal of substances from it [6]. The soil is considered the main medium for plant growth and its production of nutrients for the rest of the food chain; although the soil contains the main elements such as phosphorus, nitrogen, and other macro- and micronutrients, it is also a storehouse of other elements, present naturally or as a result of abnormal additions [22]. The process of soil contamination with heavy elements is a critical and detrimental indicator of its permanence and ability to continue sustaining the ecosystem with the nutrients necessary for the continuation of energy flow in parts of the food web [11].

The environment is defined as the envelopes surrounding the earth, which consist of the lithosphere, the hydrosphere, the atmosphere, the biosphere, and all the interactions that arise between them. In other words, it is the sum of the relationships between water, air, land, and living things [23]. Environmental pollutants were defined for the first time in 1981 as industrial or natural chemicals that are liberated in nature by human activity and have a harmful effect on the environment (soil + water + air), exceeding the critical concentration that leads to harmful effects on human health and other organisms, alone or by interacting with others. This complicates the environment's ability to get rid of these pollutants naturally, and scientists believe that human exposure to environmental pollution and environmental pollutants at the present time has increased more than ever before [24]. Environmental pollution has caused damage to various types of organisms, including plants, animals, and humans [25]. Even tropical rainforests have been affected by this pollution.
Over the past three decades, there has been concern about the worsening health effects resulting from environmental pollution. The World Health Organization [26] estimates that about a quarter of the diseases that humans face are caused by environmental pollution and by long exposure to these pollutants [26]. The emergence of some diseases and their geographical distribution is linked to the high concentration of certain elements in the environment. Environmental pollution is mainly due to various human activities, represented in the extraction of mineral deposits, mining, power generation and construction. In addition to the waste of industrial establishments, there is the use of wastewater, sewage sludge, pesticides and chemical fertilizers in the agricultural field [27].

There are many types of environmental pollution, such as air pollution, soil pollution, water pollution, food pollution, radioactive pollution, thermal pollution, etc., and there are many sources of environmental pollution, some of which are of natural origin, such as volcanoes, which during their eruption release many gases (H2S, SO2, HCl) that are very harmful to the environment. An eruption also releases huge amounts of ash, which can reach several tons and may reach the stratosphere (55 km above sea level). The World Health Organization reported (2011) that the main cause of pollution in Iceland was volcanic ash, in addition to the volcanic plume, which carries large amounts of molten sulfur and gases, leading to an increase in the acidity of water and soil as a result of the dissolution of these gases in water [28]. Fires are also considered one of the natural sources of environmental pollution, as they release gases such as CO2, CO and NOx. According to [29], there are some rocks rich in minerals, including heavy ones such as serpentine, whose residues are a source of environmental pollution with some metals such as Ni, Cr and Co.

Industrial sources of environmental pollution resulting from human activities include:

1. Means of transportation, in all their land and air forms. The means of land transport are the most important in the field of environmental pollution due to their large numbers and widespread use, and they produce many pollutants such as carbon, sulfur, and nitrogen oxides, in addition to some heavy metals present in fuel, such as lead and others.

2. Industrial establishments and factories, which are generally among the largest polluters of the environment, especially power plants, cement plants, and oil refineries [21].

3. Chemicals used in daily life; tens of thousands of chemicals are used in industry, agriculture, homes, hospitals, tanneries, and other settings and are thrown directly into the environment, leading to its pollution [25].

The fixation of heavy metal ions occurs on the solid phase of the soil, which includes a mineral part, an organic part, and an organometallic part. This phase is what gives the soil the ability to carry out chemical absorption and exchange reactions, especially with mineral ions. The inorganic (mineral) part of the soil is the most capable of fixing mineral ions, through mineral particles represented by clay minerals, oxides of hydrated minerals and others [30].
There are two main types of clay minerals. The first, such as kaolinite, has a basic unit consisting of a four-faceted (tetrahedral) silica layer followed by an eight-faced (octahedral) aluminum layer, which are linked by hydrogen bonds; water and mineral ions therefore do not enter between the layers, and its ability to fix heavy metals is low due to the small area of absorption surfaces. The second type comprises clay minerals such as montmorillonite and illite. Here, the mineral consists of a tetrahedral silica layer (Si2O5) with octahedral layers (Al2O4(OH)2). Van der Waals forces link the layers, so water and metal ions enter between the layers; the adsorption surfaces thus increase and the fixation of metal ions increases, owing to the ability of these minerals to expand [31].

The organic matter present in the soil also has the ability to interact with heavy metals and form stable complexes with them in the soil, because the components of the organic matter, especially fulvic and humic acids, have functional groups such as OH and COOH, so they bind the heavy metal and thus sequester it and limit its movement. Organic matter complexes heavy metals and thus restricts their movement. The tendency of these metals to bind varies according to the type of metal: Cu, Pb and Cr are the elements most inclined to bind with organic matter, and sometimes they can form soluble organic complexes, such as those of copper [32].

Materials and Methodology. Field Work. The fieldwork represents the first step in starting work for the current study. Sampling was carried out in a field tour in November for each region in depth, paying attention to removing leaves and weeds and placing the samples in tight nylon bags on which the sample number and the name of the region were written [33, 34].

Laboratory Work. After completing the fieldwork, the samples were transferred to the laboratory for processing, to measure the concentrations of the heavy elements adopted in the current study (manganese, copper, cadmium, mercury) using the atomic spectrometer in the laboratories of the University of Baghdad (Engineering Consulting Office). Each sample was crushed and homogenized on the basis of the working method of the General Company for Geological Survey and Mining [35, 36].

Office Work. Previous studies related to the subject of research on the study area were reviewed, and some software, including Microsoft Excel, was used to process the results of analyses of heavy elements in soils and represent them graphically and statistically, and then write the research in its final form [37, 38].

Contamination Indicators. There are different ways to evaluate the effects of human activities; several pollution indicators (pollution factor, pollution load index, degree of pollution, land accumulation index) were applied to find out the source of soil pollution [39, 40], whether it is a natural source or human-induced, in addition to the application of two models of environmental risk indicators. The environmental risk factor and potential environmental risk index were considered to find out how dangerous the elements are to the plant or animal environment [41, 42].

Contamination factor (CF). This factor is used to classify the level of elemental contamination in soil samples by dividing the concentration of each element by the reference value [43].
It is calculated according to equation (1), where (Cm)Sample is the concentration of a specific element in the soil, (Cm)Background is the concentration of the same element in the earth's crust, and the values of the pollution factor are expressed in the categories mentioned in Tables 1-2 [43]:

CF = (Cm)Sample / (Cm)Background. (1)

Pollution Load Index (PLI). This indicator is used to estimate the extent of pollution with heavy elements in the studied area. The pollution load index is extracted according to equation (2), where n represents the number of elements and CF is the pollution factor; Table 2 shows the categories of the pollution load index [43]:

PLI = (CF1 × CF2 × … × CFn)^(1/n). (2)

Degree of contamination (Cd). The degree of pollution is defined as the sum of the pollution factors [43] and is found from equation (3), where Cd is the degree of pollution, n is the number of elements, CF is the pollution factor, and the categories (degree of pollution) are used to describe the degree of pollution [39]:

Cd = CF1 + CF2 + … + CFn. (3)

Ecological risk factor (Er). This factor is used to assess the environmental hazards of a trace element in soil. The toxic response factors (Tr) for the elements Cd, Hg and Cu are taken as 30, 40 and 5.0, respectively [9]; CF is the pollution factor, evaluated according to the categories classified in Table 4 [44]:

Er = Tr ⋅ CF. (4)

Potential Ecological Risk Index (PERI). It is one of the methods for evaluating pollution and is applied according to the toxicity and content of a specific pollutant [45]. It is found from equation (5), where RI is the potential environmental risk index, n is the number of elements, and Er is the ecological risk factor of each element:

RI = Er1 + Er2 + … + Ern. (5)

Results and Discussion. There are different methods for evaluating the effects of human activities; some pollution indicators (pollution factor, pollution load index, degree of pollution) were applied to find out the source of soil pollution, whether it is a natural source or human-induced, in addition to the application of two models of environmental risk indicators (environmental risk factor and potential environmental risk index) to find out how dangerous the elements are to the plant or animal environment.

The results of the analysis of heavy metal concentrations in the study area (1M on the right side of the SDI Factory, 2M on the side behind the SDI Factory, 3M inside the SDI Factory, 4M on the left of the SDI Factory) are included in Table 6, which primarily shows an increase in the concentrations of the elements cadmium and mercury in all areas of the study area by comparison with the concentrations of the same elements in the earth's crust.

By applying the pollution factor index to all the elements under study, shown in Table 7 and Fig. 1, we notice that there are three categories: the moderate pollution factor was for the manganese and copper elements in all regions under study, while the moderate pollution factor was for the element mercury in regions 1, 2 and 4 and cadmium in regions 2 and 4, and the most dangerous category was for the element mercury in region 3 and cadmium in regions 1 and 3. These results confirm the initial interpretation based on the element concentrations. The reason may be the location inside the factory in these areas and not others, where the waste accumulates, which leads to exposure of the soil to pollution more than in the rest of the areas.
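The index values reported in Tables 7-11 follow directly from equations (1)-(5); to make the calculations concrete, the short sketch below computes the five indices for a single hypothetical site. The element concentrations, background values and resulting numbers are placeholders for illustration only; the only figures taken from the text are the toxic response factors of 30, 40 and 5.0 for Cd, Hg and Cu.

```python
# Minimal sketch of the contamination indices defined in equations (1)-(5).
# Element concentrations and crustal background values below are placeholders.
from math import prod

# Toxic response factors given in the text (Cd = 30, Hg = 40, Cu = 5.0).
TR = {"Cd": 30.0, "Hg": 40.0, "Cu": 5.0}

sample     = {"Cd": 0.8, "Hg": 0.3, "Cu": 40.0}   # mg/kg, illustrative values
background = {"Cd": 0.2, "Hg": 0.08, "Cu": 55.0}  # crustal values, illustrative

# (1) Contamination factor per element: CF = C_sample / C_background
cf = {el: sample[el] / background[el] for el in sample}

# (2) Pollution load index: PLI = (CF1 * CF2 * ... * CFn)^(1/n)
pli = prod(cf.values()) ** (1.0 / len(cf))

# (3) Degree of contamination: Cd = sum of the contamination factors
degree = sum(cf.values())

# (4) Ecological risk factor per element: Er = Tr * CF
er = {el: TR[el] * cf[el] for el in cf if el in TR}

# (5) Potential ecological risk index: RI = sum of Er over the elements
ri = sum(er.values())

print("CF:", cf)
print("PLI:", round(pli, 2), " Cd (degree):", round(degree, 2))
print("Er:", er, " RI:", round(ri, 1))
```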
The second indicator that was applied was the degree of pollution indicator. Its results, shown in Table 8 and Fig. 2, fall into two categories, a low degree of pollution and a medium degree of pollution, as this indicator shows the degree of contamination of the region's soil with heavy elements. We note that area No. 3M is the most polluted compared to the other areas under study, and the reason may be that this location is inside the factory, where waste and drug residues accumulate, which leads to soil exposure to pollution more than in the rest of the areas. The results of the pollution load index are included in Table 9 and Fig. 3 and show that, by applying the equation for the index, the soil of two of the regions under study is considered polluted.

After applying the individual indicator model for the environmental risk factor, we distinguish four categories, each with a specific meaning. The results highlighted in yellow and red are shown in Table 10 and Fig. 4: large environmental risks and high environmental risks are found in areas 1, 2 and 3 for the elements cadmium and mercury. This is due to several reasons, as cadmium increases in the environment from mining, industrial work, burning coal, and household waste. As for mercury, it increases due to volcanic activities and rock erosion, because it is found naturally in the earth's crust, and it is also liberated to the environment as a result of human activity, especially from coal-fired power plants, industrial activities, and waste dumps; Fig. 4 also illustrates this.

The last indicator that was applied to reach the objectives of the research is the indicator of potential environmental hazards, the results of which are included in Table 11 and Fig. 5. Near the waste dump inside the SDI Factory, where waste accumulates, the soil is exposed to pollution more than in the rest of the areas.

Finally, within the research area, which is located at distances of 1M to the right of the SDI Factory, 2M behind the SDI Factory, 3M inside the SDI Factory, and 4M to the left of the SDI Factory, three distinct categories were observed. Moderate pollution levels were noted for the manganese and copper elements across all regions under investigation. Additionally, moderate pollution levels were observed for the element mercury in regions 1, 2 and 4, as well as for cadmium in regions 2 and 4. The most hazardous category was identified for the element mercury in region 3 and for cadmium in regions 1 and 3.

In this study, we used modeling of contamination indicators and the health risk assessment to explain the total contamination of heavy metals in this area, instead of the usual methods used in previous works. It is necessary to study other areas adjacent to the study area and to use further modeling to calculate heavy metal pollution.

Conclusions.
There are different methods for evaluating the effects of human activities; some pollution indicators (pollution factor, degree of pollution, pollution load index) were applied in the study area (1M on the right side of the SDI Factory, 2M on the side behind the SDI Factory, 3M inside the SDI Factory, 4M on the left of the SDI Factory). We notice that there are three categories: the moderate pollution factor was for the manganese and copper elements in all regions under study, while the moderate pollution factor was for the element mercury in regions 1, 2 and 4 and cadmium in regions 2 and 4, and the most dangerous category was for the element mercury in region 3 and cadmium in regions 1 and 3.

The second indicator that was applied was the degree of pollution indicator, and there are two categories of it, a low degree of pollution and a medium degree of pollution, as this indicator shows the degree of contamination of the region's soil with heavy elements.

The results of the pollution load index showed that the soil of two of the regions under study is considered polluted by applying the equation for the index. The last indicator, applied to reach the objectives of the research, is the indicator of potential environmental hazards; near the waste dump inside the SDI Factory, where waste accumulates, the soil is exposed to pollution more than in the rest of the areas.
2023-08-30T15:20:32.921Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "dad9b12a8926d86ff92cfdc70097adf26fbbd260", "oa_license": null, "oa_url": "http://nvngu.in.ua/jdownloads/pdf/2023/4/04_2023_Mustafa_Abdullah_Theyab.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "511d81b758cd14a78cece8ae02f7867506503f77", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
215819506
pes2o/s2orc
v3-fos-license
Deep-water depositional architecture and sedimentary evolution in the Rakhine Basin, northeast Bay of Bengal

Since the consecutive discovery of several gas fields from 2004 to the present, the Rakhine Basin has been an active area for petroleum exploration in the Bay of Bengal. High-resolution 3D seismic data and well data from blocks AD1, AD6 and AD8 offshore northwest Myanmar are used to study the Miocene–Pleistocene depositional architecture and sedimentary evolution in the Rakhine Basin. Analysis of seismic facies and seismic attributes indicates that deep-water architectural elements include submarine canyons, confined slope channel complex systems, aggradational channel–levee complexes, isolated channels, frontal splays and mass-transport complexes, which have variable characters (shape, dimension, sedimentary architecture) within predominantly background deep-water slope-basin floor facies. Most of the sediments are interpreted to be sourced from the Ganges–Brahmaputra fluvio-deltaic system to the north, with only minor lateral input from the Indo-Myanmar Ranges to the east. Investigation of the depositional evolution and architectural element transformation during the filling history of the Rakhine Basin suggests the basin experienced rapid progradation during the Oligocene–Middle/Upper Miocene, gradual retrogradation during the Middle/Upper Miocene–Early Pliocene and gradual progradation during the Early Pliocene–Pleistocene. Published exploration results indicate that the main reservoirs of the discoveries in blocks A1 and A3 are Pliocene frontal splays and channel–levee fills, dominated by fine and very fine-grained sandstones, in structural and structural–stratigraphic traps. Analytic results from seismic characters and several exploration wells indicate that channel complexes and associated overbanks and frontal splays with fine-grained sandstones and siltstones trapped by four-way closures are primary reservoir targets.

Introduction

The Bengal Fan, the largest submarine fan on Earth, covering the floor of all of the Bay of Bengal (Weber et al. 1997), has attracted extensive attention from petroleum geologists in recent years. Several large discoveries have been made (Mya, Shwe, Shwe Phyu and Thalin) in the northeast part of the Bengal Fan, offshore Rakhine Basin, Myanmar, in the past decades. In 2004, biogenic gas-charged Pliocene turbidites were penetrated in several wells, and the Shwe, Shwe Phyu and Mya gas fields were discovered in blocks A1 and A3 (Yang and Kim 2014). Since then, the Rakhine Basin, which comprises shelf-slope blocks A1-A7 & AD6 and slope to deep-water blocks AD1-AD5 & AD7-AD16 (Fig. 1a) (Racey and Ridd 2015), has been the focus of deep-water exploration. However, little has been published on the detailed deep-water depositional architecture and evolution of the Rakhine Basin, although some publications have been presented to improve the understanding of the architecture and growth processes of the northeast Bengal Fan during the past 20 years (Curray et al. 2002; Hübscher et al. 1997; Schwenk et al. 2005; Subrahmanyam et al. 2008; Weber et al. 2003). Here, we offer a key paper that includes new data with observations and interpretations based on industry seismic and well data to describe the seismic facies and architectural elements of the Miocene-Pleistocene found in shelf Block AD6 and slope deep-water blocks AD1&8 (Fig. 1b).
The objectives of this paper are to (1) use high-resolution 3D seismic imaging to characterize the geometry and internal architectures of a portion of the palaeo-Bengal Fan; (2) preliminarily summarize a depositional model of a fine-grained deep-water system; (3) illustrate the deep-water depositional evolution and discuss the controlling factors and the provenance variation; and (4) discuss the implications for hydrocarbon exploration of the Rakhine Basin.

Basin overview

The Rakhine Basin is a Tertiary foredeep basin located along the eastern fringe of the Bay of Bengal and the western coastal provinces of Myanmar (Fig. 1a). It is predominantly an offshore basin, with the onshore parts comprising low-lying islands and coastal areas. The formation of the Rakhine Basin is closely related to oblique convergence of the Indian and Burmese Plates from the Paleocene, with the ensuing westward migration of the developing accretionary wedge (Indo-Burma Range). The current basin includes the young folded and thrusted sequences of the accretionary wedge (Pliocene-present), which formed as a result of oblique subduction of the oceanic Indian Plate beneath the island arc/forearc sliver of the Burmese Plate, together with undeformed submarine sequences. Simplistically, three structural belts are evident in the basin based on the tectonic deformation intensity, including a fold and thrust belt, a gentle fold belt, and an undeformed abyssal plain belt from east to west (Fig. 2). The western margin of the deformed sequences is defined by the accretionary wedge front, which is deformed by a combination of transform and compressional movements. The basin is thought to contain over 12,000 m of sediment fill of Upper Cretaceous-Holocene age. Tertiary exposure in the present onshore portion of the basin contains open marine and nearshore to deltaic sediments, while the offshore portion contains continental shelf, slope and basin floor facies.

Database and methods

The study areas AD6, AD1 and AD8, located across the shelf and basin floor of the central part of the Rakhine Basin, are at water depths of less than 100 m on the shelf (Block AD6), and 1300-2100 m on the slope to basin floor (Block AD1&8). The data set used for this research is a conventional, industry, 3-D seismic volume with a dominant frequency of 40 Hz in shelfal areas to 30 Hz in the slope-basin floor areas. Line spacing is 12.5 m in cross-line directions and 25 m in inline directions. The studied interval is from Pleistocene to Miocene. This interval ranges in thickness from 4000 m in Block AD1&8 to as much as 5500 m in Block AD6. All of the seismic images in this paper are displayed in SEG positive standard polarity, in which a positive reflection coefficient is represented by a central peak (plotted black) and troughs are red.

Fig. 1 (a) Main tectonic provinces in the northeastern Bay of Bengal; (b) location map of the study area, illustrating exploration blocks, discovered gas fields and bathymetry (Racey and Ridd 2015)

Fortunately, there are boreholes in the study areas and adjacent areas to calibrate seismic interpretation with lithology. Lithologies drilled by wells range from fine-grained sandstones and siltstones to mudstones. Well data indicate that the acoustic impedance of sandstones from seabed to Miocene is distinctly lower than that of mudstones, and there are no carbonate or volcanic sediments in the study areas.
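As a rough guide to what these dominant frequencies can resolve, the sketch below applies the common quarter-wavelength rule of thumb for vertical seismic resolution; the interval velocity is an assumed value for shallow, fine-grained clastic sections and is not a measurement reported for the Rakhine Basin wells.

```python
# Back-of-the-envelope vertical resolution (quarter-wavelength rule of thumb)
# for the dominant frequencies quoted above. The interval velocity is an
# assumption for shallow, water-saturated fine-grained clastics, not a value
# reported for the study-area wells.

ASSUMED_VELOCITY_MS = 2200.0  # m/s, illustrative only

def tuning_thickness(dominant_freq_hz, velocity_ms=ASSUMED_VELOCITY_MS):
    """Approximate thinnest resolvable bed: wavelength / 4 = v / (4 * f)."""
    return velocity_ms / (4.0 * dominant_freq_hz)

for area, freq in [("shelf (Block AD6)", 40.0), ("slope-basin floor (AD1&8)", 30.0)]:
    print(f"{area}: ~{tuning_thickness(freq):.0f} m vertical resolution at {freq:.0f} Hz")
```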
In general, the high amplitudes observed on the seismic data are associated with fine-grained sandstones and siltstones, the tops of which correspond to the amplitude troughs. The reflection events in the studied interval have lowmoderate continuity with high amplitude within a background of low-amplitude-transparent seismic facies (Fig. 3a). A schematic stratigraphic sections for the Tertiary of the deep-water Rakhine Basin show the architectural elements and general depositional evolution in light of the general seismic characters observed in Block AD1&8 (Fig. 3b). The high-resolution 3D seismic data are used to recognize the profile features of the architectural elements, and the best images from horizontal slices generated from 3-D seismic amplitude cube and 3-D semblance cube provide a useful method for defining the elements' forms and geometries . And the lithology can be predicted by the seismic inversion combined with the well calibration. Deep-water architectural elements The deep-water system of Miocene-Pleistocene in the Rakhine Basin is observed to comprise six distinct and contrasting deep-water architectural elements: (1) submarine canyon, (2) confined slope channel complex system, (3) aggradational channel levee complex, (4) frontal splay (depositional lobe), (5) isolated channel, and (6) mass-transport complex (MTC) ( Table 1). These elements are recognized from their geometries, seismic facies and well data. The following section focuses on these architectural elements and their seismic facies. Submarine canyon Submarine Canyons are large-scale erosional features up to 850 m deep and 10 km wide ( Fig. 4a) with variable internal seismic geometries and are mainly observed in the northern Block AD6. A key aspect of submarine canyons in this area in particular is that they are found located in shelfal areas in the Pliocene-Pleistocene, with delta front deposition characterized by high amplitude and parallel reflections in seismic below the canyons' erosional bases (Fig. 4b, c) and multiple-thickening upwards sequences observed on gamma ray well profiles of fine-grained sandstones-siltstones. The bases of the canyons are often highly irregular with stepped, erosional, staircase trajectories. In plan form, the canyons form a tributary network with steep-sided, smaller erosional gullies feeding into a larger trunk canyon. The canyon fill seismic character contrasts starkly with the seismic facies they truncate. Three principal seismic facies within the shelf canyon fills are recognized (Fig. 4b, c): SF1-low-amplitude to transparent or chaotic reflection packages predominate; SF2-discontinuous, high-amplitude seismic reflections characterized by either vertically or laterally offset-stacked V-or U-shaped channel elements that are arranged in variably stacked clusters; SF3-medium-to low-amplitude reflections of limited lateral extent, found at similar stratigraphic levels to SF2 seismic facies or the upper part of the canyon fillings. The canyon is observed almost everywhere to erode into high-amplitude topset, or shelfal (shallow marine) substrate. These underlying seismic facies are high-to moderate-amplitude, tramline seismic facies with continuous parallel seismic reflections. A series of proximal-distal transects through one of the main Pleistocene submarine canyons, which shows a number of features, are displayed. Firstly, the head area is characterized by a series of tributary gullies which pass down dip into a single submarine canyon (Fig. 4a). 
The average erosional depth is approximately 400-500 m. Secondly, the canyons are associated with syn-sedimentary faulting at the canyon margins, as is so often the case with submarine canyons (Twichell et al. 1995), with a series of depositional terraces seen outboard of the canyon axis-directed faults.

SF1 seismic facies is interpreted as fine-grained sediments dominated by slumps generated by failure of the canyon margins, or longer-distance transportation of muddy/silty debris flows from further up the canyon. SF2 seismic facies is interpreted as stacked sand-filled turbidite channel elements. SF3 seismic facies is interpreted as passive filling of the canyon by fine-grained turbidites and hemipelagic shales, either as canyon-wide drapes or inside-levee wedges associated with late-stage meandering channel elements. Thus, the canyons are predominantly fine-grained, as observed in well penetrations, with a combination of slumps and fine-grained inside-levee or lateral canyon quiescent drape facies commonly enveloping the various stacked or isolated channel elements.

The canyons in Block AD6 propagated progressively shelfwards, perhaps even shorewards, from Miocene to Pleistocene by retrogressive headwards failure due to avalanching (Xu et al. 2014). This is evidenced by the gross erosional character of the canyons, with large-scale syn-sedimentary margin failure excavating the growing features through a series of avalanches. Significant proximal portions of the larger canyons are found in a shallow marine context (Belderson and Stride 1969; Belderson and Kenyon 1976). In addition, canyons progressively shift and young from east to west. This is thought to be due to the westward shift of fluvial input to the north, and tectonic uplift to the east resulting from the westwards migration of the accretionary wedge.

Table 1 Seismic facies and wireline-log characteristics of the deep-water architectural elements

Frontal splay: Frontal splays appear as one to several parallel to sub-parallel, high-amplitude reflectors with moderate to good continuity. Frontal splays have a variety of shapes in the study area, such as elongate or radial geometry. The gamma-ray log pattern is blocky with a sharp base at the frontal splay axis, with a more serrated pattern for more distal frontal splays.

Mass-transport complex: The seismic facies is characterized by chaotic to hummocky reflectors with poor continuity and variable amplitude, with an obvious erosional base surface and irregular top surface. The gamma-ray log pattern is generally shaly.

Isolated channel: The seismic facies is characterized by high-amplitude, discontinuous reflectors. The planform can be highly sinuous. The typical wireline-log pattern is blocky at the lower part and fines upward to shales at the top.

Confined slope channel complex system: The erosional fairway element is a large and prominent incision flanked by wedge-shaped outer levees. Channel-axis deposits are characterized by discontinuous high-amplitude reflectors that are formed by lateral and vertical migration of coarse-grained channel thalweg deposits. Inner levees are characterized by chaotic or transparent low-amplitude reflectors with a stepped shape. Seismic reflections within outer levees are always continuous, but can range from low to high amplitude. The wireline-log pattern in the internal axis fills is composed of high GR at the base, suggesting mud-prone fill, blocky sands at the lower part, and a thinning/fining-upward-to-shale pattern at the top. On the gamma-ray log, the master levees consist of interbedded sands and shales at the lower part and fine upwards to shales at the upper part.

Submarine canyon: Large-scale erosional features with variable internal seismic geometries. Internal seismic facies can be classified into three types: SF1-low-amplitude-transparent chaotic reflectors; SF2-discontinuous, high-amplitude seismic reflectors; and SF3-medium- to low-amplitude reflectors of limited lateral extent. The gamma-ray log pattern at the canyon margin is shaly interbedded with sands. The internal fill can be divided into four stages: stage 1-erosion and sediment bypass; stage 2-mixed erosion and deposition, featured by laterally migrating channels; stage 3-particularly common slumps blocked the canyon, generating the dominantly vertical stacking of channels; stage 4-mudstone passive filling.

Aggradational channel levee complex: Levee sediments are characterized by low-amplitude, continuous and/or relatively transparent reflectors with a double-wedge shape that thins away from the channel-fill sediments, which are characterized by high-amplitude discontinuous reflectors on seismic. The gamma-ray log response of the levee is generally shaly or "ratty" in appearance, with a more sandy pattern at the lower part, and shows a vertical decrease in grain size and N:G.

Confined slope channel complex system

Confined slope channel complex systems (CSCCS) are observed downslope from submarine canyons in the study area. They comprise a wide erosional fairway element, flanked by aggradational levee wings. The erosional fairway element is the main defining element seen seismically, observed as a large and prominent incision scour flanked by wedge-shaped outer, or master, levees. The width of the erosional fairway element exceeds 5.5 km, and the thickness exceeds 540 m (Fig. 5). The thicker axial parts of the CSCCS are characterized by discontinuous, high-amplitude reflections arranged as lateral lenses that can be mapped as moderately sinuous channels on planform seismic slices. Seismic reflections within outer flanking wedge-shaped master levees are almost always continuous, but can range from low to high amplitude. Some of the CSCCSs are isolated (Fig. 5), and some are observed to be multiple, stacked erosional-depositional systems (Fig. 6). In Block AD1&8, these multiple CSCCSs comprise discontinuous, mainly high-amplitude reflection bundles contained within three or more obvious, and mappable, U-shaped erosion surfaces, and the seismic bundles clearly display different stacking patterns. The total width of these CSCCSs exceeds 8 km, whereas the typical width and thickness of the mappable channel complex set inside them are 2 km and 300 m, respectively (Fig. 6).

The erosional fairway element within the CSCCS is interpreted as the main pathway for turbidity currents, which initially bypass the channels and deposit coarse-grained lags. The fill of the erosional fairway is dominated by lateral and vertical migration of channel thalweg deposits that have high-amplitude seismic character and are interpreted as coarse-grained lags. Inner levees associated with them are characterized by chaotic or transparent low-amplitude reflections with a stepped shape. The edges of the erosional fairway are characterized by stepped terraces, often capped by fine-grained facies. These bench-like depositional terraces can form from both depositional processes (i.e., vertical aggradation of inner levee deposits resulting from the overbanking of under-fit channels) and erosional processes (i.e., sculpting and slumping of inner levees along channel margins) (Gong et al. 2016, 2018).
The attached wedges outboard of the erosional fairways of the slope channel complexes are interpreted as master, or external levees that correspond to aggradational phases of the channel complexes where fine-grained sediments overspilled the margins. CSCCSs prevail in Miocene sediments in Block AD1&8. They have either vertically offset-stacked, laterally stacked or a combination of entrenched and offset-stacked channel complexes within them, forming channel complex systems (Figs. 5, 6). Confined slope channel complex systems are well-known features in the subsurface (often called slope channel complexes, valleys-fills, confined slope channel complexes, or confined slope channel complex systems) (Cronin et al. 2005;Janoko et al. 2013;Kang et al.2018;Sprague et al. 2005). In this paper, we refer to them as Confined Slope Channel Complex Systems. The terminology ascribed to these features, particularly the hierarchy of recognizable architectural element. CSCCSs are the downslope continuation of submarine canyons. CSCCSs differ from submarine canyons by their general confinement-where canyons are strictly entirely or mostly erosional, CSCCSs have an earlier erosional or entrenched phase followed by a combined confined/aggradational master levee growth phase, and smaller scale contained channels and inner levees filling phase. Furthermore, the range of stacking patterns within a CSCCS from the proximal area to the distal area is observed in the study to be variable. As shown in Fig. 7, in the proximal area of the CSCCS from the Upper Miocene is a classic 'entrenched channel complex' (Cronin et al. 2005); further downslope, the CSCCS breaks out into three laterally offset CSCCS; and even further downdip, there are more resolvable CSCCSs. The lateral migration of the entire CSCCS is thought to be related to a gradual increase in accommodation space downslope, allowing reoccupancy phases of the CSCCS to shift laterally due to lower aggradation of the enveloping master levees. The CSCCSs have a less linear planform shape further downslope and internally have more pronounced sinuous channel elements within them (Liu et al. 2013), as the degree of internal erosion, confinement, and perhaps slope and thalweg gradient, also decreases. Aggradational channel levee complex Aggradational channel levee complexes are mainly observed in Pliocene-Pleistocene in Block AD1&8. On seismic profiles, channel axis sediments are often characterized by high-amplitude, discontinuous reflections. Levee-overbank sediments are characterized by low-amplitude, continuous, and/or relatively transparent reflections with a double-wedge shape that thins away from the channel-fill sediments. The thickness-to-width (aspect) ratio of these levee units varies significantly from system to system. The maximum width of these facies reaches up to 23 km, whereas the general thickness is approximately 340 m (Fig. 8). The stacking patterns in these aggradational channel complexes are variable but dominated by channel elements that initially stack laterally and then progressively stack vertically. The levee can be subdivided into proximal levee, distal levee, and crevasse splay that may be comprise different sediments and have different reservoir potential. The units primarily consist of mudstones; however, thinly bedded finegrained sandstones and siltstones also can also be deposited on these areas, which are proved by wells and characterized by relatively high-amplitude reflection in seismic. 
The net gross can reach as much as 57% in proximal levees and 6% in distal levees from the interpretation of borehole coring in adjacent blocks. The stacking pattern of the channel elements is interpreted to reflect progressive confinement by levees. Aggradational channel levee complexes have been described from most modern large-scale 'passive margin' settings since their discovery on the Amazon, Indus and Mississippi Fans (Pickering et al. 1986;Damuth et al. 1988;Pirmez et al. 1997;Kolla 2007;Kolla and Coumes 1987), and on the Bengal Fan (Curray et al. 2002;Schwenk et al. 2005;Kolla 2007). Levee-overbank sediments are mainly developed in aggradational channel-levee systems which are associated with high channel sinuosity, lower slope gradient, large terrestrial drainage basin area, and fine-grained sediment contributing to the high volume of suspended material in turbidity currents that build up these types of deep-water channel system. Frontal splay In Block AD1&8, frontal splays can be found at or near the base of slope, where they are usually larger and more radial, and further out on the basin floor where they are narrower and smaller. On seismic, frontal splays appear as one to several parallel, high-amplitude reflections with moderate to good continuity, often with rapid lateral terminations. However, in planform view, they have a variety of shapes such as lobate (or tongue/tab-shaped), elongate (Fig. 9) or radial (Fig. 10). The total width of these elements exceeds 8 km and the thickness exceeds 100 m in some areas. The updip contact between frontal splays and channels is transitional or unclear. We found it more convenient to put the end members of frontal splay geometries as either elongate frontal splays (for thinner, more extended frontal splays fed by narrow channels) or clustered frontal splays (for radial or lobate geometries) (Zhang et al. 2017a). More elongate frontal splays appear to be more intimately associated with channels which are narrow and bordered by a 'double-track' of higher amplitudes, and they are often more sinuous channels than the clustered frontal splays. Clustered frontal splays are often stacked and characterized by discontinuous highamplitude reflections with a composite thickness of 200 m (Fig. 10a). Such a thick pile of sediments must have been deposited as a vertical stack of multiple frontal splays, as the individual splay ranges from 5 to 15 m in thickness (Lee et al. 2002;Jegou et al. 2008;Saller et al. 2008). A detail dissection displayed three stages of frontal splays as shown in Fig. 11. Each stage is separated by the low-amplitude reflection on seismic. The clustered and elongate frontal splay end members described above must always be considered as being made up of these smaller-scale elements, each 5-15 m thick on average, 1-2 km wide and 2-7 km long depending on the type of frontal splay. Though often blocky gamma ray log character, very thin shale breaks are often found between the splay elements, which form pressure seals in compartmentalization in some cases within these frontal splay reservoir bodies, or unexplained slow or tortuous communication between bodies which appear, or are assumed to be, more sheet-like in nature. Frontal splay is a term that we adopt here for what are usually called depositional lobes. 
This is primarily because the term depositional lobe does not imply any strict notion of scale (Shanmugam and Moiola 1991), is a very generic term for shape, is too embroiled in delta terminology with which there is little analogy, and does not have an associated, widely accepted hierarchical scheme for its architectural elements. In this paper, we use the term frontal splay to define the depositional architectural elements that are deposited from decelerating flows at the terminus of channels (Twichell et al. 1992; Saller et al. 2008; Yang and Kim 2014) and reflect the sediments that have bypassed through updip channels (typified by confined flows) and are deposited in a primarily unconfined setting. Isolated channel In Block AD1&8, isolated channels are characterized by high-amplitude, discontinuous reflections (Fig. 11). Some isolated channels have pronounced levees, but many do not. In terms of their outer morphology, isolated channels in the study area can be highly sinuous. They do not fit the CSCCS or aggradational channel levee complex definitions. An isolated channel is formed by repeated gravity-flow deposition along one channel over a period of time (Zhang et al. 2017c). Isolated channels are smaller than confined channel complex systems and are not seen to be directly associated with frontal splays, and are thus described as a separate, distinct architectural element. Mass-transport complex (MTC) Large MTCs are mainly observed in the north and the east of Block AD1&8. The seismic facies of mass-transport complexes is primarily characterized by chaotic to hummocky reflections with poor continuity and variable amplitude. Most of these chaotic packages are characterized by dim or transparent seismic reflections, locally with inclined or cross-cutting seismic patterns, in mappable lenses with rapid lateral terminations, though some have bright amplitudes that are moderately continuous (Fig. 12). Although unconfirmed by drilling, this range of reflectivity and continuity may result from (1) rafted blocks and clasts in debris-flow deposits; (2) variable sand content in this dominantly silty/muddy section; or (3) differential compaction that resulted in higher impedance contrasts with the enveloping, usually fine-grained sediments. In some areas, these facies bundles have erosive bases where the MTCs are slightly incised into underlying sediments. In the study area, their maximum width and thickness exceed 22 km and 400 m, respectively (Fig. 12b). Mass-transport complexes (MTCs) are sediments that have been re-sedimented by subaqueous mass-wasting processes (slumps, slides and large-scale debris-flow processes, often all in close association). Mass-transport complexes are formed by gravity flows moving either down the continental slope, through submarine canyons or CSCCSs, from tectonically uplifted slope areas, or laterally from canyon or confined slope channel margins. They are most commonly triggered by a range of geological processes, including tectonism, sedimentary loading, tsunami, dissociation of solid gas hydrates, storms, tidal or other longshore currents, or sea level changes (Wu et al. 2011; Qin et al. 2013). In the Bay of Bengal, mass-transport complexes have not been reported in great detail from the Bengal Submarine Fan (Ma et al., 2011), though they are important exploration targets in the Rakhine Basin (e.g., Thalin Field in Block AD7).
Judging from their broad areal extent, great thickness, and depositional position in the study area, the mass-transport complexes could have developed from collapse of the shelf edge and upper slope to the north, which may result from the rapid progradation of shelf deltas and high sediment accumulation on the slope, and from slumps to the east, which may be related to the relatively strong tectonic activity in the eastern area. Depositional model A depositional model of the northeast Bay of Bengal is shown in Fig. 13, showing the main architectural elements in planform. This is the typical planform expression of a fine-grained deep-water system. A single deposystem is active at any one time. Six different types of deep-water architectural elements, which range in character from slope facies to basin floor facies, reflect a combination of active (sediment input from canyon/channel systems) and relatively passive (slope failures and slumps) sediment supply systems in the study area. Evolutionary history of the depositional systems Examination of the distribution of the various depositional architectural elements and the evolution of the basin fill from the Eocene to the Pleistocene has been undertaken in some detail. (Fig. 13 caption: Planform model for deep-water architectural element distribution of the Rakhine Basin, offshore Myanmar. Submarine canyons, confined slope channel complex systems, aggradational channel levee complexes and MTCs feature prominently. Isolated channels have not been penetrated by any wells as yet and their downdip relationship with splays is not known.) The Upper Miocene interval was dominated by confined slope channel complex systems, frontal splays and hemipelagic mudstones. Some of the frontal splays were incised into by later channels. The Lower Pliocene was dominated by the development of confined slope channel complex systems, aggradational channel levee complexes, and locally by frontal splays and hemipelagic deposits. The Upper Pliocene-Pleistocene comprised widespread aggradational channel levee complexes, frontal splays, mass-transport complexes, and hemipelagic deposits. In Block AD6, however, an Early Pliocene delta front began to develop from the north and prograded southwards through the Late Pliocene to the Pleistocene; these deposits were incised by large-scale submarine canyons. Investigation of the depositional evolution suggests that during the Oligocene-Middle/Late Miocene the Rakhine Basin experienced rapid progradation of the deep-water slope (Zhang et al. 2017b; Li et al. 2017a), and deep-water architectural elements varied from Eocene-Oligocene hemipelagic mudstone drape to Middle/Upper Miocene confined slope channel complex systems. During the Middle/Late Miocene-Early Pliocene, the Rakhine Basin experienced gradual retrogradation, and more frontal splays developed in the Lower Pliocene than in the Upper Miocene. During the Early Pliocene-Pleistocene, the Rakhine Basin experienced gradual progradation, and architectural elements varied from Pliocene aggradational channel levee complexes and frontal splays to Pleistocene aggradational channel levee complexes, submarine canyons and a delta system. A regional strike section of the northeast part of the Bengal Fan is shown in Fig. 14, with the principal deep-water architectural elements interpreted. The switch from the confined channel complex systems in the Miocene to Pliocene-Pleistocene aggradational channel levee complexes is a response both to internal factors (changes in flow size, density and grain size; Kneller 2003) and to external factors (changes in slope gradient, sea level and climate).
Confined slope channel complex systems are characteristic of much of the Miocene, indicating that large volumes of sediment were delivered by gravity-flow-driven resedimentation events during that time. The increase in sediment input could be due to uplift of the Himalayas and the resultant steep gradient from the shelf to deep water. The increase in volume of aggradational channel levee complexes and MTCs in the Pliocene-Pleistocene indicates an increase in the volume of clay and silt. These less erosional flows could be due to the decrease in tectonic activity and the resultant decrease in steepness from the shelf to deep water. Moreover, the flows containing more silt and clay are likely to have been fed directly by hyperpycnal flows from delta distributary channels during flood and monsoon events. These flood-driven, sustained low-density turbidity currents would have an associated high volume of suspended sediment load, dominating the system. Glacio-eustatic sea level changes must also have been superimposed on this progressive background change in regional tectonics (uplift of the Himalayas since the Eocene and uplift of the Indo-Burmese Range to the east), climate change (related to the larger-scale plate tectonics) and vegetation in the hinterland. Block AD-6 is characterized by multiphasic stacked canyon fills, which are typical of upper slope-shelf facies. The N-S-trending canyon fills were sourced from the north-northwest. Besides the canyons, the discovery of shelf-edge deltas capping the canyon fills, which prograde toward the west, confirms the existence of a provenance from inland Myanmar to the east (Fig. 15). Depositional evolution from the Eocene to the Pleistocene shows the southward shift of the shelf, and N-S-trending channels indicate that the sediments came mainly from the north. However, we have also seen some lateral input from Myanmar, particularly in the form of low-stand deltas from the east in Block AD6 (Fig. 15). In Block A1/A3 (adjacent blocks and the sites of discovery of the Pliocene gas fields Shwe, Shwe Phyu and Mya), seismic stratigraphic analysis suggested two provenances in the Late Pliocene, one from the northwest as part of the Bengal Fan (Yang and Kim 2014) and one from inland Myanmar to the east. Implications for hydrocarbon exploration in the Rakhine Basin The commercial fields discovered to date are biogenic gas fields produced from Pliocene structural and structural-stratigraphic traps (Shwe, Shwe Phyu and Mya in Block A1&A3). In the distal shelf to basin floor area, Pliocene and Miocene stratigraphic-structural traps and structural traps are predominant because of relatively weak tectonic deformation. The Pliocene and Miocene submarine fan sandstones comprise the most promising reservoirs, and hemipelagic shales provide intra-formational seals. In the Shwe field, the Pliocene basin floor fine-grained sandstones of the frontal splay comprise the main reservoir. A number of such sand bodies with good lateral continuity have been identified, and logs and core samples have indicated excellent reservoir quality for the sands, with an average net-to-gross ratio of 86%, porosity of 28% and permeability of 278 md (Yang and Kim 2014). The drilling results in these adjacent blocks indicate that the main reservoirs of the discoveries are Pliocene depositional frontal splays (called depositional lobes by Yang and Kim 2014) and channel-levee-overbank sediments.
The analytical results indicate that Miocene-Pliocene channel complex systems and their associated overbanks and frontal splays, which include fine-grained sandstones and siltstones trapped by four-way closures, are good reservoir targets. In addition, MTCs are common in the Miocene-Pleistocene; crevasse splays and frontal splays associated with the supra- and pre-MTC unconformities, which form stratigraphic traps, are indicated to provide secondary reservoir targets. Other plays may include Pliocene-Pleistocene deltaic sandstones (perhaps sourced from both the north and the east) as reservoirs, with interbedded shales providing top seals and traps associated with anticlines. Conclusions Six different types of architectural elements, which vary in seismic characteristics, geometries, scales and fills, are recognized from the shelf to the slope and basin floor. The study area in the Oligocene-Middle/Late Miocene experienced rapid progradation of deep-water architectural elements, showing that the deposition varied from Eocene-Oligocene hemipelagic mudstone drape to Middle/Upper Miocene confined slope channel complex systems. The study area experienced gradual retrogradation and gradual progradation in the Middle/Late Miocene-Early Pliocene and the Early Pliocene-Pleistocene, respectively, according to the vertical variation of the architectural elements observed in the seismic data. Most of the sediments have been sourced from the Ganges-Brahmaputra fluvio-deltaic system to the north, with only minor lateral input from the Indo-Burmese Ranges to the east. Both the relatively thick sands in frontal splays and the thin sandstones in levee-overbank settings have been drilled, proved to be gas bearing, and shown to have good reservoir quality. Drilling success indicates that structural traps and trap effectiveness are very important for gas accumulation. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-04-20T14:38:40.929Z
2020-04-20T00:00:00.000
{ "year": 2020, "sha1": "c823894e92e99d3b48ad9ddfdb5e59f9532960ef", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12182-020-00442-0.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "c823894e92e99d3b48ad9ddfdb5e59f9532960ef", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
268627139
pes2o/s2orc
v3-fos-license
Juno’s JunoCam Images of Europa On 2022 September 29 the Juno spacecraft passed Europa at 355 km, the first close pass since the Galileo flyby in 2000. Juno’s visible-light imager, JunoCam, collected four images, enabling cartographic, topographic, and surface geology analysis. The topography along the terminator is consistent with previously reported features that may indicate true polar wander. A bright band was discovered, and indicates global symmetry in the stress field that forms bright bands on Europa. The named feature Gwern is shown not to be an impact crater. Surface change detection shows no changes in 22 yr, although this is a difficult task considering differences between the JunoCam and Galileo imagers and very different viewing geometries. No active eruptions were detected. Introduction The Juno mission to Jupiter launched in 2011 and entered orbit around Jupiter in 2016.Juno's polar elliptical orbit originally did not come close to any of the Galilean moons.As Juno's orbit evolved, the apojove of its 53 days orbit gradually moved south and the perijove (PJ) rotated north.As a result, the inbound portion of the trajectory ellipse started cutting across the orbits of the Galilean moons beginning in 2021 (Hansen et al. 2022).The project names each close flyby as PJnn, with the number (nn) increasing by one each perijove.In June of 2021 the spacecraft made a close pass by Ganymede a few hours before the Jupiter flyby on PJ34 (Hansen et al. 2022;Ravine et al. 2022).The orbit continued to evolve, bringing Juno relatively close to Europa on PJ37 and PJ40 at 81,752 km and 46,922 km altitude, respectively. On 2022 September 29 the Juno spacecraft made its closest pass by Jupiter's moon Europa a few hours before PJ45.Juno approached Europa from the dark side, with closest approach at 355 km altitude.Four images were acquired from Juno's visible imager JunoCam, starting at 1515 km altitude on the illuminated side. Mounted on the Juno spacecraft spinning at 2 rpm, JunoCam push-frame images are acquired as the spacecraft rotates and the field of view sweeps across the target (Hansen et al. 2017).The JunoCam field of view is 58°, with 1600 pixels across in the cross-rotation dimension.This wide field of view means that geometric parameters such as phase angle will change substantially from one side of the image to the other.JunoCam has four filters: three broadband red, green, blue (RGB) and one narrowband methane (methane was not used for the Europa flyby).An advantage with the push-frame imager is that all three colors are acquired simultaneously in one wide image. Due to the very low altitude this pass was rapid, and just a small portion of Europa was imaged at low emission angles.JunoCam acquired four RGB images of Europa with coverage over an area spanning longitudes 15W-80E, centered just north of the equator, as shown in Figures 1 and 2. The first image was acquired near the sub-Jovian point at a phase angle of over 80°, excellent lighting for discerning topography along the terminator.One RGB image was acquired on one spacecraft rotation, extending ∼120°, sufficient to capture Europa across its edges visible to JunoCam.The spacecraft was so close to Europa that the polar areas were hidden by the limb of Europa (see Figure 2). 
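As a rough consistency check (a back-of-the-envelope sketch using only the numbers quoted above, not the published camera calibration), the 58° swath spread over 1600 pixels implies an instantaneous field of view of about 0.63 mrad, which at the 1515 km range of the first image corresponds to a ground pixel scale of roughly 1 km:

```python
# Approximate JunoCam ground pixel scale from the figures quoted in the text.
import math

FOV_DEG = 58.0        # cross-rotation field of view
N_PIXELS = 1600       # detector pixels across that field
RANGE_KM = 1515.0     # altitude when the first Europa image was started

ifov_rad = math.radians(FOV_DEG) / N_PIXELS       # ~6.3e-4 rad per pixel
print(round(RANGE_KM * ifov_rad, 2), "km/pixel")  # ~0.96, i.e. ~1 km per pixel
```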
The images collected between PJ37 and PJ45 are the first well-resolved images of Europa since the New Horizons flyby of 2007, and the images collected just before PJ45 are the first high-resolution images resolving geologic features since the last Galileo images of Europa were acquired in 2000.Objectives for the Juno flyby included close examination of the surface and shallow interior using instruments not available on earlier spacecraft (e.g., Brown et al. 2023;Zhang et al. 2023) and a reexamination by JunoCam of a region not very well observed before (namely portions of the sub-Jovian hemisphere; see Figures 1-4), the subject of this report.Science objectives specifically for JunoCam included improvements in cartography, surface geology, surface change detection, and searching for evidence of active plumes. The spatial resolution is comparable to or better than previous Galileo coverage, which had a resolution of 1-6 km.A mosaic of the four images, shown in Figure 1, includes the western extent of Annwn Regio (chaos terrain), and the large ringed feature Callanish.Rays from Pwyll are visible on the eastern limb in the fourth image. Another projection of the images is shown in Figure 2(a).As each image gets larger (lower resolution), we see the added The start time of the first image was 2022-09-29T09:38:05.7, and each subsequent image is obtained 1 minute later than the previous one.The geometric properties of the images are listed in Table 1. The viewing geometry of the flyby is shown in Figure 3.The close pass began on the nightside, and the closest approach took place at 09:36:28 on the dark side (left globe).The best image for JunoCam began ∼2 minutes later when the illuminated part of Europa came into view (right globe, not to scale).The spacecraft flyby velocity was 23.6 km s −1 . Image Processing and Cartography The four JunoCam image observations were acquired over the northern sub-Jovian hemisphere between ∼350°and ∼90°E longitudes.This region was partially covered by Galileo from E25 (ESGLOBAL01) and E4 (ESGLOMAP01) observations at ∼1000 and 1400 m pixel −1 resolutions, respectively, although it should be noted that the E25 observations were particularly affected by radiation noise during image readout.JunoCam was especially useful in bridging a significant gap in prior imaging coverage between ∼30°and 45°E longitudes that was observed by Galileo at only ∼7000 m ground pixel scale, imaged even more poorly by New Horizons and Voyager, thereby fulfilling a cartography objective and improving our understanding of feature locations (though the imaging gap at ∼260°E remains). 
The nature of the JunoCam imaging system allowed us to use image-stacking techniques to extract the full information content. In this context, "mosaic" refers to the collection of RGB framelets making up one image. Each of the four main mosaics was acquired in three colors. By map-projecting each color framelet into its own mosaic at resolutions 3× better than the nominal pixel resolution and summing (or stacking) the red, green, and blue filter submosaics for each observation, we were able to substantially improve feature recognition, thus mitigating aliasing effects that often otherwise make the nearly ubiquitous narrow lineaments (mostly ridges) across much of Europa's surface in these images appear jagged (Figure 4). These stacked versions of the four JunoCam observations are then recombined with the three-color mosaics to produce full-resolution color mosaics (Figure 5). Thus, image mosaic 1 (Table 1) was map-projected at 330 m pixel−1 (at the equator), although the inherent ground pixel scale was ∼1000 m. While this technique does not increase the inherent resolving power of the images to detect features, it does significantly improve the ability to recognize and characterize many features. In order to utilize the super-resolution technique to maximum effect (as well as fulfill the search for surface change detection and other mapping objectives), it was necessary to accurately and precisely align the JunoCam framelets to each other and to the existing global control network for Europa using Galileo, Voyager, and New Horizons images (Schenk et al. 2011; Bland et al. 2021). Alignment of the JunoCam framelets was achieved using the standard Integrated Software for Imagers and Spectrometers (ISIS) qnet and jigsaw tools, in which match points linking surface features in the overlapping images are identified and the associated camera pointing kernels are then adjusted until the least-squares residuals are minimized. The four JunoCam and overlapping Galileo image sets were then map-projected and ratioed. Any misalignments of the overlapping mosaics, revealed in the ratio products as severe contrast effects, were then corrected by adding or editing match points in those areas. This process was repeated until no major misalignments could be identified.
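The stacking step can be pictured with a minimal sketch (a simplified stand-in for the ISIS-based pipeline described above, not the authors' actual code): assume each color framelet has already been map-projected onto a common grid sampled 3× finer than the native ∼1 km pixel scale, so that stacking reduces to averaging co-registered arrays. The array names and synthetic data below are illustrative assumptions.

```python
# Minimal sketch of "super-resolution" stacking: average co-registered framelets
# that were map-projected onto a grid oversampled 3x relative to native scale.
import numpy as np

def stack_framelets(projected_framelets):
    """Per-pixel mean over framelets, ignoring unfilled pixels marked as NaN."""
    cube = np.stack(projected_framelets)      # shape: (n_framelets, rows, cols)
    return np.nanmean(cube, axis=0)

# Synthetic demo: three noisy versions of the same oversampled scene.
rng = np.random.default_rng(0)
truth = rng.random((300, 300))
framelets = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(3)]
stacked = stack_framelets(framelets)
print(np.std(framelets[0] - truth) > np.std(stacked - truth))  # True: noise reduced
```

Averaging on an oversampled grid suppresses pixel-lock jaggedness along narrow ridges, which is consistent with the point above that the technique improves feature recognition without increasing the true resolving power.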
Pits, Ridges, and Terrains along the Terminator An important set of results comes from looking along the terminator zone between ∼345° and 360°E, shown in Figure 6(a). JunoCam imaging here is at much higher incidence angles (lower Sun elevations) than existing Voyager or Galileo (E25 and E4) observations, allowing easier identification of blocks, troughs, and depressions in this terminator region. The most prominent topographic features are a set of irregularly distributed, 20-50 km wide, ovoid, relatively steep-walled depressions from ∼37°N to 11°S. These are morphologically similar to the large pits observed elsewhere on Europa in association with true polar wander (TPW) described below (Schenk et al. 2009), but are the first to be mapped in this hemisphere. The four JunoCam mosaics have a maximum parallax angle difference of ∼22°. This is sufficient to resolve vertical features on the Z-scale of ∼500 m. However, because most of the surface of Europa has relief of <500 m (P. Schenk & F. Nimmo 2023, in preparation), no geologic features are resolvable in the stereogrammetric digital elevation models processed to date. No large domical features such as the one mapped on Ganymede at the sub-Jovian point (Ravine et al. 2022; P. Schenk & F. Nimmo 2023, in preparation) have as yet been identified. The ovoid depressions are also not resolved in the JunoCam stereo, but shape from shading (and attendant shadow lengths) indicates depths of up to 500 m for most of them, consistent with prior depression measurements (Singer et al. 2021). The largest pit, at 28°N, 355°E, has a depth of ∼900 m, similar to the depths of irregular steep-walled depressions associated with TPW (Schenk et al. 2009, 2020). Even the super-resolution version of mosaic 1 is not sufficient to resolve whether structural deformation occurred within these depressions. Mapping of these features in the first image, which also includes a previously unrecognized arcuate trough, reveals that most occur along the extension of one of the outer rings of the concentric TPW pattern (i.e., the reorientation of a floating ice shell with respect to the "fixed" rotation axis; Schenk et al. 2020), suggesting that many depressions observed elsewhere on Europa may be part of this pattern (Figure 6(b)). The observed sizes of these pits are all larger than those observed globally by Singer et al. (2021), which are in the <5-20 km range, but are in the range of those large pits observed in geometric association with TPW (Schenk et al. 2009, 2020). This supports a TPW origin for the observed large pits, though we note that this is not a systematic survey of all pits in the mapping area, especially smaller pits, which were mapped at image pixel scales of ∼200-250 m by Singer et al. (2021) and are not reliably identified at JunoCam pixel scales. Full mapping of pits and other TPW features is limited by the fact that the terminator mapping, including Voyager, Galileo, and JunoCam, covers only <25% of Europa's surface. Figure 5. Full-resolution RGB cylindrical map projection of Europa using the mosaic 1 imaging data from JunoCam. This projection was processed using the "super-resolution" technique as described in the text. While the JunoCam colors are reasonable approximations to the human color range, each color band here has been given an optimum contrast stretch to maximize color and brightness contrast.
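The shadow-length depth estimates quoted above can be illustrated with a rough back-of-the-envelope sketch (not the photoclinometry actually used): for a steep wall lit at incidence angle i from the vertical, a shadow of horizontal length L on a flat floor implies a wall height of about L/tan(i). The pit depth, shadow length, and pixel scale below are assumed round numbers chosen to be consistent with the values in the text.

```python
# Back-of-the-envelope depth from shadow length for a steep-walled pit,
# assuming a flat floor and a wall lit at incidence angle i from the vertical.
import math

def depth_from_shadow(shadow_length_m, incidence_deg):
    return shadow_length_m / math.tan(math.radians(incidence_deg))

# Near the terminator (incidence ~80 deg), a ~900 m deep wall casts a shadow of
# roughly 900 * tan(80 deg) ~ 5.1 km, i.e. about 5 pixels at the ~1 km/pixel
# scale of mosaic 1 -- long enough to measure in the images.
print(round(depth_from_shadow(5100.0, 80.0)))   # -> 899 (metres)
```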
Filling in the Geologic Map With the increase in spatial resolution between the Galileo images and the JunoCam images in the region imaged (Figure 7), we expect to see more detail on the surface of Europa. This enhanced level of detail allows for further identification of linear features and bright bands, and notable improvements in our identification of some chaos morphology. In addition to the resolution change, these new observations were acquired under very different lighting conditions. The large difference in lighting conditions, particularly where the JunoCam image is near the terminator, causes the imaged terrain to appear significantly different. A high incidence angle highlights texture and topography over apparent albedo differences, whereas a low incidence angle highlights apparent albedo differences and causes topography and texture to appear less conspicuous. The global geologic map was created with the Galileo images where the incidence angle was low, and so the units were mapped primarily by apparent albedo (or relative brightness). Now, with the addition of the high-incidence-angle JunoCam images, we are able to see where the texture and topography line up with what was assumed in the lower-resolution and low-incidence-angle images. Because of the high incidence angle, the knobby texture of chaos terrain particularly jumps out (Figures 6(a) and 7). The southeastern portion of the previously identified chaos terrain is now uncertain, because this region does not contain the chaos-like texture, even though it has the chaos-like low relative brightness. Due to the high incidence angle of the JunoCam images, we would expect the texture of the chaos terrain to be apparent as in the example to the north, but this is not the case in the south. This terrain, previously identified as chaos based on its relative albedo, could still be chaos terrain where the texture is smoother or unresolved, but it could also be a case of "smooth plains", a terrain type not previously identified at the global scale, but previously defined by regional mapping in other locations on Europa (e.g., Prockter & Schenk 2005). JunoCam data improved the resolution of the image coverage from the E4 and E25 Galileo encounters (partially shown in Figure 7). Two arcuate troughs identified in the Galileo data set are shown to extend further into the JunoCam image. A third, newly identified trough is visible near the terminator in the JunoCam image, aligned with the other two troughs (Figure 6(a)). Large lineaments can be seen reaching across the area, meeting on both sides. The interpretation of the linear features identified as undifferentiated lineae in the Galileo map has been updated to ridges in the JunoCam map. Smaller lineaments can also be observed. Some portions of the surface imaged by JunoCam still appear relatively featureless, but this is probably due to features that are still unresolved in the new image (Becker et al. 2023; the Juno Stellar Reference Unit image with a resolution of ∼300 m pixel−1 shows very fine linear structure that would look bland at JunoCam's resolution).
2002; Greenberg 2004; Kattenhorn 2004). The most well-studied example of a bright band is Agenor Linea, which trends east to west for approximately 1500 km in the southern anti-Jovian hemisphere (not imaged by JunoCam). Even though Agenor is referred to as "bright," it is only relatively brighter than the background ridged plains at moderate to high phase angles. Based on imaging obtained through Galileo orbit E6, Geissler et al. (1998) found that Agenor appeared darker than its surroundings at low phase angles, and then underwent a contrast reversal at about 40° phase, appearing increasingly brighter than its surroundings at higher phase angles. Further imaging of Agenor in the Galileo mission bore out these conclusions; Agenor is seen as a relatively dark feature at 27° phase in imaging from orbit E10, neutral in tone in high-resolution imaging at 45° phase in some E17 images, and relatively bright in other images from orbits E12, E14, and E17 (100°, 78°, and 71° phase, respectively). This unusual photometric behavior led Geissler et al. (1998) to suggest that Agenor may be recently active. Agenor was then targeted for high-resolution imaging later in the Galileo mission, which showed that it is not the most recently active feature in its region (Phillips et al. 2000). Images acquired late in the Galileo mission during orbit E25 revealed another Agenor-like bright band, named Corick Linea, on the northern sub-Jovian hemisphere. Corick also trends mostly east-west, curving northeast along its 1400 km length, and curiously lies almost antipodal to Agenor (Greenberg 2004). JunoCam imaging of Europa resolves the eastern end of Corick Linea, as well as the eastern end of a previously unrecognized bright band to the north of Corick, named Pasiphae Linea (Figure 8). Like Agenor, Pasiphae appears bright in JunoCam images at high phase angles (80.77° at the center of the first image, with higher values toward the terminator), but Pasiphae can be found as a prominent dark lineament at low phase angles in Galileo imaging of the sub-Jovian hemisphere in orbits G2 (2° phase), C9 (4° phase), and E10 (27° phase). The highest-resolution previous imaging of Pasiphae was obtained during Galileo orbit E25 between 33° and 41° phase angles, where it appears as a faint dark lineament or a neutral-toned band. Just like Agenor, Pasiphae appears to undergo a contrast reversal with the background plains near 40° phase angle. Pasiphae can be traced for approximately 1500 km through Galileo and JunoCam imaging. The eastern terminus of Agenor Linea was imaged at high resolution in the Galileo extended mission, and the data showed that the end of the bright band turned into a set of southward-curving fractures and ridges. These features strongly resemble "tail cracks," which are found at the terminus of a strike-slip fault, where local extension caused by motion along the fault is accommodated by a set of fractures curving toward the side of the material that is under tension. Interpretation of the tail cracks at Agenor has led to the conclusion that the bright band slipped in a right-lateral direction (Prockter et al. 2000; Kattenhorn 2004; Hoyer et al. 2014).
At the eastern termini of Corick and Pasiphae Lineae, the JunoCam images show sets of ridges curving to the north, in a splay pattern much like the eastern terminus of Agenor. Unfortunately, the resolution of the new images is insufficient to demonstrate that these ridge splays are the same age as the bright bands, but we predict that future imaging of this area at higher resolution will show a relationship between terminus and tail cracks, much like that seen at high resolution at Agenor. If this is correct, the northward curve of the tail cracks indicates that Corick and Pasiphae both formed by left-lateral slip. It is striking that Corick and Pasiphae show the opposite sense of shear from Agenor, and that the locations of their eastern termini, near (27°N, 2°E) and (39°N, 25°E), respectively, are very close to being antipodal to the eastern terminus of Agenor near (42°S, 182°E). Is this mirror-image symmetry merely a coincidence, or is there a global symmetry in the driving stresses forming bright bands on Europa? Models of diurnal tides (e.g., Rhoden et al. 2012) would predict that a fracture with the position and orientation of Agenor would slip in a right-lateral direction, and that fractures in the positions and orientations of Corick and Pasiphae would slip in a left-lateral direction. While this is a promising explanation, it does not explain why bright bands have only been observed in these two antipodal areas. Another globally symmetrical stress state could be induced by TPW, and the currently hypothesized TPW scenario for Europa would place all the known bright bands in zones of maximum tensile stress (Schenk et al. 2008). However, a single high-resolution Galileo image from orbit E25 which covers the middle of Pasiphae Linea shows that the fractures of Kermario Fossae, which are geometrically consistent with the hypothesized TPW pattern, cross-cut Pasiphae Linea. This cross-cutting relationship suggests that if both sets of features are related to TPW, then they formed sequentially at different stages of the event, with bright bands first followed by the fracturing. Alternatively, Pasiphae Linea is not geometrically related to the known TPW features. Global Europa mapping will be required to address the origins of bright bands with more confidence. Preliminary Analysis of Color and Photometric Changes One promising aspect of the new set of images is the potential to carry out regional color and stratigraphic analysis of lineae with images all acquired at the same time and lighting. The analysis of Galileo images on the anti-Jovian side of Europa done by Geissler et al. (1998) showed a likely evolution of lineae from cracks to ridges to triple bands to ancient bands. Very preliminary JunoCam analysis has shown fascinating detail: 1. A cycloidal band in the north has different-colored segments in image 1: blue-gray to the south, dark to the north. At lower phase angles in later images and as the sub-spacecraft point shifts, one of the cycloid segments changes from the dark to the blue-gray color. This observation may indicate a dependence of color and/or photometric behavior on viewing azimuth, perhaps due to aligned structures in the band (see Figure 9). 2. Many of the old bands, primarily in the north and south and trending generally north-south, are dark in the higher-phase-angle image 1, and some of them fade to blend in with the neutral background color at lower phase angles. 3. Other linear features (perhaps ridges, mostly east-west trending) are bright in image 1 at high phase, and either disappear or reverse contrast to look dark at lower phase angles.
This is a promising area for future investigation. Europa's Craters Europa's surface age has been estimated at 40-90 Myr (Bierhaus et al. 2009).A total of just 41 craters over 1 km diameter are named and listed in Doggett et al. (2009), and crater characteristics are summarized in Bierhaus et al. (2009) and Schenk & Turtle (2009).No previously unmapped or newly formed craters have been recognized in the new images.Two prominent craters mapped in the overlapping area between the Galileo and JunoCam image coverage are Gwern (previously reported diameter 21 km) and Midir (diameter 38 km).However, in the JunoCam images Gwern appears as a set of intersecting ridges rather than an impact crater (Figure 10).The new images thus show that Europa has "lost" one of its 41 named craters, Gwern, and it is now clear that the intersection of linear ridge features simply produced a quasi-circular pattern in previous images. Midir was imaged by Galileo near the evening terminator and appears as concentric albedo rings with very little topographic shading (Figure 11).The outermost bright ring was interpreted to be the crater rim.Midir is also clearly visible in the JunoCam images but has an odd appearance.It was imaged by JunoCam near the dawn terminator, lit from the opposite direction of the Galileo imaging.Due to the apparent lack of topographic shading in the Galileo images, one would expect a similar appearance in the JunoCam images.Instead, the eastern edge of the structure (previously interpreted as the rim) appears bright, when standard crater morphology would lead one to expect that this should be an inward-facing slope which should be shaded dark by the lighting angle.The apparent structural rings appear bright and not as ridge-like, and the ejecta are not as evident either.One possible interpretation of the images is that Midir is instead a smaller impact structure atop a topographic pedestal, with the pedestal probably being an inner component of the continuous ejecta blanket (Schenk & Ridolfi 2002;Schenk & Turtle 2009).Other interpretations with various combinations of albedo and topographic patterns are also possible, highlighting the need for higher-resolution images at a variety of viewing geometries to confidently identify impact craters. Surface Change Detection Europa's young surface age (e.g., Bierhaus et al. 2009) makes it conceivable that we could detect surface changes in the new images.Surface changes are common on neighboring, active Io (Phillips et al. 2000;Geissler et al. 2004).Previous attempts to map surface changes on Europa (Phillips et al. 2000;Bramson et al. 2011;Phillips 2014;Schenk 2020) have been unsuccessful.Galileo mapping coverage was nearly global but at highly inconsistent pixel scales of ∼1-10 km (with very small areas at much higher resolution) while the New Horizons observations covered ∼70% of the surface at pixel scales of only ∼14-20 km.These provide a useful time base from which to confidently make comparisons, but only for features larger than ∼25 km globally and perhaps down to ∼10 km in the JunoCam area.Furthermore, differences in imaging system filters, sensitivity, and other factors, as well as the dissimilar observation conditions for most observations, also conspire to make change detection a difficult task, as described by Phillips et al. (2000) and Schenk (2020). 
Despite the challenges, attempts were made using the 2022 JunoCam observations to identify surface changes on Europa. We used the cartographically registered, map-projected image products described above in Section 2, as any displacement of features between two of the images due to misregistration will produce artifacts that could be misinterpreted as changes. As previously described, iterative updates to the match points were made until all identifiable misalignments were removed. Difference and ratio images were then produced from the JunoCam and Galileo mosaics, in which surface changes would appear as anomalous bright or dark features (Figures 12 and 13). The E25 and E4 Galileo mosaics that overlap with JunoCam included large terminator zones in which shading and shadows were abundant in those images but not in JunoCam, introducing extensive artifacts that would make interpretation difficult. While produced, these difference images were essentially uninterpretable for change detection. Instead, difference imaging was performed using the 1997 Galileo E2 and E10 7 km pixel scale global images as reference, with the JunoCam images degraded in resolution by factors of ∼2 to provide a better match to the Galileo map products. These images provided a reasonably close match to the phase angle and incidence angle characteristics of the JunoCam 3 and 4 images (Table 1). Despite this, there were some mismatches in emission angles near the outer margins of the JunoCam mosaics due to the proximity to Europa and the much wider-angle optics of JunoCam, which resulted in some lineated terrains appearing darker in the difference product due to increased surface shading in the JunoCam perspective. These features are not candidates for change because they are common in the outer high-emission zones of the difference-image products. Potential changes on Europa's surface due to tectonic or other surface activity were not detected, and this can be attributed to the limited resolution and differences in illumination and viewing conditions for the Galileo and JunoCam observations (or a lack of any surface changes). The lack of mappable surface changes in the JunoCam imaging or in prior searches (Phillips et al. 2000; Schenk 2020; Phillips & Ireland 2023) may be due to several causes. The imaging library for Europa is severely limited, and this sharply reduces the number of imaging opportunities where the observation conditions are reasonably similar and a search can be optimally conducted. The differences between the Galileo and JunoCam observations in terms of illumination, wavelength, image noise, and other factors make comparison of geologic features in the two image sets very difficult indeed. Filter color differences are important because of the terrain color differences on Europa, causing some terrain types to be darker or brighter in one of the imaging systems. Future missions may be able to image Europa's surface at wavelengths, locations, and viewing geometries that optimize for differences in tidal flexing, phase angle, and other factors that could increase our chances of detecting ongoing geologic activity, if acquired under similar conditions as the earlier observations.
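The ratio/difference comparison described in this section can be illustrated with a simple sketch (an assumption-laden stand-in, not the authors' ISIS workflow): given two mosaics already co-registered onto the same map grid and scaled to comparable brightness, candidate changes are pixels whose log ratio departs strongly from the regional median. The function name, threshold, and synthetic data below are hypothetical.

```python
# Illustrative difference/ratio comparison of two co-registered mosaics.
import numpy as np

def flag_candidate_changes(reference, new, threshold=0.5):
    """Return a log-ratio image and a mask of pixels far from the median ratio."""
    valid = (reference > 0) & (new > 0)
    log_ratio = np.full(reference.shape, np.nan)
    log_ratio[valid] = np.log(new[valid] / reference[valid])
    anomaly = np.abs(log_ratio - np.nanmedian(log_ratio)) > threshold
    return log_ratio, anomaly

# Synthetic demo: identical scenes except for one artificially brightened patch.
rng = np.random.default_rng(1)
ref = 0.5 + 0.05 * rng.standard_normal((200, 200))
new = ref.copy()
new[80:90, 100:110] *= 2.0                      # pretend "plume deposit"
_, mask = flag_candidate_changes(ref, new)
print(mask.sum())                                # ~100 flagged pixels
```

In practice, as noted above, most pixels flagged this way trace back to differences in lighting, emission angle, or filter response rather than to real surface change.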
Surface Changes Due to Eruption Deposition Reports of putative eruptive plumes up to 100 or more km high at Europa (e.g., Roth et al. 2014; Sparks et al. 2016; Jia et al. 2018) suggest at least the possibility of surface changes due to activity between the JunoCam observations in 2022 and the New Horizons 2007 flyby and the Galileo mission observations (1996-2001). No obvious plume-like deposits are apparent in the difference products (Figure 12), which we assume would take the form of circular or elongate diffuse patches or swaths from point or linear vent sources (irregular deposition patches could also result in the event of new chaos formation; Fagents et al. 2000). There are several 10-30 km scale irregular patches that appear brighter or darker in the difference products (arrow in Figure 12), but none can be directly attributed to surface changes, as identical features are mappable in each image but merely change brightness. These can be attributed to differences in photometric properties with phase angle due to differences in roughness, particle sizes, or albedo. Because these features are not geologically resolved as any specific type of terrain or feature, interpretation is necessarily handicapped. It is possible that any putative plumes may be much smaller in scale (e.g., Fagents et al. 2000; Quick & Hedman 2020; Schenk 2020) than the available imaging allows us to search, or the plumes may be operating continuously (over decadal timescales), resulting in no significant changes over the Voyager to Galileo to New Horizons to Juno 1979-2022 timescale. It is also possible that plume-related changes or deposits are not visible at the wavelengths used by those spacecraft and require other instruments; for example, thin plume deposits could be transparent at visible wavelengths if they lack distinctive chromophores. Even thin deposits would be expected to have strongly different photometric characteristics than surfaces exposed to space for longer periods; however, no photometrically unusual plume-related deposits can be identified (Figure 12). Another plume search approach is to look for unusual color signatures, similar to those observed in plume deposits on Io (McEwen et al. 1998) and Enceladus (Schenk et al. 2011, in Icarus Saturn color report). Here we take the JunoCam red and blue projected mosaics and ratio them to search for diffuse or other features with distinct visible color slopes (Figure 14). None were observed other than the well-known linear dark reddish lineaments associated with double ridges and dark reddish chaos, which show up as dark features in our ratio map. All such features in the ratio map are identifiable as known geologic features, and thus are not plume related (unless all double ridges were <10 km scale vent sources; Fagents et al. 2000; Schenk 2020). No large 100 km scale circular or linear diffuse color signatures are evident, even though the edge of the ratio mosaic is close to at least one such proposed plume source. Thus, while it seems evident that something unusual is occurring near Europa, possibly vapor eruptions (Roth et al. 2014; Sparks et al. 2016; Jia et al. 2018; Arnold et al. 2019; and see Section 3.8), the imaging results thus far provide no evidence as to its character. No Active Plumes on Europa Our surface-mapping searches focused on surface changes related to putative plume activity on Europa. There is also a great deal of interest in whether or not there are active eruptions at Europa (Roth et al. 2014; Sparks et al. 2016; Jia et al. 2018; Arnold et al.
2019). A key test for such active venting is to search for bright plumes of material projecting into space, as seen at Io and Enceladus. All evidence to date implicating plumes is consistent with vapor eruptions; there are no data relevant to possible particulates. In spite of that, we looked carefully at the terminator and the limb for signs of an eruption, both of which are easier to process than a comparison of mapped image products acquired under nonideal circumstances. Figure 15 shows a highly stretched version of the first and highest-resolution JunoCam image. Jupiter shine illuminates the territory along the dark side of the terminator. This allows us to determine that bright spots along the terminator are not eruptions but rather are associated with high points of the terrain. No detached bright spots associated with sunlit tops of eruptive plumes can be identified in the anti-sunward direction away from the terminator in the JunoCam 1 or 2 images (the terminator was not observed in images 3 and 4). The limb of Europa in the fourth image was at ∼90°E (Figure 16), within a few hundred kilometers of where plumes have been reported previously, as listed in Table 2. However, an eruption would have to occur at just the right longitude (limb or terminator) and at just the right time when Juno flew by, so the probability of a detection was never very high. Also, the phase angle was not ideal for detection of small particles, as optical detection of plumes against space was not possible at Enceladus except at phase angles >120°, and the highest-phase JunoCam observation was at ∼90° phase. Although not ideal, it may have been possible for JunoCam to observe a plume somewhere along the limb. Despite these challenges, processing of the limb region using low-pass smoothing and contouring to suppress low-signal noise in these dark regions (Figure 17) does not reveal recognizable local-scale deviations from the near-limb profile, indicating that, if present, plumes are not identifiable in the limb or terminator regions of these images. It is also possible that the optical density of particulates within the plume is too low for JunoCam to detect. With a very low signal-to-noise ratio, JunoCam has imaged Jupiter's main ring, which has an albedo of 0.015 and an optical depth of 5 × 10⁻⁶. For comparison, the optical depth of Enceladus' jets is ∼0.0025 (Ingersoll & Ewald 2017).
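A toy illustration of the limb search just described, assuming a one-dimensional brightness profile extracted along the limb; the smoothing window and detection threshold below are arbitrary choices for the sketch, not the values used by the authors.

```python
# Smooth a limb brightness profile and flag local deviations that would stand
# out above the low-signal noise (a hypothetical stand-in for the low-pass
# smoothing and contouring described in the text).
import numpy as np

def find_limb_anomalies(profile, window=25, n_sigma=5.0):
    pad = window // 2
    padded = np.pad(profile, pad, mode="edge")           # avoid edge artifacts
    kernel = np.ones(window) / window
    baseline = np.convolve(padded, kernel, mode="same")[pad:-pad]  # low-pass
    residual = profile - baseline
    return np.flatnonzero(np.abs(residual) > n_sigma * np.std(residual))

# Synthetic limb profile: faint background noise with no plume signal.
rng = np.random.default_rng(2)
profile = 10.0 + rng.normal(0.0, 0.5, 2000)
print(find_limb_anomalies(profile).size)   # expected: 0 (no detections)
```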
Other Data Sets: PJ37 and PJ40 Polar Coverage On 2021 October 16, ∼8.5 hr before PJ37, JunoCam acquired 17 shots of Europa at an altitude of 81,752 km, primarily to test exposure times (image identifiers JNCE_2021289_37C_0001_V01 to JNCE_2021289_37C00017_V01). At a latitude/longitude of 51.3/228W and with a resolution of 55 km pixel−1, the images show interesting detail of albedo units on the surface but did not yield any new scientific insights. On 2022 February 24, ∼8 hr before PJ40, Juno came closer to Europa, with an altitude of 46,911 km. The five JunoCam RGB images collected had a best resolution of 31.6 km pixel−1 (image identifiers JNCE_2022055_40C00001_V01 to JNCE_2022055_40C00005_V01). This flyby was quite interesting as the images were centered at 77.3/129.9W. The view of the north polar region fills in gaps in Galileo coverage, as shown in Figure 18. The images covering the north pole from PJ40 do not show any remarkable changes from the surrounding terrain, with no significant color changes or visible major structural features (e.g., a prominent dark band). The apparent mottled color suggests that the terrain consists of a mixture of ridged plains and chaos. Additionally, one small area of higher-resolution imaging was acquired by the Galileo spacecraft near the north pole (25ESNPOLE01), which predominantly consists of chaos terrain (Collins & Nimmo 2009). Future imaging of the poles at higher resolution will further illuminate the terrain and any potential similarities or differences in morphology compared to the rest of the moon, which could grant insight into how terrain types vary with tidal heat or ice shell thickness. Summary and Conclusions These images of Europa from JunoCam are the first close-up images since Galileo (flyby E26, 2000 January). A number of new results are reported here. JunoCam sees a new arcuate trough and pits that add to the evidence for a TPW tectonic pattern. Cartography has been extended in a portion of the lower-resolution area in the Galileo map, and more lineae connections can be traced in a new segment of the geological map. The tectonics of bright bands are being investigated and have promising implications for understanding Europa's global stress state. The number of documented craters larger than 1 km on Europa has gone from 41 to 40. Careful comparisons of the JunoCam images with overlapping images from Galileo show no surface changes due to plume deposits or ongoing geologic activity over time intervals of 23-26 yr, though admittedly the images are not well matched in resolution, viewing geometry, and wavelength. No active eruptions were detected. Finally, from the Europa data set taken on 2022 February 24, we can say that the north polar cap of Europa at this image scale looks similar to lower latitudes. Looking to the future, Europa Clipper will be the next NASA Flagship mission to Europa, arriving at Jupiter in 2030. The European Jupiter Icy Moons Explorer (JUICE) will arrive at Jupiter in 2031, and make multiple passes by Europa on its journey to Ganymede. The JunoCam data have reminded us of all we have to look forward to in the future, with observations of Europa from Europa Clipper and JUICE. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. Figure 1. The image on the left is a composite of the four images JunoCam received, from the perspective of the fourth image. On the right side the four individual images show the rapid change in altitude and coverage. The upper left on the right side is the first image, the upper right is the second image, the lower left is the third image, and the lower right is the fourth image. The diagram on the bottom right shows the small area covered by JunoCam images relative to the Galileo map. Figure 2.
(a) Map showing the four JunoCam color images coregistered and map-projected onto a monochrome Voyager-Galileo global mosaic in simple cylindrical projection (cropped to highlight the JunoCam coverage area).The view is centered on the sub-Jovian hemisphere with some additional terrains to the east.The bright rayed crater Pwyll is just beyond the eastern edge of the JunoCam coverage area at −30, 90E but its rays of ejecta are visible in the fourth image.The abundant dark lineaments are double ridges flanked by dark reddish deposits.(b) Cylindrical maps showing phase, incidence, and emission angles and pixel scales for the four JunoCam mosaics of Europa, with range of angles and pixel scales represented in the 0-256 brightness scale.Map coverage is 60S to 90N, −12E to 105E. Figure 3 . Figure 3. Juno viewing geometry at closest approach (left) and a few minutes later (right, not to scale) when JunoCam took the four images of the illuminated surface.The color bar shows the altitude of the spacecraft along the groundtrack, changing rapidly.Black dots along the groundtrack are spaced by 1 minute.The time in the center of the disk on the left globe is the time of closest approach to Io. Figure 4 . Figure 4. Portion of JunoCam Europa mosaic 1.(a) Green filter mosaic map-projected at native 1000 m pixel −1 resolution; (b) global integrated super-resolution mosaic map-projected at 330 m pixel −1 scales, illustrating the improvement in feature recognition using the technique described in the text. Figure 6 . Figure 6.(a) The high-incidence-angle perspective on images along the terminator shows new blocks, pits, and the arcuate arm of a new trough discovered along the terminator.The 900 m deep pit is ∼24 km across.The change in identification of chaos described in Section 3.2 is also annotated here.(b) This graphic of the features consistent with true polar wander, taken from Schenk et al. (2020), adds in the cluster of JunoCam pits and the new trough on the left side.The JunoCam features are marked with "JC." Figure 7 . Figure 7.The image on the left shows the Galileo coverage while the image on the right was taken by JunoCam.The Galileo image shows the band of much less differentiated terrain, filled in by the JunoCam image on the right.While some of the differences in the two geologic maps arise from an improved mapping scale, others are due to the improved resolution, less radiation noise, and different lighting conditions of the two basemaps.The 1:15M geologic map based on the Galileo data is shown on the bottom left (Leonard et al. 2018, 2024), with a more detailed version allowed by the JunoCam imagery (∼1:10M) shown on the bottom right. Figure 8 . Figure 8.The image on the left shows two northeast-trending bright bands emerging from the terminator of the color JunoCam image at 19°N and 39°N.The image on the right shows the locations of the bright bands Corick and Pasiphae in cyan.Interpreted tail-crack structures emanating from their eastern termini are mapped as dashed magenta lines.Simple cylindrical projection; background grayscale image is the global Voyager-Galileo image mosaic.The background mosaic is downloadable from the USGS online: https://astrogeology.usgs.gov/search/map/Ganymede/Voyager-Galileo/Ganymede_Voyager_GalileoSSI_global_mosaic_1km. Figure 9 . 
Figure 9.A subtle change in color of the prominent north-south trending curved cycloid in the center is seen between the first and third images, possibly due to phase function effects.The relatively bluish tint might be interpreted as an indication of relatively youthful age. Figure 10 . Figure 10.Gwern "crater" in the Galileo basemap (left) is shown to be a set of intersecting ridges in the JunoCam images (right). Figure 11 . Figure 11.Midir crater in the Galileo basemap (left) and the JunoCam image (right).Both images were obtained at high incidence angles, but with opposite lighting directions.The Galileo image is lit from the left and the JunoCam image is lit from the right. Figure 12 . Figure 12.Difference images between Galileo (1997) and JunoCam (2022) imaging of Europa.The map is a simple cylindrical map projection centered at 40°E longitude.Left: Galileo G2 observation to JunoCam observation 4; right: Galileo C10 observation to JunoCam 4. White arrows highlight linear ridges or belts with dark difference signatures due to increased emission angles in the JunoCam observation.Black arrow highlights unusual but unresolved features with brightness difference attributed to different photometric phase functions. Figure 13 . Figure13.A portion of JunoCam observation 2, green filter, as compared to Galileo clear image 6439 from orbit I25, has been coregistered and masked to include overlap areas only, and their ratio (right).The difference in terminator position between the two images (the terminator is just off to the right side of the Galileo image) reveals details of two large vertical troughs, and Midir crater is visible near the upper right.Differences between the two images can be attributed to differences in lighting and viewing geometry; no surface changes attributed to geologic activity are visible. Figure 14 . Figure 14.Red/blue color ratio map of Europa from JunoCam observation 4. While the known reddish lineaments and irregular-shaped chaos patches are apparent, no diffuse patterns are evident to suggest ongoing plume deposition.The circular feature on the lower left is Callanish at −16.7/334.5W,identified as a large ringed feature with a diameter of 107.0 km. Figure 15 . Figure 15.This highly stretched version of the first JunoCam image was processed by citizen scientist Brian Swift.We can see the portion of the planet illuminated by Jupiter shine adjacent to normally illuminated dayside topography. Figure 16 . Figure16.Location of four limb tracks of the JunoCam mosaics (1-4).The white and black dots across the map of Europa show the approximate locations of putative large plumes identified elsewhere and tabulated in Table2.White dots might have been visible to JunoCam while black dots were not. Figure 17 . Figure 17.These figures show RGB framelets 7 and 8 from the fourth JunoCam image, stretched to look for bright eruptions along the limb.None are apparent.Similar processing was carried on all framelets in all four images. Figure 18 . Figure 18.JunoCam had a north polar view of Europa on PJ40.The figure shows (left to right) the best PJ40 JunoCam image, a polar grid on the JunoCam image, and the current existing Galileo map with the same orientation.Some albedo units close to the pole can be filled in. Table 1 JunoCam Europa Image Geometry at Center of Image Note.Ranges of values are from terminator to limb at the center row. Table 2 References for Previous Prospective Plume Locations (Vapor Only)
2024-03-23T15:14:54.731Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "d37bdd12fea5d846c7e9ba75bdd30d2950c4823f", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.3847/PSJ/ad24f4/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "98d0bac7e8018a67a250cf10b509115b9bcfe36b", "s2fieldsofstudy": [ "Geology", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255779468
pes2o/s2orc
v3-fos-license
Stable and bicistronic expression of two genes in somite- and lateral plate-derived tissues to study chick limb development Components of the limb musculoskeletal system have distinct mesoderm origins. Limb skeletal muscles originate from somites, while the skeleton and attachments (tendons and connective tissues) derive from limb lateral plate. Despite distinct mesoderm origins, the development of muscle, skeleton and attachments is highly coordinated both spatially and temporally to ensure complete function of the musculoskeletal system. A system to study molecular interactions between somitic-derived tissues (muscles) and lateral-plate-derived tissues (skeletal components and attachments) during limb development is missing. We designed a gene delivery system in chick embryos with the ultimate aim to study the interactions between the components of the musculoskeletal system during limb development. We combined the Tol2 genomic integration system with the viral T2A system and developed new vectors that lead to stable and bicistronic expression of two proteins at comparable levels in chick cells. Combined with limb somite and lateral plate electroporation techniques, two fluorescent reporter proteins were co-expressed in stoichiometric proportion in the muscle lineage (somitic-derived) or in skeleton and their attachments (lateral-plate-derived). In addition, we designed three vectors with different promoters to target muscle cells at different steps of the differentiation process. Limb somite electroporation technique using vectors containing these different promoters allowed us to target all myogenic cells, myoblasts or differentiated muscle cells. These stable and promoter-specific vectors lead to bicistronic expression either in somitic-derived myogenic cells or lateral plate-derived cells, depending on the electroporation sites and open new avenues to study the interactions between myogenic cells and tendon or connective tissue cells during limb development. Background Components of the limb musculoskeletal system have distinct mesoderm origins. Myogenic cells originate from somites, while components of the skeletal system originate from limb lateral plate mesoderm [1][2][3][4]. Reciprocal interactions between the different components of the musculoskeletal system are required during development to ensure a complete and functional musculoskeletal system. We designed a technique to study the molecular interactions between somitic-derived tissues (muscles) and lateral-plate-derived tissues (skeletal components and attachments) during limb development, using the chick model. Skeletal muscle development relies on two successive and overlapping waves of myogenesis. Embryonic myogenesis establishes the scaffold of muscles, while foetal myogenesis ensures muscle growth and maturation [5,6]. Both embryonic and foetal myogenesis rely on muscle progenitors that express the Paired homeobox transcription factors Pax3 and Pax7 [7]. In chick embryos, Pax3+/Pax7+ muscle progenitors delaminate from the ventro-lateral lips of dermomyotomes and migrate into limb buds, where they proliferate and organize in two dorsal and ventral muscle masses [8]. Muscle progenitors enter the myogenic program via the sequential activation of the bHLH myogenic regulatory factors (MRFs), Myf5, MyoD, and Myogenin. MyoD promotes cell cycle exit with the direct activation of the cyclindependent kinase inhibitor p57 kip2 [9][10][11]. 
Muscle differentiation involves a cell fusion process to give rise to multinucleated muscle fibres [5]. Once the muscle differentiation process has started, muscle masses split progressively to give rise to individualised limb skeletal muscles [12]. In parallel to skeletal muscle development, the skeleton formation occurs. The skeleton is attached to muscles via tendons and connective tissues. Skeletal elements are linked together with ligaments. Limb skeletal elements (cartilage/bone) and attachments (tendons, ligaments and connective tissues) are derived from limb lateral plate [1][2][3][4]13]. During limb development, cartilage differentiation is initiated by the condensation of the mesenchyme in the centre of the limb bud [14] surrounded by dorsal and ventral muscle masses. Consequently, in early limb buds myogenic and cartilage cells are located in different limb regions and do not physically interact. In contrast, tendon and connective tissue cells are mixed with myogenic cells in dorsal and ventral limb regions [13,15,16]. Despite distinct mesoderm origins, the development of skeleton, muscle and attachments is highly coordinated to ensure a proper functional musculoskeletal system, in which tendons transmit the force generated by muscles to bones in order to allow movement [13,16]. Limb skeleton development initiates independently of muscles [17], although mechanical forces generated by muscle contraction are needed for further bone development [17,18]. Consistent with their different mesoderm origins, limb muscles and tendons initiate developmental process independently of each other. However, tendons require functional skeletal muscles to further differentiate [13,[19][20][21]. Connective tissue differentiates from limb bud mesenchymal cells and will provide structural support to other limb tissues. Genetic modification of limb muscle connective tissues affects limb muscle formation and patterning [15,16,22]. The electroporation technique is one current method to study gene function during chick development. Since the establishment of the in ovo electroporation in 1997 [23], numerous laboratories have been using this technique to misexpress genes in chick embryos. Over the years the electroporation technique has been applied to different embryonic tissues, mostly neural tubes [24][25][26][27][28] and somites [29,30], but also in aortic endothelial cells [31]. The electroporation technique has been a useful tool to study chick limb development, to target either the muscle lineage with limb somite electroporation [32][33][34][35] or limb mesenchyme with limb lateral plate electroporation [36][37][38]. Strategies have been developed to improve gain and loss-of function experiments in chick embryos using the electroporation technique [39][40][41]. However, most studies are based on electroporation with the use of transient vectors. Due to the episomal expression of these vectors, the electroporated cells failed to maintain the transgene expression more than 48 to 72 h after electroporation [42][43][44]. It therefore prevented any study at late stages of development. Consequently, techniques based on transposon-mediated gene transfer have been designed to obtain stable gene integration into the genome and to study late developmental stages in chick embryos. To date, three transposon systems are available, the Tol2 transposon system that originate from the medaka fish [42,45], the PiggyBac and Sleeping Beauty systems [46,47]. 
PiggyBac and Tol2 transposons have been proven to be efficient in chick cells [48]. In this report, we designed new vectors that stably and simultaneously express two fluorescent proteins, Tomato and GFP (green fluorescent protein) using the Tol2 transposons and the viral T2A system. These new vectors driving the bicistronic expression of the two fluorescent proteins under the control of a ubiquitous promoter were used to stably misexpress genes in the muscle lineage or in limb lateral plate derived-tissues, using chick tissue electroporation. We also designed new stable vectors containing muscle-specific promoters to target myogenic cells at different steps of the differentiation process. Chick limb somite electroporation with these muscle-specific vectors allowed us to stably and simultaneously co-express two proteins at different steps of myogenesis. We believe that all these new vectors combined with the electroporation technique are powerful tools to study tissue interactions during limb development in chick embryos. Results and discussion Stable and bicistronic expression of TdTomato and EGFP fluorescent reporter proteins using the Tol2 transposon and the viral 2A peptide systems We previously designed stable vectors based on the Tol2 transposon, which allowed us to stably misexpress genesof-interest in chick embryos [44]. However, with this stable vector set, we had to co-electroporate two recombinant vectors, one expressing the gene-of-interest and one expressing a fluorescent reporter protein in order to follow the ectopic gene-of-interest in ovo. One limitation was that both recombinant vectors were not systematically co-integrated into the chick genome preventing any analyses at a cellular level. The IRES (Internal Ribosome Entry Site) is a system that drives the expression of two genes using an unique vector [43,49]. However, the IRES system has proven some failures regarding the expression levels of the second protein [50]. The viral 2A peptide system has been generated to circumvent the problem of different protein expression levels and has been shown to allow the simultaneous expression of several proteins in stoichiometric proportions [50][51][52]. The 2A peptides were found in viruses that used these peptides to mediate protein cleavage [51]. 2A peptides are small peptides that are selfcleaved between the last two amino acids (Gly and Pro) following a rare and conserved consensus motif ( Fig. 1) [51,52]. After translation, proteins are produced in stoichiometric proportion from one unique transcript [50][51][52]. There are several available 2A peptides derived from different viruses, which display high selfcleavage efficiency [50,51]. The T2A peptide originating from the insect Thosea asigna virus 2A was used to generate a bicistronic cassette, which links the TdTomato (Tandem dimer Tomato) and the EGFP (enhanced Green Fluorescent Protein) proteins [52]. We cloned this bicistronic cassette under the control of a ubiquitous promoter, the CMV/βactin promoter. The CMV/ βactin promoter, comprising the CMV (cytomegalovirus) enhancer and the chick β-actin promoter, has been proven to be highly efficient for transient transgenesis in chick embryos [24,34,35,53]. In order to stably integrate this cassette into the chick genome, we inserted the CMV/βactin promoter and the bicistronic cassette into the stable vector based on the Tol2 transposon system [44]. 
Because TdTomato and EGFP are targeted to the membrane and nucleus, respectively [52], this system allows the stable integration into genomic DNA and the bicistronic expression of the two proteins with comparable expression levels in different subcellular compartments (Fig. 1). At the level of the translation process, the T2A peptide will be self-cleaved between the two amino acids, Gly and Pro (double arrow), following a consensus sequence (boxed). The 19 first amino acids of the cleaved T2A peptide remains fused to the C-terminus of the TdTomato, while the Pro amino acid is added to the N-terminus of the GFP. With the 2A peptide system, one single mRNA is transcribed that produces two proteins in stoichiometric proportions. TdTomato is targeted to the membrane due to a myristoylation signal and EGFP is targeted to the nucleus due to a H2B sequence. This leads to expression of both proteins in two different subcellular compartments Chick limb somite electroporation with the PT2AL-CMV/ βactin-Tomato-T2A-GFP vector DNA electroporation was performed to hypaxial lips of dermomyotomes of limb somites of E2.5/HH15 chick embryos in order to target limb myogenic cells [35] with the stable and bicistronic PT2AL-CMV/βactin-Tomato-T2A-GFP vector. Six days after limb somite electroporation with this vector set, we observed the expression of both Tomato and GFP proteins in forelimb muscles of E8.5/HH34 chick embryos (Fig. 2a-c). Forelimb transverse sections of electroporated chick embryos showed the expression of Tomato and GFP in limb muscles and no expression in the lateral plate derived-tissues such as cartilage elements (Fig. 2d-f ). Consistent with the cellular compartment addressing sequences, Tomato and GFP were observed at the cellular membrane (Fig. 2g, arrowheads) and in nucleus (Fig. 2h, arrowheads), respectively, in limb muscles. All GFP+ nuclei were surrounded by Tomato + membrane (Fig. 2i, arrowheads). However, GFP-nuclei could be observed in Tomato + muscle fibres ( Fig. 2g-l, arrows). We believe that only a few myoblasts were sufficient to target Tomato to the entire sarcolemma of muscle differentiated and multinucleated cells, due to membrane fluidity and the fusion process of muscle cells (Fig. 2j-l). It is likely that non electroporated myoblasts fuse to electroporated muscle cells. This explained why we observed myonuclei displaying GFP (Fig. 2j-l, arrowheads) and myonuclei displaying no GFP expression ( Fig. 2j-l, arrows) in muscle fibre sarcolemma displaying red fluorescence (Fig. 2j-l). The use of the generic CMV/βactin promoter leads to Tomato and GFP expression in MF20+ cells ( Fig. 2m-o, arrowheads) and in Pax7+ muscle progenitors ( Fig. 2p-r, arrows), following limb somite electroporation. We conclude that limb somite electroporation with the pT2AL-CMV/βactin-Tomato-T2A-GFP vector leads to GFP and Tomato bicistronic and stable expression in muscle progenitors and muscle differentiated cells, in chick limbs. Either fluorescent protein can be replaced by a gene-of-interest since the two proteins are produced in stoichiometric proportion. Membrane Tomato fluorescence will be adequate to follow electroporated myotubes even though not all myonuclei are targeted with GFP. Nuclear GFP will allow the targeting of electroporated nuclei in muscle cells. 
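The product split produced by the 2A "self-cleavage" (ribosomal skipping) described above can be made concrete with a short sketch. The Python snippet below is illustrative only: the T2A sequence shown is a commonly cited placeholder rather than the exact sequence used in these vectors, and the ORFs are abbreviated dummies.

```python
# Illustrative sketch of the 2A "self-cleavage" logic described in the text.
# The T2A amino acid sequence below is a placeholder, not taken from the vectors in this paper.

T2A_PLACEHOLDER = "EGRGSLLTCGDVEENPGP"  # assumed T2A-like peptide ending in ...Gly-Pro

def split_2a_products(upstream_orf: str, peptide_2a: str, downstream_orf: str):
    """Return the two predicted translation products of an ORF1-2A-ORF2 cassette.

    The skip occurs between the last two residues of the 2A peptide (Gly|Pro):
    product 1 = ORF1 plus the 2A tag minus its final Pro; product 2 = Pro plus ORF2.
    """
    assert peptide_2a.endswith("GP"), "2A peptides end in Gly-Pro"
    product_1 = upstream_orf + peptide_2a[:-1]   # e.g. TdTomato carrying the 2A tag
    product_2 = peptide_2a[-1] + downstream_orf  # e.g. GFP with an extra N-terminal Pro
    return product_1, product_2

# Toy usage with abbreviated dummy ORFs (single-letter amino acids):
tom, gfp = split_2a_products("MTOMATO...", T2A_PLACEHOLDER, "MGFP...")
print(len(T2A_PLACEHOLDER) - 1, "residues of the 2A tag stay on the upstream protein")
```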
A p57 Muscle Regulatory Element combined with the βactin promoter drives bicistronic expression of Tomato and GFP in myoblasts and muscle fibres Limb somite electroporation with a generic promoter targets gene expression in muscle progenitors before their migration to the limb. In order to study gene function after the migration step, we designed a stable vector containing the Tomato-T2A-GFP cassette under the control of the p57MRE/βactin promoter (Fig. 3a). The p57 kip2 cyclin-dependent kinase inhibitor is directly activated by Myod in myoblasts in vitro and in vivo [9][10][11]. The addition of this mouse regulatory sequence to a generic chick βactin promoter should drive gene expression in myoblasts. The pT2AL-p57/βactin-Tomato-T2A-GFP vector was electroporated into forelimb somites of chick embryos. Six days after electroporation, we observed that limb muscles displayed red and green fluorescence, in E8.5 chick embryos (Fig. 3b-d). This was confirmed on transverse forelimb sections where we observed that muscles were expressing both Tomato and GFP (Fig. 3e-g). GFP always co-localised with Tomato showing that the GFP nuclei were always associated with Tomato + membranes (Fig. 3h-j, arrowheads). However, as for the PT2AL-CMV/βactin-Tomato-T2A-GFP vector (Fig. 2), Tomato + muscle fibres could be observed with GFP+ and GFP− nuclei ( Fig. 3h-j, arrows). Longitudinal muscle sections of E8.5 electroporated forelimbs revealed that the p57MRE-βactin promoter drove the expression of both fluorescent proteins in MF20+ muscle fibres ( Fig. 3k-m, arrowheads) as well as in mononucleated MF20-cells (Fig. 3k-m, arrows), but rarely in Pax7+ muscle progenitors (Fig. 3n-p, arrowheads). We conclude that the p57MRE-βactin promoter drives transgene expression mainly in myoblasts and muscle fibres. Combined with somite electroporation, this vector set targets gene expression in muscle cells at step downstream of muscle progenitors. Stable and bicistronic expression of Tomato and GFP fluorescent proteins in differentiated muscle cells using the Myosin Light Chain promoter There is evidence that differentiated muscle cells signal to muscle progenitors to regulate muscle growth during development [11,54,55]. One option to study the molecular dialogue between muscle fibres and muscle progenitors is to specifically misexpress genes in differentiated muscle cells. Consequently we established another stable vector, in which the Tomato-T2A-GFP cassette was inserted under the control of the mouse Myosin Light Chain (MLC) promoter (Fig. 4a). The mouse MLC promoter drives transgene expression in differentiated muscle cells [44]. In E8.5 electroporated chick embryos, we observed red and green fluorescence in limb muscles, indicating that both Tomato and GFP proteins were expressed (Fig. 5b-d). Both Tomato and GFP proteins were observed in limb muscles on transverse limb sections ( Fig. 5e-g). GFP+ nuclei were associated with Tomato + labelling ( Fig. 5e-g, arrowheads). As for the CMV/βactin and the p57/βactin promoters, we observed Tomato + muscle fibres with GFP+ and GFPnuclei ( Fig. 5e-g, arrows), due to the spread of Tomato in sarcolemma of multinucleated muscle cells. As Forelimb somites of E2.5/HH15 chick embryos were electroporated with the pT2AL-CMV/βactin-Tomato-T2A-GFP stable vector containing the Tomato-T2A-GFP cassette under the control of a general promoter. 
Six days after electroporation, at E8.5, forelimbs were collected for wholemount visualisation (a-c), immunostaining on transverse (d-i) or longitudinal (j-r) limb sections. Both Tomato and GFP fluorescent proteins were expressed in forelimb muscles, visualised in whole mount embryos (a-c). The Tomato and GFP expression was visualised in all limb muscles on transverse limb sections (d-f). Higher magnifications of muscle transverse sections showed that GFP+ nuclei were associated with Tomato in membrane (g-i, arrowheads). However, Tomato was not always associated with GFP due to the multinucleated statute of muscle fibres and membrane fluidity (g-i, arrows). Longitudinal muscle sections showed electroporated muscle fibres displaying Tomato fluorescence in sarcolemma with only a subset of GFP+ myonuclei (j-l arrowheads). GFP-myonuclei are arrowed (j-l arrows). m-r Electroporated muscle cells co-expressing both Tomato at the membrane and GFP in nuclei were observed in MF20+ muscle fibres (m-o, arrowheads) and in Pax7+ progenitors (p-r, arrows) expected and previously shown for the MLC promoter [44], Tomato and GFP fluorescence was never in Pax7+ muscle progenitors (Fig. 5h-j). We conclude that the stable and muscle-specific vector pT2AL-MLC-Tomato-T2A-GFP leads to bicistronic expression of Tomato and GFP proteins in differentiated muscle cells in chick limb. Replacing either fluorescent protein encoding genes with a gene-of-interest will efficiently drive transgene misexpression in muscle differentiated cells. 3 Stable and bicistronic expression of Tomato and GFP fluorescent proteins in myogenic cells following chick limb somite electroporation with a muscle-specific regulatory element. a Forelimb somites of E2.5/HH15 chick embryos were electroporated with the PT2AL-p57/βactin-Tomato-T2A-GFP stable vector containing the Tomato-T2A-GFP cassette under the control of the p57MRE muscle-specific regulatory element. Six days after electroporation, at E8.5, forelimbs were collected for wholemount visualisation (b-d), immunostaining on transverse (e-j, n-p) or longitudinal (k-m) limb sections. b-d Both Tomato and GFP fluorescent proteins were expressed in forelimb muscles, visualised in whole mount embryos. e-g The Tomato and GFP expression was visualised in limb muscles on transverse limb sections. Higher magnifications of muscle transverse sections showed a general co-localisation of GFP+ nuclei with membrane Tomato (h-j, arrowheads). However, Tomato was not always associated with GFP due to the multinucleated statute of muscle fibres and membrane fluidity (h-j, arrows). Tomato and GFP expression was observed in MF20+ muscle fibres (k-m, arrowheads) and in MF20-cells (k-m, arrows). n-p Nuclear GFP or membrane Tomato expression driven by the p57 regulatory element was barely observed in Pax7+ muscle progenitors (n-p, arrowheads). Scale bars, (e-m) 50 μm (k-p) Chick limb lateral plate electroporation with the generic CMV/βactin promoter drives bicistronic expression of Tomato and GFP proteins in cartilage, tendon and connective tissues In order to target the non-myogenic cells of the limb musculoskeletal system, we electroporated the pT2AL-CMV/βactin-Tomato-T2A-GFP in the forelimb lateral plate of E2/HH13 chick embryos (Fig. 5a). Three days after electroporation, fluorescence was observed throughout the forelimb (Fig. 5b, c). Five days after electroporation both Tomato and GFP proteins were diffusely expressed in chick limbs (Fig. 5d-g). 
Notably, a high fluorescence was observed in cartilage elements (Fig. 5d-f ). Transverse limb sections showed a general Tomato expression in forelimbs (Fig. 5h, i), expression which did not delineate muscles in contrast to somite electroporation with the same vector (Fig. 2). In limb muscles, Tomato was never observed in MF20+ differentiated muscle cells (Fig. 5j) nor in Pax7+ muscle progenitors (Fig. 5k). We believe that cells displaying Tomato fluorescence in limb muscles following lateral plate electroporation correspond to muscle connective tissue cells. GFP transcripts could be observed in cartilage regions (Fig. 5l-n, arrows). GFP transcripts could also been observed in tendons, which are labelled with the key tendon marker Scleraxis (Scx) (Fig. 5o-r, arrows). We conclude that limb lateral plate electroporation with the pT2AL-CMV/βactin-Tomato-T2A-GFP vector leads to biscistronic and stable transgene expression in lateral plate-derived tissues, such as cartilage, tendon and muscle connective tissues. Conclusion In summary, we designed new vectors that stably and simultaneously express two distinct proteins. Limb muscles are composed of myogenic cells originating from somites and of connective tissue cells derived from lateral plate (Fig. 6a). Myogenic cells in muscles are at different steps of the muscle differentiation process, ranging from muscle progenitors, myoblasts to muscle fibres (Fig. 6a). Limb somite electroporation (Fig. 6b-e) with a generic (Fig. 6b) or muscle-specific promoters (Fig. 6c, d) will target all myogenic cells (Fig. 6b), myoblasts and muscle a Limb lateral plate of E2/HH13 chick embryos was electroporated with the pT2AL-CMV/βactin-Tomato-T2A-GFP stable vector containing the Tomato-T2A-GFP cassette under the control of a general promoter. Forelimbs were collected for wholemount visualisation of Tomato or GFP, 3 (b, c), 5 (d-g) and 6 (l) days after electroporation. h, i Transverse sections of forelimbs 5 days after electroporation to visualise Tomato expression in limbs. j, k Tomato fluorescence was observed in limb connective tissues and never observed in muscle differentiated (j) or progenitor (k) cells. l Forelimbs were collected for wholemount visualisation of Tomato 6 days after electroporation. m-r In situ hybridisation experiments to transverse limb sections of the electroporated forelimb shown in L, with GFP (m, o, q) and Scx (n, p, r) probes at the level of the proximal (m, n) and distal (o-r) forearm. m, n and o, p are adjacent sections. m, n shows strong GFP expression in regions surrounding cartilage elements (arrow). o-r GFP expression in tendons. q, r is a high magnification of a tendon shown in (o, p arrowed). ca, cartilage, u, ulna, r, radius differentiated cells (Fig. 6c) or only muscle differentiated cells (Fig. 6d), respectively. Lateral plate electroporation with a generic promoter (Fig. 6e) target muscle connective tissue cells, while somite lateral plate electroporation with a generic promoter target myogenic cells (Fig. 6b). This provides us with tools to study the molecular interactions between cellular components of muscles. We believe that these new vectors combined with tissue-specific electroporation techniques are powerful tools to study chick limb development. Chick embryos Fertilized chick eggs from a commercial source (JA57 strain, Dangers, France) were incubated at 38.5°C. Embryos were staged according to days in ovo. 
For early stages, the following day numbers and HH (Hamburger and Hamilton) stages [56] are equivalent: E2/HH13, E2.5/HH15 and correspond to 20 and 25 somite stages, respectively. Establishment of recombinant vectors The pT2AL-MLC-Tomato-T2A-GFP plasmid was obtained as following: The Myr-TdTomato-T2A sequence was amplified by PCR from the plasmid pCS2-TdTomato-2A-GFP [52]. To facilitate subsequent cloning, one XhoI site was added to the forward primer and one BstBI site was added to the reverse primer. The purified PCR product was then inserted into pCRII-TOPO (Invitrogen) and a clone with Tomato downstream of SP6 promoter was selected, giving rise to a plasmid named TOPO/Tomato. H2B-GFP was amplified by PCR from the plasmid pCS2-TdTomato-2A-GFP [52]. A BstbI site was added to the forward primer and one PmlI site and one ClaI site were added to the reverse primer. The purified PCR product was then inserted into pCRII-TOPO (Invitrogen) and a clone with GFP downstream of SP6 promoter was selected, resulting in a plasmid called TOPO/GFP. Next, both TOPO/ Tomato and TOPO/GFP were digested with BstbI and NotI. The T2A sequence was then inserted into TOPO/ Tomato using the T4 DNA ligase (New England Biolabs) to generate a plasmid named TOPO/Tomato-T2A-GFP. The Tomato-T2A-GFP cassette was then excised from TOPO Tomato-T2A-GFP using EcoRV and XhoI and cloned into the pT2AL200R150G [57] previously digested with ClaI (blunt-ended using Fermentas T4 DNA polymerase) and XhoI. The resulting plasmid was named pT2AL-Tomato-T2A-GFP. The Myosin Light Chain (MLC) mouse promoter was removed from the pT2K-MLC-Fgf4 plasmid (previously described in [44]) using NcoI and XhoI. Both extremities were then bluntended using T4 DNA polymerase (Fermentas). The MLC promoter was next blunt ligated to TOPO GFP previously digested with XbaI made blunt. A clone with the MLC promoter inserted with ApaI in 5' and XhoI in 3' was selected resulting in a plasmid called TOPO/ GFP/MLC. Both TOPO/GFP/MLC and pT2AL-Tomato-T2A-GFP were digested with ApaI and XhoI. MLC was inserted into pT2AL-Tomato-T2A-GFP to obtain pT2AL-MLC-Tomato-T2A-GFP. Electroporation Limb somite electroporation was performed as previously described [35]. The DNA solution was systematically composed of the Tol2 stable vectors and the transient transposase vector CMV/βactin-T2TP, which allows the stable integration into the chick genome. The concentration of the different vectors was between 1.5 and 2 μg/μL and of 1/3 for the CMV/βactin-T2TP. DNA was prepared in solution containing carboxymethyl cellulose 0,17 %, fast green 1 %, MgCl 2 1 mM and PBS 1X in water. Lateral plate electroporation was performed as followed: Stage HH13-15 (E2) chick embryos were windowed following standard techniques in preparation for electroporation [58]. PBS without Ca 2+ /Mg 2+ was applied to the embryo. A capillary was backfilled with DNA solution, which was injected under 200 Pa pressure (injection duration 0.1-0.5 s and compensatory pressure 15-25 Pa) (Femtojet, Eppendorf ) into the embryonic coelom, to fill completely the anterior to posterior extent of the forelimb territory. The negative electrode (0.8 mm diameter tungsten rod with a 4-mm length and 2-mm exposed surface) was inserted into the yolk and positioned beneath the forelimb field, approximately 2 mm below the embryo. A 0.8 mm diameter platinum rod with a 1-mm exposed tip served as the positive electrode and was positioned above the forelimb field with an approximate distance of 3 mm. 
A wave pulse train consisting of 50 V, five pulses, 20 ms duration with a 200 ms interpulse interval was delivered via TSS20 electroporator and EP21 current amplifier (Intracel). Embryos were returned to 37.5°C for the remaining incubation period. DNA solution was composed of pT2AL-CMV/βactin-Tomato-T2A-GFP (1-3 μg/μL) and CMV/βactin-T2TP at a molar ratio of 1:5-1:10, diluted in a mix containing PBS without Ca 2+ /Mg 2+ and Fast Green 0.005 %. This ratio resulted in persistent gene expression in the embryonic limbs during foetal development. Immunohistochemistry Experimental forelimbs were fixed in paraformaldehyde 4 % overnight at 4°C and processed for cryostat sections (12 μm). Immunohistochemistry was performed as previously described [59]. The monoclonal antibodies MF20 that recognizes sarcomeric myosin heavy chains and Pax7 that recognizes muscle progenitors, developed by D.A. Fischman and A. Kawakami, respectively, were obtained from the Developmental Studies Hybridoma Bank developed under the auspices of the NICHD and maintained by The University of Iowa, Department of Biology Iowa City, IA 52242. After overnight incubation with the primary antibody at 4°C, biotinylated secondary antibodies (Anti-Mouse IgG2b from Southern Biotech; Anti-Mouse IgG1 from Jackson ImmunoResearch laboratories) were applied for 1 h at room temperature, followed by a 45 min incubation with Cy5-Streptavidin (Invitrogen). Hoechst (Molecular Probes) staining was performed with a dilution of 1/20000 in PBS 1X for 10 min at room temperature. In situ hybridization In situ hybridization experiments were performed for GFP and Scx probes, as previously described [35]. Image capturing Images of the wholemount electroporated limbs were acquired with a Leica stereo-macroscope equipped with a Leica DFC300 camera. After immunohistochemistry, sectioned samples images were captured using a Nikon epifluorescence microscope, a Leica DMI600B inverted microscope or a Leica SP5 confocal system.
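Because the transposon donor and the transposase helper plasmids are mixed at defined ratios in the protocols above, converting mass concentrations to molarities is a common preparatory step when making up the DNA solution. The sketch below is a generic calculation under assumed plasmid sizes; the sizes and example concentrations are hypothetical and not values reported in this paper.

```python
# Generic helper for preparing Tol2 donor / transposase DNA mixes: convert a plasmid's
# mass concentration to molarity so that molar ratios can be checked.
# Plasmid sizes and concentrations below are hypothetical placeholders.

AVG_BP_MASS = 650.0  # approximate g/mol per base pair of double-stranded DNA

def plasmid_molarity_uM(conc_ug_per_uL: float, size_bp: int) -> float:
    """Return the molar concentration (in micromolar) of a plasmid stock."""
    grams_per_litre = conc_ug_per_uL          # 1 ug/uL is numerically equal to 1 g/L
    mol_per_litre = grams_per_litre / (size_bp * AVG_BP_MASS)
    return mol_per_litre * 1e6

# Hypothetical example: a 7 kb donor at 2.0 ug/uL and a 5 kb transposase helper at 0.5 ug/uL.
donor = plasmid_molarity_uM(2.0, 7000)
helper = plasmid_molarity_uM(0.5, 5000)
print(f"donor ~{donor:.2f} uM, helper ~{helper:.2f} uM, donor:helper molar ratio ~{donor / helper:.1f}:1")
```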
2023-01-14T14:52:53.164Z
2015-10-30T00:00:00.000
{ "year": 2015, "sha1": "e786c9d45899a87db1a845e8620e1c31a824717b", "oa_license": "CCBY", "oa_url": "https://bmcdevbiol.biomedcentral.com/track/pdf/10.1186/s12861-015-0088-3", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "e786c9d45899a87db1a845e8620e1c31a824717b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
253409864
pes2o/s2orc
v3-fos-license
Computed tomography-like magnetic resonance images based on T1 spoiled gradient-echo to detect calcinosis in a patient with anti-nuclear matrix protein 2 antibody-positive juvenile dermatomyositis Computed tomography-like magnetic resonance images based on T1 spoiled gradient-echo to detect calcinosis in a patient with anti-nuclear matrix protein 2 antibody-positive juvenile dermatomyositis Yuji Fujita *, Shotaro Suzuki, Yoshiyuki Shirakawa, Shigeko Kuwashima, Shigemi Yoshihara Department of Paediatrics, Dokkyo Medical University, Tochigi, Japan Division of Rheumatology and Allergology, Department of Internal Medicine, St. Marianna University School of Medicine, Kawasaki, Japan Department of Radiology, Dokkyo Medical University, Tochigi, Japan *Correspondence to: Yuji Fujita, Department of Paediatrics, Dokkyo Medical University, 880 Kitakobayashi, Mibu, Shimotsuga, Tochigi 321-0293, Japan. E-mail: fujitay@dokkyomed.ac.jp DEAR EDITOR, An 11-year-old girl was admitted to our hospital for evaluation of a 6-month history of rash with joint pain affecting the right elbow during the past 2 months. She had no muscle weakness, myalgia or dyspnoea on exertion. Erythematous eruptions were present on the eyelids, midface and extensor surfaces of the fingers, which were suggestive of heliotrope eruption, malar rash and Gottron sign, respectively. These cutaneous findings were highly suggestive of JDM. The extensor surface of the right elbow was swollen and erythematous, on which white nodules appeared only in the flexed position but not in the extended position ( Fig. 1A and B). The remaining results of the physical examinations were normal. Blood levels of creatinine kinase and aldolase were elevated at 470 IU/l (reference, 45-163 IU/l) and 14.7 mg/dl (reference, 2.1-6.1 mg/dl), respectively. The results of tests for liver function, renal function, cell blood counts and coagulation were normal. Short tau inversion recovery MRI revealed high signals in fasciae and muscles of both thighs, which was consistent with myositis with fasciitis. Chest radiography and CT showed no abnormal findings. Radiographs of the right elbow revealed calcification in the s.c. region (Fig. 1C). Musculoskeletal ultrasonography showed high echoic lesions in the space between the skin and triceps muscle, with no signs of acoustic shadow (Fig. 1D). Contrast-enhanced T1-weighted MRI of the right elbow revealed an area of abnormal high intensity with contrast enhancement in the s.c. tissues, a finding consistent with inflammation of soft tissue but not calcification (Fig. 1E). However, CT-like MR images based on T1 spoiled gradient-echo (T1SGRE) showed a high-intensity signal in the s.c. nodules of the right elbow, suggestive of calcinosis (Fig. 1F). Analysis of myositis-specific antibodies was positive for anti-nuclear matrix protein 2 autoantibodies. She was diagnosed with JDM with anti-nuclear matrix protein 2 autoantibody-related calcinosis and treated with CSs and MTX. Herein, we describe a paediatric case of JDM with antinuclear matrix protein 2 autoantibodies presenting with calcinosis at the time of diagnosis. Calcinosis is the deposition of calcium within or under the skin, and it can cause various problems in patients with DM, such as pain, functional disability owing to joint contractures, ulceration and bacterial infection from ulceration; therefore, early diagnosis and careful follow-up are recommended [1]. 
Calcinosis develops more commonly in younger patients with anti-nuclear matrix protein 2 autoantibodies than in adult patients with DM [2] and is rarely present at the time of diagnosis. The classic location for calcinosis tends to be repeatedly pressured sites, primarily elbows, knees, fingers and buttocks [3,4], and it can be more visible in the flexed position than in the extended position, as in our case. Calcinosis can also cause local acute inflammation in the surrounding tissue, often resulting in difficulty in distinguishing calcinosis from arthritis. If joint inflammation develops in JDM, clinicians should consider the possibility of soft tissue inflammation owing to calcinosis or arthritis associated with JDM. Given that it is difficult to detect calcinosis only by ultrasonography or conventional MRI methods, calcinosis should be monitored through radiography. However, frequent and repeated assessment of calcinosis using X-rays or CT might lead to unnecessary radiation exposure to the breasts and genitalia. Recently, the usefulness of CT-like magnetic resonance images based on T1SGRE has been reported for the evaluation of various diseases, such as fractures, degenerative bone changes, bone tumours and craniosynostosis [5][6][7], and it allows us to evaluate the lesions of calcinosis harmlessly, especially in juvenile patients, who should avoid radiation exposure. Our case demonstrated that CT-like magnetic resonance images based on T1SGRE might be useful for evaluating calcification in rheumatic diseases without radiation exposure. Data availability statement The data supporting the findings of this study are available from the corresponding author, Y.F., upon reasonable request. Funding No specific funding was received from any public, commercial or not-for-profit bodies to carry out the work described in this article. Disclosure statement: The authors have declared no conflicts of interest. Consent: Written informed consent for publication was obtained from the patient's legal guardian. Letter to the Editor
2022-11-09T16:19:37.525Z
2022-11-07T00:00:00.000
{ "year": 2022, "sha1": "3bcd71389e47e833d00e202f22efe93e8eba2602", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/rheumap/advance-article-pdf/doi/10.1093/rap/rkac093/46855355/rkac093.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "76b4bdf72e11c0f5fb2bbff080c571a1bf7d2174", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
263606163
pes2o/s2orc
v3-fos-license
Natural defense against multi-drug resistant Pseudomonas aeruginosa: Cassia occidentalis L. in vitro and in silico antibacterial activity

Cassia occidentalis L. is widely used in indigenous and traditional medicine, but its impact on multi-drug resistant (MDR) bacterial infections mostly remains unknown. Therefore, this study aimed to evaluate the in vitro antibacterial efficiency of methanol and ethyl acetate extracts of C. occidentalis L. leaves (MECOL and EAECOL) against multi-drug resistant Pseudomonas aeruginosa and to identify potential antibacterial agents through computational studies targeting the LasR protein. Initially, 82 compounds were identified using GC-MS analysis, and the functional groups were determined through FT-IR analysis. Both extracts of the plant exhibited dose-dependent antibacterial activity, with MICs of 104.16 ± 36.08 μg mL−1 for MECOL and 83.33 ± 36.08 μg mL−1 for EAECOL, and an MBC of 125 μg mL−1. Among the 82 compounds, 12 potential compounds were identified based on binding scores using molecular docking with the LasR protein and MM-GBSA analysis. Furthermore, screening for ADME properties, including physicochemical features, water solubility, lipophilicity, RO5 compliance, and toxicity, identified the top three compounds: methyl dihydrojasmonate, methyl benzoate, and 4a-methyl-4,4a,5,6,7,8-hexahydro-2(3H)-naphthalenone, which also demonstrated binding affinity with the active site residues of the LpxC protein of the bacteria. Additionally, molecular dynamics (MD) simulations confirmed the binding reliability of these three phytochemicals to LasR's active pocket, comparable to the protein native inhibitory ligands (C12-HSL). The study offers scientific support for the traditional use of C. occidentalis in treating bacterial infections, highlighting the potential of the three compounds as leads for developing LasR inhibitors to combat multi-drug resistant P. aeruginosa.

Introduction
One of the significant biomedical challenges today is developing effective disease-modifying treatments for multidrug-resistant (MDR) bacteria. These MDR pathogens pose a persistent threat to public health and human well-being. Among them, P. aeruginosa infections are particularly concerning, as they have developed resistance against current antibiotics by altering metabolic pathways for survival and persistence [1]. The bacteria are classified as a top priority by the World Health Organization for innovative therapeutic approaches, and they are a key concern according to the U.S. Centers for Disease Control [2]. The bacteria cause infections with high mortality rates (up to 61%), posing significant challenges worldwide, especially

Plant collection
In January 2022, C. occidentalis L. leaves were collected from Churamonkathi, Jashore district, Bangladesh (latitude 23.2238°N, longitude 89.1646°W). The herb's authenticity was verified by Dr Sardar Nasiruddin, a taxonomist at the National Herbarium in Dhaka, Bangladesh. Collected leaves were washed, air-dried at room temperature (∼25 °C), and ground into powder, then stored in an airtight container [23].

Plant extract preparation
Phytoextracts from C. occidentalis L. leaves were obtained following standard procedures described by Rahman et al. 2020 [23].
The powdered plant material (100 g) was divided into two 1-liter conical flasks, one with 400 mL of methanol and the other with 400 mL of ethyl acetate. After 72 hours of shaking at 250 rotations per minute and 37 °C, the mixture was filtered using sterile cotton mesh and Whatman filter paper (number 1). The concentrated crude extracts were stored in sterilized tubes at 4 °C, yielding 5.5 g (dry weight, 5.5% w/w) of crude methanol extract and 5.77 g (dry weight, 5.77% w/w) of crude ethyl acetate extract from each 100 g of powdered plant material.

Qualitative phytochemical screening
Qualitative analysis of phytochemicals in MECOL and EAECOL solutions was conducted using methods previously described by Rahman et al. 2023 [24]. To detect phenolic compounds (flavonoids and tannins), alkaline reagents, FeCl3, and Pb(OAc)2 were employed. Alkaline reagent tests involved dropwise addition of MECOL and EAECOL solutions to NaOH (5%), followed by the addition of 10% HCl solution, resulting in color disappearance. FeCl3 (5%) solution was added to MECOL and EAECOL solutions (10 mg mL−1), producing a reddish-black color [25]. For tannin detection, 10 mg of MECOL and EAECOL were mixed with chloroform, and 10% conc. Pb(OAc)2 was added. The steroids test showed a dark-black color on the bottom surface. Mayer's test involved dissolving 25 mg of MECOL and EAECOL in 10 mL of aqueous HCl (1%) and adding Mayer's reagent. Legal tests were conducted to identify cardiac glycosides using pyridine, nitroprusside, and sodium hydroxide solution [26]. Fehling's assay was used to detect reducing sugars. Foaming and frothing assays were performed to test for the presence of saponin in MECOL and EAECOL solutions.

Gas chromatography-mass spectrometry (GC-MS) analysis
The GC-MS analysis followed a previously disclosed methodology with slight modifications [27]. A Shimadzu triple-quad GCMS-TQ8040 was used to detect phytochemicals in MECOL and EAECOL. Helium gas was used as the mobile phase, and an Rtx-5MS capillary column (30 m × 0.25 mm i.d., film thickness 0.25 μm) was used as the stationary phase. The column oven temperature was adjusted using dedicated software. The sample injector temperature was maintained at 250 °C for a 40 minute run in splitless mode. The instrument settings included a constant flow rate of 1 mL min−1, an interface temperature of 250 °C, an ion source temperature of 230 °C, a scanning range of 50-600 m/z, and an ionization energy of 70 eV with a scan period of 0.3 s. Phytochemicals were identified by comparing retention times and spectral patterns, and matches were made to the National Institute of Standards and Technology (NIST) database for specific content (%) estimation in MECOL and EAECOL.

FT-IR spectroscopic analysis
FT-IR analysis of MECOL and EAECOL followed a previously established method by He et al. (2020) [28]. Phytoconstituents were loaded as KBr pellets in the FT-IR sample chamber, and the infrared spectra were obtained in the range of 4500 to 400 cm−1 with a resolution of 4 cm−1.

Bacterial strains collection
A glycerol stock of a multidrug-resistant P. aeruginosa bacterial strain (GenBank accession number: OK355439) was collected from the Department of Biotechnology and Genetic Engineering, Islamic University, Kushtia, Bangladesh. This strain was isolated from medical center drainage water and tested for antibiotic susceptibility against various antibiotics, showing multiple antibiotic resistance [29].
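The relative content (%) values reported for the GC-MS peaks above are area-normalized quantities: each compound's peak area divided by the summed area of all detected peaks. A minimal sketch of that normalization is shown below; the peak areas are invented purely for illustration and the compound names are simply some of the major MECOL constituents discussed later.

```python
# Minimal sketch of the relative-content (%) estimate used for the GC-MS peaks:
# each compound's percentage is its peak area divided by the total area of all peaks.
# Peak areas below are invented purely for illustration.

peaks = {
    "stigmasterol":    1.85e6,
    "13-docosenamide": 1.20e6,
    "neophytadiene":   0.95e6,
    "phytol":          0.60e6,
}

total_area = sum(peaks.values())
for compound, area in sorted(peaks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{compound:16s} {100 * area / total_area:5.1f} %")
```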
Agar-well diffusion and disc diffusion assay
The antibacterial potential of MECOL and EAECOL was evaluated using agar-well diffusion and disc diffusion techniques against the frozen MDR P. aeruginosa stock. Bacterial colonies were grown on LB agar, and a confined colony of the bacteria was inoculated into 25 mL of LB broth and cultured at 37 °C with constant agitation at 250 rpm until the exponential-phase optical density (OD) reached 0.4 at 600 nm, as measured using a UV-spectrophotometer. Four wells were made in each LB agar Petri plate containing bacterial culture. Serial dilutions of MECOL and EAECOL (500 mg mL−1) were added to the wells. Erythromycin discs served as the standard, and erythromycin discs placed in the center of the Petri plates were used as a negative control. Discs with various extract concentrations were placed on the agar plate. Blank discs with DMSO were used as a negative control. Zones of inhibition were measured after incubation at 37 °C for 16 hours, and the experiment was performed in triplicate.

MIC and MBC analysis
The MIC and MBC of MECOL and EAECOL were determined using a two-fold serial dilution scheme. Stock solutions of MECOL and EAECOL (500 mg mL−1) were diluted with LB broth to concentrations of 250, 125, and 62.5 mg mL−1. Bacterial samples were added to each tube and incubated at 37 °C for 24 hours. The lowest concentration preventing visible bacterial growth was identified as the MIC. For the MBC, cultures were subcultured on LB agar and incubated for 16 hours at 37 °C. The lowest concentration without visible bacterial growth on the agar plate was recorded as the MBC. The experiments were conducted thrice for accuracy.

Retrieval and preparation of protein structure
The 3D X-ray crystallographic structures of the biofilm-forming LasR (PDB: 3IX3) and the lipid A biosynthesis protein LpxC (PDB: 2ves) with their native ligands (C12-HSL and BB-78485, respectively) were obtained from the RCSB PDB [6,30]. The resolution of the LasR and LpxC structures was 1.40 Å and 1.90 Å, respectively. Protein preparation was performed using the Schrodinger protein preparation wizard (version 2020-3), including bond order assignment, creation of zero-order bonds for metals, addition of disulfide bonds, and removal of water molecules [31]. The protein structures were further refined and energy-minimized using the OPLS3e force field [32].

Compounds preparation
In the GC-MS analysis, a total of 82 different phytochemicals were identified, with seven being identical in both MECOL and EAECOL extracts. The 3D structures of these phytochemicals were retrieved from the PubChem database and processed using the LigPrep wizard in the Maestro Schrodinger suite [31]. High-energy ionization states of the ligand molecules were minimized at pH 7.0 using Epik version 5.3. The OPLS3e force field was applied to identify potential chiral centers and generate potential stereoisomers, followed by another minimization step.

Molecular docking
Natural phytochemicals identified through GC-MS were analyzed for protein-ligand binding scores using extra precision (XP) molecular docking with LasR. The Glide package (Glide v-8.8) and Maestro v-12.5.139 programs were utilized to assess the best binding scores between the phytochemicals and target proteins [33,34].
Docking was performed in the standard precision mode with the OPLS3e force field. The binding positions of the reference native inhibitor ligands C12-HSL and BB-78485 within the LasR and LpxC active sites, respectively, were determined, and grid boxes were created accordingly. The center coordinates of the grid boxes were X = 7.25, Y = 2.53, Z = 33.1 for LasR and X = 9.55, Y = 3.83, Z = 29.1 for LpxC; these coordinates define the area in which the ligand docking calculations took place. XP molecular docking simulations were conducted on the two proteins with the 82 phytochemicals and the native ligands, and the protein-ligand binding energies were extracted from these simulations.

Molecular mechanics-generalized Born surface area (MM-GBSA) analysis
In this study, an MM-GBSA analysis was conducted using the Prime MM-GBSA program package to estimate the binding free energy of the ligands and to validate the docking process between the LasR protein and the ligands. The analysis considered the negative MM-GBSA ΔG bind (NS), ΔG bind Coulomb (Coulomb energy), ΔG bind H-bond (hydrogen bond energy), ΔG bind lipo (lipophilicity energy), and ΔG bind vdW (van der Waals interaction energy) for the 12 highest-interacting ligands and the native ligand of the LasR protein [35,36]. These features, representing energy contributions from different terms in the energy expression, provide valuable information on ligand, receptor, and complex structures, as well as energy differences related to strain and binding.

Pharmacokinetics (PK) and toxicity (T) analysis
In drug development, investigating absorption, distribution, metabolism, and excretion (ADME) properties, including physicochemical properties, lipophilicity, water solubility, drug-likeness, and synthetic accessibility, helps identify potentially druggable compounds [9]. We analyzed the top twelve phytoligand molecules with docking scores below five and their MM-GBSA profiles. To predict the PK properties, we used the SwissADME server (https://www.swissadme.ch/) with the SMILES-format data of these molecules [37]. Additionally, we assessed toxicity using the ProTox-II web server (https://toxnew.charite.de/) [38].

Molecular dynamics (MD) simulation
To assess protein-ligand interaction stability, a 100 ns MD simulation was conducted for the complex structures using the Desmond v6.3 program in Schrodinger 2020-3 on Linux. The simulation focused on the LasR-phytocompound complexes with a TIP3P water model. An orthorhombic box with a 10 Å distance from the center maintained a specific volume, and Na+ and Cl− ions were added to neutralize the system (0.15 M salt concentration). The OPLS3e force field was applied [39]. The complex system was minimized under the isothermal-isobaric (NPT) ensemble (pressure of 101325 pascals and temperature of 300 K). Stability and dynamic characteristics were analyzed using RMSD, RMSF, rGyr, and SASA values.

Statistical analysis
The antibacterial activity test results, performed at varying extract concentrations, were presented as the mean ± standard deviation (SD) of three independent replicates. The statistical analysis comprised one-way ANOVA conducted using OriginLab 2018 software and Bonferroni's and Tukey's post hoc tests. Significance levels were indicated as *p < 0.05 and **p < 0.01.

Phytochemical screening
Phytochemical screening of C. occidentalis L. leaves revealed the presence of various compounds using standard color-change methods, as listed in Table 1; the plant part used and the primary phytochemicals are represented in Fig.
S1 and S2.† Both MECOL and EAECOL showed light-yellow coloration with 5% NaOH, which faded to colorless with the addition of 10% HCl, indicating the presence of avonoids. Tannins were detected by the greenish-black coloration aer treatment with 5% FeCl 3 and the formation of a grey-white precipitate with lead acetate.Terpenoids were conrmed by Salkowski's test, showing a light-yellow layer in the lower phase and a brown layer at the upper interface.Steroids were indicated by a deep brown layer in the lower phase and a lightyellow layer in the upper phase of Salkowski's test.Alkaloids were absent as Mayer's test did not produce creamy-white precipitation.Legal's test showed a blood-red color, conrming the presence of cardiac glycosides.Reducing sugars were detected with Fehling's test, resulting in a deep green and yellow-lime precipitation.Saponins were present, demonstrated by the 10 minute foaming stability in the foaming and frothing experiments. FT-IR analysis FT-IR analysis of MECOL and EAECOL detected various functional groups.Peaks in the spectrum indicated the presence of specic bonds and functional groups listed in Table 2 and shown in Fig. S3.† MECOL exhibited peaks at 3700-3000 cm GC-MS analysis of MECOL and EAECOL MECOL and EAECOL exhibited 46 and 43 peaks respectively in their GC-MS chromatograms shown in Fig. 1(A) and (B).Each peak represented a distinct phytochemical, and we estimated their relative percentage amounts based on average peak areas compared to total areas with retention time (RT) provided in Tables S1 and S2.† These peaks indicated the presence of specic phytochemicals with unique identities, dependent on their chemical formula, and structure.The phytochemical composition of MECOL consisted of alkane, alcohol, alkene, steroid, terpene, amide, ester, ether, ketone, tocopherols, and phenol, with the major compounds being stigmasterol, 13-docosenamide, neophytadiene, phytol, tetrapentacontane, 1-hexadecanol, and 9octadecene (Table S1 †).Similarly, the phytochemicals identied in EAECOL included esters, carbohydrates, carbonyl compounds, amide, terpenoids, alkane, steroids, alcohol, ketone, phenol, and alkyne, with the primary constituents being esters such as 6octadecenoic acid and hexadecanoic acid, along with 3-O-methyl-D-glucose and 13-docosenamide (Table S2 †).Notably, both MECOL and EAECOL shared seven common phytochemicals (Table S3 †) as identied through GC-MS analysis. Antibacterial activity of C. occidentalis L. leaves extracts In this study, we investigated the antibacterial potential of MECOL and EAECOL against multi-drug-resistant P. aeruginosa.The agar well diffusion and disc diffusion assays were conducted using various concentrations of MECOL and EAECOL (ranging from 62.55 to 500 mg mL −1 ). Our results demonstrated that both MECOL and EAECOL inhibited the proliferation of MDR P. aeruginosa, as evidenced by inhibition zones ranging from 9.33 ± 0.57 to 17.33 ± 0.57 mm for MECOL and 13.66 ± 0.57 to 17.66 ± 0.57 mm for EAECOL in the agar well diffusion assay represented in Fig. 2(A), (C) and (E).Similarly, in the disc diffusion assay, inhibition zones ranged from 7.33 ± 1 to 15.33 ± 0.57 mm for MECOL and 12 ± 1 to 16.66 ± 0.57 mm for EAECOL shown in Fig. 2(B), (D), and (F).Comparing the two compounds, EAECOL exhibited superior antimicrobial activity against P. aeruginosa in both the agar well diffusion and disc diffusion assays.Interestingly, erythromycin, a conventional antibiotic, showed no antibacterial activity against MDR P. 
aeruginosa, which had previously been conrmed to be resistant to erythromycin.The Minimum Inhibitory Concentration (MIC) values for MECOL and EAECOL were determined to be 104.16± 36.08 mg mL −1 and 83.33 ± 36.08 mg mL −1 , respectively, while the Minimum Bactericidal Concentration (MBC) was found to be 125 mg mL −1 for both compounds presented in Fig. S4.† However, to achieve complete bacterial eradication, a higher concentration of MECOL and EAECOL was required compared to the concentration that inhibited visible bacterial growth in vitro.The experiments were independently repeated three times, and S4.† These 13 compounds were further subjected to MM-GBSA analysis and in silico investigations.Table S5 † and Fig. 3B reveal that 12 phytochemical ligands exhibited favorable binding free energy scores compared to the native ligand of LasR (−13.9 kcal mol −1 ), except CID 91716874, which showed a score of −12.71 kcal mol −1 .Among these, CID 91716874 and CID 5365371 demonstrated the lowest and highest negative MM-GBSA DG binding (NS) scores of −12.71 and −47.19 kcal mol −1 , respectively.The post-docking analysis of LasR-ligand complexes indicated various interaction energies, including DG bind Coulomb (Coulomb energy), DG bind Hbond (hydrogen bond energy), DG bind lipo (lipophilicity energy), and DG bind vdW (van der Waals interaction energy), contributing to the overall binding stability of the complexes.Overall, our ndings suggest that the 12 identied compounds have a strong binding affinity compared to the native ligand of the LasR protein, indicating their potential as promising candidates for further exploration in inhibiting the QS signaling of P. aeruginosa. Pharmacokinetics (ADME) and toxicity (T) The ADME and toxicity assessment is vital for drug development, ensuring safety and efficacy for regulatory approval.Therefore, the ADME and toxicity of the 12 compounds have been identied. 
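The RO5, lipophilicity, and related physicochemical criteria used in this screen can also be reproduced locally. The following sketch, assuming RDKit is installed, computes the relevant descriptors for one of the eventual best hits, methyl benzoate; the authors used the SwissADME and ProTox-II web servers, so this is only an illustrative analogue of that screen, not their actual pipeline.

```python
# Minimal local analogue of the RO5/physicochemical screen described in this section,
# assuming RDKit is available. Methyl benzoate is used as the example compound.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

smiles = "COC(=O)c1ccccc1"  # methyl benzoate
mol = Chem.MolFromSmiles(smiles)

mw   = Descriptors.MolWt(mol)            # molecular weight
logp = Crippen.MolLogP(mol)              # lipophilicity (cLogP estimate)
hbd  = Lipinski.NumHDonors(mol)          # hydrogen bond donors
hba  = Lipinski.NumHAcceptors(mol)       # hydrogen bond acceptors
rot  = Descriptors.NumRotatableBonds(mol)
tpsa = Descriptors.TPSA(mol)             # topological polar surface area

violations = sum([mw > 500, logp > 5, hbd > 5, hba > 10])
print(f"MW={mw:.1f}  cLogP={logp:.2f}  HBD={hbd}  HBA={hba}  RB={rot}  TPSA={tpsa:.1f}")
print("Lipinski (RO5) violations:", violations)
```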
Human intestinal absorption (HIA) is essential for drug bioavailability, with seven compounds showing high HIA and five exhibiting low HIA. BBB permeability was observed for seven compounds, indicating CNS-targeting potential, while five compounds lacked this potential. All 12 compounds were non-inhibitors of CYP450 enzymes, ensuring reduced metabolic interference. Nine compounds showed moderate clearance for drug excretion, while three had high clearance tendencies. In terms of toxicity, the 12 compounds showed promising results in the in silico evaluations, being non-hepatotoxic, non-carcinogenic, non-mutagenic, non-immunogenic, and non-cytotoxic. Seven compounds were non-toxic according to EPA toxicity categories, while five were toxic; all are listed in Table S6.† The physicochemical properties of the 12 compounds were assessed for their influence on ADME. All compounds displayed favorable molecular weight, hydrogen bond acceptor (HBA), hydrogen bond donor (HBD), heavy atom, and topological polar surface area (TPSA) values for absorption. Additionally, they had appropriate rotatable bond (RB) counts, indicating good absorption, as listed in Table S7.† Molecules with cLogP values of ≤5 are considered to have strong absorption, and nine compounds met this criterion. Three compounds were predicted to cross the membrane bilayer based on acceptable LogS values. Five compounds satisfied Lipinski's rule of five for drug-likeness, and eleven compounds showed ease of synthesis. Among these compounds, methyl dihydrojasmonate, methyl benzoate, and 4a-methyl-4,4a,5,6,7,8-hexahydro-2(3H)-naphthalenone were identified as the best-hit compounds based on their favorable pharmacokinetics, lack of toxicity, and suitable druggability profiles.

Protein-ligand interactions
Various non-bonded interactions, such as hydrogen bonds, hydrophobic bonds, and electrostatic bonds, were observed between LasR and the best-hit phytochemicals. Methyl dihydrojasmonate displayed hydrogen bonds with TYR 56 and ASP 73, alongside other interactions, resulting in a binding affinity of −5.923 kcal mol−1. Methyl benzoate showed hydrogen bonds with TYR 56 and SER 129, and other interactions, with a binding affinity of −5.811 kcal mol−1. 4a-Methyl-4,4a,5,6,7,8-hexahydro-2(3H)-naphthalenone had a binding affinity of −5.472 kcal mol−1 with LasR, without a hydrogen bond but with various other interactions. In contrast, the LasR-native ligand complex formed with a binding energy of −5.375 kcal mol−1, featuring a single TRP 60 hydrogen bond and other interactions. Fig. 4 illustrates the 3D and 2D interactions between the three designated phytocompounds (CID: 7150, CID: 136654, CID: 102861) and LasR, along with the LasR native ligand (CID: 3246941), within the protein's active pocket. The molecular docking interactions and interacting residues of the LasR quorum sensing protein with the best phytochemical compounds and the native ligand are listed for each docked complex in Table S7.†

Multi-targeting capabilities analysis of the phytochemicals
To explore the inhibitory potential and multi-targeting capabilities of the lead phytochemicals from C. occidentalis, we conducted molecular docking simulations with the LpxC enzyme. LpxC plays a crucial role in the synthesis of the toxic bacterial outer lipid A membrane in Gram-negative P.
Protein–ligand interactions

Various non-bonded interactions, such as hydrogen bonds, hydrophobic contacts, and electrostatic interactions, were observed between LasR and the best-hit phytochemicals. Methyl dihydrojasmonate displayed hydrogen bonds with TYR 56 and ASP 73, alongside other interactions, resulting in a binding affinity of −5.923 kcal mol−1. Methyl benzoate showed hydrogen bonds with TYR 56 and SER 129, together with other interactions, and a binding affinity of −5.811 kcal mol−1. 4a-Methyl-4,4a,5,6,7,8-hexahydro-2(3H)-naphthalenone had a binding affinity of −5.472 kcal mol−1 with LasR, without a hydrogen bond but with various other interactions. In contrast, the LasR–native ligand complex formed with a binding energy of −5.375 kcal mol−1, featuring a single TRP 60 hydrogen bond and other interactions. Fig. 4 illustrates the 3D and 2D interactions between the three designated phytocompounds (CID: 7150, CID: 136654, CID: 102861) and LasR, along with the LasR native ligand (CID: 3246941), within the protein's active pocket. The molecular docking interactions and interacting residues of the LasR quorum sensing protein with the best phytochemical compounds and the native ligand are listed for each docked complex in Table S7.†

Multi-targeting capabilities analysis of the phytochemicals

To explore the inhibitory potential and multi-targeting capabilities of the lead phytochemicals from C. occidentalis, we conducted molecular docking simulations with the LpxC enzyme. LpxC plays a crucial role in the synthesis of lipid A, a toxic component of the outer membrane of Gram-negative P. aeruginosa, and is a key target for antibiotic development. The phytocompounds CID 102861, CID 136654, and CID 7150 exhibited interactions with the LpxC target protein, with binding energies of −4.772, −4.488, and −3.505 kcal mol−1, respectively. In comparison, the native inhibitor of LpxC, CID 9823454, bound with a binding energy of −4.604 kcal mol−1. This suggests that the potential lead phytocompounds have additional inhibitory interactions with the LpxC enzyme involved in lipid A biosynthesis, as shown in Table S9.†

MD simulation

To assess the stability of the three potential hit phytocompounds (CID 7150, CID 102861, and CID 136654) within the protein's binding site, MD simulations of the protein–ligand complex structures were conducted. The analysis considered parameters such as root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (Rg), and solvent accessible surface area (SASA) to evaluate the constancy of the interactions between the LasR protein and the selected phytocompounds.

RMSD analysis

During the 100 ns simulation, the RMSD of Cα atoms was measured to evaluate the stability of the protein–ligand complex structures. Fig. 5A illustrates the average fluctuation of the selected phytocompounds (CID 7150, CID 102861, and CID 136654), which showed values of 1.45 Å, 1.47 Å, and 1.62 Å, respectively. The LasR apoprotein and the native-ligand complex exhibited average fluctuations of 1.42 Å and 1.65 Å, respectively. These minimal fluctuations, falling well within the acceptable range of 1–3 Å, indicate the conformational stability of the protein–ligand complex structures. Throughout the simulation trajectory, the compounds remained stable, with only minor fluctuations.

RMSF analysis

To assess the flexibility of the LasR protein in response to specific ligand interactions, RMSF values of compounds CID 7150, CID 102861, and CID 136654 were analyzed (Fig. 5B). The LasR apoprotein showed peak fluctuations at the HIS 169, GLU 168, ASP 43, PHE 167, and LYS 42 residues, while the native-ligand complex exhibited fluctuations at the HIS 169, GLU 168, and PHE 7 residues. The three selected compounds shared a common peak area at the HIS 169 residue location, indicating significant alterations during the simulation. CID 3246941 (native ligand) exhibited a higher RMSF value of 5.308 at residue 162, along with the apoprotein. For CID 7150, CID 102861, and CID 136654, RMSF values were 3.467, 2.368, 3.551, and 4.623, respectively, with CID 7150 showing the highest rigidity.

Radius of gyration (Rg)

The radius of gyration (Rg) was analyzed to assess the distribution of atoms around the axis of the protein–compound complexes. The stability of CID 102861, CID 7150, CID 136654, and the native ligand (CID 3246941) in complex with the LasR protein was examined based on their Rg values over the 100 ns simulations (Fig. 5C). The native ligand (CID 3246941) and CID 102861 showed Rg ranges of 5.265 Å to 4.431 Å (fluctuation of 0.834 Å) and 3.755 Å to 3.097 Å (fluctuation of 0.658 Å), respectively. On the other hand, CID 7150 and CID 136654 exhibited Rg ranges of 2.395 Å to 2.263 Å (fluctuation of 0.132 Å) and 2.441 Å to 2.333 Å (fluctuation of 0.108 Å). Notably, all three phytocompounds demonstrated greater stability over the 100 ns simulations, with lower fluctuation ranges compared to the native-ligand complex, suggesting minimal conformational changes in the LasR active site due to the binding of the selected phytocompounds.
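For readers unfamiliar with these metrics, the following is a minimal NumPy sketch of how RMSD and the mass-weighted radius of gyration are typically computed from trajectory coordinates. The coordinates below are synthetic toy data; the actual analysis would read frames from the MD engine's output and superpose each frame onto the reference structure first. This is not the authors' simulation workflow.

```python
# Toy RMSD and radius-of-gyration calculation on synthetic coordinates.
import numpy as np

def rmsd(frame: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square deviation between two (N, 3) coordinate sets (assumed pre-aligned)."""
    return float(np.sqrt(np.mean(np.sum((frame - reference) ** 2, axis=1))))

def radius_of_gyration(frame: np.ndarray, masses: np.ndarray) -> float:
    """Mass-weighted radius of gyration of an (N, 3) coordinate set."""
    com = np.average(frame, axis=0, weights=masses)
    sq_dist = np.sum((frame - com) ** 2, axis=1)
    return float(np.sqrt(np.average(sq_dist, weights=masses)))

rng = np.random.default_rng(0)
ref = rng.normal(size=(50, 3))                       # reference Calpha positions (toy data)
masses = np.full(50, 12.0)                           # carbon-like masses
traj = [ref + 0.1 * rng.normal(size=ref.shape) for _ in range(100)]  # 100 toy frames

print("mean RMSD (A):", np.mean([rmsd(f, ref) for f in traj]))
print("mean Rg   (A):", np.mean([radius_of_gyration(f, masses) for f in traj]))
```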
Discussion

P. aeruginosa poses a significant threat to public health, especially in hospitals and intensive care units, due to its multidrug-resistant nature [39]. Current treatment strategies rely on a combination of antibiotics, but the emergence of resistance and the associated side effects have created a pressing need for innovative antibacterial medications [40,41]. In this context, there is growing interest in exploring phytocompounds as potential antimicrobial agents, as they offer diverse chemical structures and biological activities with minimal side effects.

Our study focused on the antibacterial potential of C. occidentalis leaf extracts and their phytocompounds against MDR P. aeruginosa. Prior research has explored the antimicrobial activity of these leaf extracts against various pathogens, but there was a lack of information on their activity against MDR P. aeruginosa. Therefore, we aimed to investigate the antibacterial properties of these phytocompounds, particularly targeting LasR, a crucial factor involved in drug resistance.

Initially, GC-MS analysis identified 82 chemical structures, which were docked and analyzed by MM-GBSA to assess their potential as antibacterial agents targeting LasR. Molecular docking simulation identified 13 lead phytocompounds with higher binding affinity, whereas MM-GBSA identified the 12 best compounds binding to LasR, whose pharmacokinetics, toxicity, and drug-ability profiles were then evaluated. CID 7150, CID 102861, and CID 136654 showed the most promising characteristics and were selected for further studies. We also evaluated the interaction of these phytocompounds with LpxC, an enzyme involved in lipid A biosynthesis, which is crucial for bacterial outer-membrane synthesis [42]. The lead phytocompounds demonstrated multi-targeting ability and drug-like properties, making them attractive candidates for antimicrobial therapeutics.

Molecular dynamics simulations were performed to assess the stability and structural changes of the protein–ligand complexes. The RMSD, RMSF, Rg, and SASA analyses revealed that the selected phytocompounds formed stable complexes with LasR, indicating their potential as antibacterial agents against MDR P. aeruginosa.

Among the lead phytocompounds, methyl dihydrojasmonate (CID 102861) has been reported for its diverse activities, including anticancer [43] and antimicrobial properties [44]. Methyl benzoate (CID 7150) has shown promise as an environmentally friendly insecticide [45] and as an antibacterial agent [46] against Gram-negative bacteria. Considering the diverse parameters used for evaluation, CID 7150, CID 102861, and CID 136654 were identified as the most promising lead phytocompounds for developing antibacterial drugs against MDR P. aeruginosa and related infectious diseases. These compounds warrant further investigation in in vivo and human studies to validate their potential as antimicrobial therapeutics.

Conclusion

In this study, C. occidentalis L. extract exhibited antibacterial activity against MDR P. aeruginosa, as demonstrated by inhibitory zones in disc diffusion and agar well diffusion tests. Computer-aided drug design identified three lead compounds (methyl dihydrojasmonate, methyl benzoate, and 4a-methyl-4,4a,5,6,7,8-hexahydro-2(3H)-naphthalenone) that competitively inhibited LasR, the key signaling-molecule receptor responsible for P. aeruginosa's virulence and multi-drug resistance. These findings offer potential for developing new bioactive compounds to combat antibiotic-resistant infections. Further in vivo evaluations are needed to validate our results.
Mean values with standard deviations were presented. Statistical analyses indicated that EAECOL showed a significant difference from MECOL in the agar well diffusion and disc diffusion assays (*p < 0.05, **p < 0.01). The standard antibiotic erythromycin discs further validated the results.

Molecular docking and MM-GBSA analysis

Our study performed molecular docking simulations and post-docking MM-GBSA analysis of 82 phytochemicals derived from C. occidentalis L. leaves against the QS signaling molecule receptor, LasR, of P. aeruginosa. Of the docked phytochemicals, 13 compounds displayed significant negative binding affinities, ranging from −5.417 to −8.01 kcal mol−1, in comparison to the native LasR ligand, N-3-oxo-dodecanoyl-L-homoserine lactone (CID: 3246941), which had a binding score of −5.375 kcal mol−1, as shown in Fig. 3A and listed in Table S4.†

Fig. 1 GC-MS spectrometry of C. occidentalis L. leaf extracts, showing different peaks representing various compounds. (A) Chromatogram of MECOL and (B) chromatogram of EAECOL.

Fig. 2 Antibacterial activity of MECOL and EAECOL against multi-drug resistant P. aeruginosa. The agar well diffusion assay and the inhibition zones exhibited by MECOL (A, E) and EAECOL (C, E) at different concentrations. The disc diffusion assay displays the inhibition zones produced by MECOL (B, F) and EAECOL (D, F).

Fig. 3 Molecular docking and MM-GBSA of phytochemicals derived from C. occidentalis L. against LasR. (A) Docking-based binding energy of the top 13 phytochemicals derived from C. occidentalis L. The binding energy values were obtained through molecular docking simulations, which predict the strength of interaction between the phytochemicals and the LasR protein. (B) Post-docking MM-GBSA ΔG binding (NS) scores of the compounds with the LasR protein. The MM-GBSA method was applied to further assess the binding affinities of the phytochemicals identified in the docking study. Lower MM-GBSA ΔG binding (NS) scores indicate stronger binding interactions between the compounds and the LasR protein.

Fig. 4 Molecular interactions between the P. aeruginosa LasR protein and three specific phytocompounds, illustrated in both 3D (left) and 2D (right) representations. The compounds are as follows: (A) CID: 7150, (B) CID: 136654, (C) CID: 102861; additionally, (D) represents the LasR protein with its native ligand, CID: 3246941. The interactions are visualized within the active pocket of the LasR protein.
Fig. 5 Molecular dynamics simulation of representative ligand–protein complexes calculated from a 100 ns simulation. (A) RMSD values extracted from Cα atoms of the docked protein–ligand complexes. The LasR apoprotein and native-ligand complex are shown in gray and gold, respectively, while the selected compounds CID 7150, CID 102861, and CID 136654 are represented by red, green, and orange, respectively. (B) RMSF values extracted from protein Cα atoms of the docked protein–ligand complexes. The LasR apoprotein is depicted in light blue, the native ligand CID 3246941 in dark blue, and the designated compounds CID 7150, CID 102861, and CID 136654 in complex with LasR in orange, gray, and gold, respectively. (C) Radius of gyration (Rg) of the protein–ligand complexes. Rg values of CID 7150, CID 102861, CID 136654, and the native ligand (CID 3246941) in complex with LasR are represented by blue, orange, gray, and gold, respectively. (D) Solvent accessible surface area (SASA) of the protein–ligand complexes. SASA values of CID 7150, CID 102861, CID 136654, and the native ligand (CID 3246941) in complex with LasR are denoted by blue, orange, gray, and gold, respectively.

Table 2 FT-IR spectral analysis of MECOL and EAECOL. The characteristic peak position in the spectrum usually corresponds to the vibrational mode of a particular bond.
2023-10-04T05:13:39.916Z
2023-09-26T00:00:00.000
{ "year": 2023, "sha1": "24968e00073eeb858f91742c9af5929024f36cc2", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/ra/d3ra03923d", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "24968e00073eeb858f91742c9af5929024f36cc2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
131335082
pes2o/s2orc
v3-fos-license
The Ocean Biogeographic Information System (OBIS): an On-line, Worldwide Atlas for Accessing, Modeling and Mapping Marine Biological Data in a Multidimensional Geographic Context

OBIS is a component of the Census of Marine Life (CoML), an international program to assess and explain the diversity, distribution, and abundance of marine life. There is no adequate system for retrieval of ocean biological data. The few existing databases do not usefully summarize known distributions and abundance of marine life, nor are they organized to encourage frequent use and intercomparison of datasets.
An on-line, user-friendly system for absorbing, integrating, and assessing data about life in the oceans will stimulate taxonomic and systematic research and generate new hypotheses concerning evolutionary processes, factors related to maintenance of species distributions, and roles of marine organisms in marine ecosystem function. Over the past three decades there have been major advances in understanding the outlines of relationships among broadly-defined trophic units and their biogeochemical roles in ecosystems. Understanding of spatial pattern of marine ecosystems, their evolution, and how they respond to environmental change will require greater use of species-level data. The geographical boundaries of ecosystems are poorly defined and there is little agreement on the number of faunal provinces and their boundaries. The United Nations Environmental Program Global Biodiversity Assessment (UNEP, 1995) provides two biogeographic maps: one recognizes six classes of "Oceanic Realms" (p. 100) and the other classifies coastal fisheries management areas into 49 "Large Marine Ecosystems" (p. 502). A more recent global classification of surface waters recognizes 51 geographic provinces (Longhurst, 1998). Accurate data on the spatial and temporal distribution of most marine species are not readily available. There are no good maps of marine biodiversity (Grassle and Stocks, 1999) and biogeographic classifications seldom consider life in deep-sea sediments where communities have high marine biodiversity. Description of previously unknown species and higher taxa of marine life from both deep-sea and shallow bottom communities continues steadily and, in the case of some groups, at an increasing pace ( Figure 1). As accurate data on abundance and distribution of marine taxa accumulate, there is an urgent need for information systems to retrieve and analyze data in the context of temporal and spatial patterns of synoptic physical and biological data from satellites, improved bathymetry, and output from global models. In the past two decades, oceanography has become a more mature, integrative, interdisciplinary science. The roles of major taxa in chemical and geological processes in the ocean are becoming better understood and the field of biogeochemical research has grown rapidly. Biologists and physical oceanographers are working together to model interactions of taxonomic and functional groups of organisms amongst each other and with the physical environment. Such coupled physical/biological models increase the resolution of biological elements to provide greater realism. In parallel, greater integration within the biological sciences is Bouchet (1997 reflected in new programs to study biocomplexity and use of the term "integrative biology" to describe some academic departments and a scientific journal. The trend toward integration of biological research is stimulated by the rapid growth in the application of molecular techniques to a broad range of biological processes. Species are the basic units of biological organization and evolutionary change. Species definitions now include characteristics of the genome using a variety of molecular technologies, morphological description based on new tools for identification, abundance data, as well as the traditionally-used data on geographical location of collecting sites. 
Molecular genetics data useful in defining species can also be analyzed to indicate evolutionary relationships among species (systematics), variations in the temporal and spatial scales of contemporary species interactions, relationships between species assemblages and the environment (ecology), and, by study of specific enzymes, their relative contribution to ecosystem function (biogeochemistry). A system for retrieval and analysis of geo-referenced data from all levels of biological organization in the context of data describing the full range of oceanographic processes is required. The success of OBIS will be measured by its ability to store, search, and retrieve data; to analyze the spatial and temporal organization of new biological information in a four-dimensional map-based environmental context; and to suggest previously unobserved ecological and evolutionary relationships. The present record of life in the ocean is mostly based on collections in museums and scientific publications that provide lists of species names, location (latitude, longitude, and depth), sometimes numbers or biomass, and usually something about how they were collected. These names and numbers are a pale reflection of reality but, to an experienced marine biologist, these data describe a rich world of plants and animals interacting amongst themselves and their environment. Coral reef data may evoke the experience of swimming over a reef edge surrounded by a plethora of brightly-colored fish. Data from a deep-sea mud sample recall months looking at small invertebrates under a microscope for a glimpse at one of the most species-rich environments on the planet. A sample from Buzzards Bay in Massachusetts yields a rare crustacean near the base of an evolutionary tree leading to shrimp and crabs. Counts and names of worms and clams evoke porthole views from submersible dives to deep-sea hydrothermal vents. Every year, optical or acoustic imaging devices used on every imaginable platform from submersibles to satellites provide broader coverage and more highly resolved representations of life in the ocean. Despite these advances, even the best images do not provide a systematic, quantitative representation of the distribution of and relationships among organisms and how this living environment changes with time. OBIS will make it possible to conjure an image of the environment through relationships among datasets, including databases of images themselves. To provide quality control for OBIS as methodologies change and improve, museum specimens will be even more important as reference material for comparison. Many of the papers in this issue originated at an international Workshop on the Ocean Biogeographical Information System held in Washington, D.C., 3-4 November 1999. Names of attendees, organizations represented, and a brief summary can be found on the Consortium for Oceanographic Research and Education web site (http://core.cast.msstate.edu/censobisl.html). After considerable discussion, participants defined OBIS as: "An on-line world-wide marine atlas 'infrastructure' providing scientists with the capability of operating in a four-dimensional environment so that analyses, modeling, and mapping can be accomplished in response to user demand through accessing and providing relevant data." Emphasis was placed on interoperability through common definition of metadata standards and protocols for the distributed, multi-tiered architecture of OBIS. 
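To make the kind of record described above concrete (species name, latitude, longitude, depth, sometimes counts or biomass, and collection details), the sketch below shows one way a federation node might represent and query geo-referenced occurrence records. The field names and the example data are purely illustrative assumptions; they do not represent an agreed OBIS schema or any real dataset.

```python
# Hypothetical geo-referenced occurrence record and a trivial spatial query.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OccurrenceRecord:
    scientific_name: str
    latitude: float          # decimal degrees (WGS84 assumed)
    longitude: float
    depth_m: float           # sampling depth in metres
    date: str                # ISO 8601
    abundance: Optional[int] # individuals per sample, if reported
    source: str              # contributing database or institution

records = [
    OccurrenceRecord("Calanus finmarchicus", 62.0, -10.5, 50.0, "1998-06-14", 120, "example-node"),
    OccurrenceRecord("Calanus finmarchicus", 44.2, -63.1, 30.0, "1999-07-02", None, "example-node"),
]

def in_bounding_box(r: OccurrenceRecord, lat_min, lat_max, lon_min, lon_max) -> bool:
    """Simple spatial filter of the kind a mapping front-end might issue."""
    return lat_min <= r.latitude <= lat_max and lon_min <= r.longitude <= lon_max

north_atlantic = [r for r in records if in_bounding_box(r, 40, 70, -70, 0)]
print(len(north_atlantic), "records fall inside the query box")
```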
It was agreed that OBIS would be managed as a federation of database sources that reach agreement on the means to achieve interoperability, yet allow a high degree of autonomy with respect to both existing and developing data systems. This OBIS federation will be governed by an international Steering Committee and an OBIS Program Office will provide management, communications, and networking support for members. OBIS will make geographicallyreferenced information on distribution and abundance of species readily available, provide linkages to genomic databases of DNA type sequences, indicate location of reference specimens, and facilitate use of analytical, mapping, and modeling tools. The rescue and placement in an accessible digital format of existing databases will receive high priority. To facilitate the development of new hypotheses about factors controlling the distribution and abundance of marine life, the OBIS network will: 1) provide access to global, synoptic environmental data from satellites, 2) data from in situ measuring systems, and 3) relevant geo-referenced products from physical and chemical models. Global coverage of data on variables such as primary production, export production, phytoplankton taxa distinguishable from satellite images, temporal patterns of temperature variation, river discharge, currents, and near-bottom kinetic energy are examples of datasets potentially important in studying biogeographic patterns of distribution. The presentation and analysis of co-occurrence data on distribution and abundance of species in the context of environmental variables will help revitalize the study of marine taxonomy, systematics, and biogeography. In May 2000, the U.S. Government Agencies in the National Ocean Partnership Program together with the Alfred P. Sloan Foundation announced a set of 8 grants to initiate OBIS (http://core.cast.msstate.edu/censprl.html). The grants involve researchers in more than 60 institutions in 15 countries, and address overall system architecture as well as a range of taxonomic groups including fishes, bivalves, gastropods, cephalopods, calanoid copepods, euphausiids, anemones, corals, and several gelatinous zooplankton taxa. The National Ocean Partnership Program is expected to consider additional OBIS-related proposals this year. The organization of OBIS will require continued meetings with representatives of international database organizations such as GBIF, FAO, UNESCOqOC Register of Marine Organisms, Species 2000, ETI, Gaia 21, GenBank, Zoological Record, FishBase, CephBase, FISHNET, European Register of Marine Organisms (ERMS), and representatives of national biodiversity or oceanographic data centers. 
At the end of 2000, the OBIS Steering Committee and plans for the Program Office will be formed and, following recommendations from the Workshop, its first tasks will be: • To develop a metadata catalog of sources of primary data • To agree on minimum metadata standards and protocols • To support mechanisms to assure accuracy of taxonomic identifications and common agreement on systematic relationships • To agree on a strategy for data discovery, setting of priorities for data rescue, and assessment of data quality, and an international network of taxonomic authorities to resolve nomenclature issues • To agree on the issue of intellectual property rights Consonant with widespread interest in understanding the biocomplexity of natural systems, the Census of Marine Life is planning research programs to consider oceanic biology from genes to ecosystems at time and space scales requiring use of new technologies. Simultaneously a rapidly-developing Global Ocean Observing System is already providing a continuous stream of physical observations to challenge existing means for data access, analysis, and presentation. In the Internet Age, new strategies for deploying this information and knowledge must evolve. The papers in this issue demonstrate that ideas and willingness abound to create the Ocean Biogeographic Information System. The biggest challenge is for the various stakeholders worldwide in marine biological data to learn quickly to work together.
2018-05-08T17:53:42.583Z
2000-01-01T00:00:00.000
{ "year": 2000, "sha1": "50cd43646f19077599ba6e85ea45ab2ccef5dae6", "oa_license": "CCBY", "oa_url": "https://tos.org/oceanography/assets/docs/13-3_grassle.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "50cd43646f19077599ba6e85ea45ab2ccef5dae6", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Computer Science" ], "extfieldsofstudy": [ "Geography" ] }
253237591
pes2o/s2orc
v3-fos-license
Disorder effects in the Kitaev-Heisenberg model We study the interplay of disorder and Heisenberg interactions in Kitaev model on honeycomb lattice. The effect of disorder on the transition between Kitaev spin liquid and magnetic ordered states as well as the stability of magnetic ordering is investigated. Using Lanczos exact diagonalization we discuss the consequences of two types of disorder: (i) random-coupling disorder and (ii) singular-coupling disorder. They exhibit qualitatively similar effects in the pure Kitaev-Heisenberg model without long-range interactions. The range of spin liquid phases is reduced and the transition to magnetic ordered phases becomes more crossover-like. Furthermore, the long-range zigzag and stripy orderings in the clean system are replaced by their three domains with different ordering direction. Especially in the crossover range the coexistence of magnetically ordered and Kitaev spin-liquid domains is possible. With increasing the disorder strength the area of domains becomes smaller and the system goes into a spin-glass state. However, the disorder effect is different in magnetically ordered phases caused by long-range interactions. The stability of such magnetic ordering is diminished by singular-coupling disorder, and accordingly, the range of spin-liquid regime is extended. This mechanism may be relevant to materials like $\alpha$-RuCl$_3$ and H$_3$LiIr$_2$O$_6$ where the zigzag ground state is stabilized by weak long-range interactions. We also find that the flux gap closes at a critical disorder strength and vortices appears in the flux arrangement. Interestingly, the vortices tend to form kinds of commensurate ordering. I. INTRODUCTION Quantum spin liquids (QSLs) are entangled states of matter, where quantum fluctuations prevent the formation of magnetic order down to lowest temperatures. They have attracted substantial attention in the developments of topological quantum computing due to its remarkable properties, including long range entanglement, topological ground state and fractionalized excitations [1, 2]. One of the most promising theoretical models to realise QSLs, was proposed by Kitaev in his seminal paper [3]. The model is based on a two dimensional honeycomb lattice with bond anisotropic interactions among quantum spin-1/2 particles. Remarkably, such Kitaev-type interactions are predicted to be realized in transition metal oxides with spin-orbit couplings [4]. Since then, layered honeycomb materials A 2 IrO 3 (A = Na, Li) [5,6] and α−RuCl 3 [7] have been thoroughly investigated as the candidate Kitaev materials [8,9]. Even though thermodynamic and dynamical properties of these materials match the theoretical prediction of Kitaev model, all the candidate materials are known to magnetically order at low temperatures [10,11]. Magnetic orders in such materials are manifested by the interactions beyond the Kitaev interactions [12][13][14]. In addition, presence of disorder like vacancies, impurities, lattice distortions, and stacking faults, inevitable in real materials, are often responsible for the instability of QSL [15][16][17]. Recently, a new class of intercalated compounds have been synthesized which are even more susceptible to disorder H 3 LiIr 2 O 6 , Cu 2 IrO 3 , and Ag 3 LiIr 2 O 6 [18,19]. 
Role of disorder in Kitaev materials has gained special interest since the experimental observation of low temperature divergence in specific heat of the proximate Kitaev material, H 3 LiIr 2 O 6 , is understood as the consequence of bond disorder in the Kitaev model [20,21]. Theoretically, it is known that bond randomness in the Kitaev model affects the low-energy dynamical spin-correlators [22]. Thermodynamic properties like specific heat and susceptibility, spin transport, and spin dynamics at low energy excitations of the Kitaev model with bond randomness and site dilution have been also studied at finite but low temperatures [16,[23][24][25][26][27]. Moreover, the effect of bond randomness on the zero-temperature topological properties of the Kitaev model [28,29] as well as the spin excitations in a diluted Kitaev system containing spin vacancies and bond randomness [30] have been discussed. While a lot of studies have been devoted to the thermodynamics (relevant for experiments) of disordered Kitaev models, very little is understood about the ground state properties of the Kitaev model in the presence of disorder. Furthermore, presence of any type of disorder such as impurity or stacking faults must affect the interactions beyond the Kitaev limit. However, even less is known about the disorder effects, for example, in the presence of Heisenberg interaction. Once the Heisenberg interaction is added in the Kitaev model (referred as Kitaev-Heisenberg model), variety of magnetically ordered phases are stabilized as a consequence of order by disorder mechanism. Understanding the effect of disorder in the Kitaev-Heisenberg model is crucial from the point of view of interplay between frustration and disorder. Motivated by this, we employ numerical diagonalization of 24-site cluster on a honeycomb lattice, to study the effects of disorder in the ground state of Kitaev-Heisenberg model. This serves as a putative minimal model for several QSL candidates. We emphasise on two different types of bond-disorder introduced in both Kitaev and Heisenberg interactions. In the pristine Kitaev-Heisenberg model, Kitaev QSL state survives the small perturbations like Heisenberg interaction [31]. Here, we show that the QSL remains robust against weak disorder and Heisenberg interaction, the stability of QSL region is narrowed with increasing disorder strength. The long-range zigzag and stripy orderings are replaced by a mixture of their three domains with different ordering direction. The area of domains is reduced with increasing disorder strength and the system goes into a spin-glass state. Moreover, we show that a flux state can be induced by bond-disorder in the ground state. The transition from flux-free state to random flux state in the presence of bond disorder has already been predicted just in the Kitaev limit [22,23]. Interestingly, this random-flux state persists even for a small admixing of nearest-neighbor Heisenberg interaction. We also find that disorder in presence of further neighbor interaction can indeed result in a QSL or disordered state. This is relevant to experimental observations of H 3 LiIr 2 O 6 [32]. The paper is organized as follows: In Sec. II, we begin by introducing the type of disorders considered in the Kitaev-Heisenberg model and the details of numerical methods applied. In Sec. III, we present the results of disorder effects on ground state phases in K-H parameter space. 
Moreover, we also discuss the effect of further neighbour interactions on the ground state of the disordered KH model. Section IV is devoted to the summary.

A. Kitaev-Heisenberg Model

The Kitaev honeycomb model is described by anisotropic Ising-type interactions among spin-1/2 degrees of freedom on a honeycomb lattice, where the interaction in spin space (γ = x, y, z) is dictated by the bond direction. These bond-directional Ising-type interactions are called Kitaev interactions. The model is exactly solvable, with the ground state being a QSL and its elementary excitations described by Majorana fermions coupled to a Z2 gauge field. In general, systems exhibiting the Kitaev interactions are referred to as Kitaev materials. However, as revealed by extensive research, real Kitaev materials possess interactions beyond the Kitaev one, such as diagonal exchange couplings, i.e., Heisenberg terms, as well as off-diagonal couplings, i.e., so-called Γ terms [12,13]. Nevertheless, a QSL ground state is maintained even in the presence of weak Heisenberg terms [31]. In order to derive a common understanding of the disorder effects in Kitaev systems, we here restrict ourselves to models with the Heisenberg and Kitaev terms, i.e., the Kitaev-Heisenberg model.

Clean case

The Hamiltonian of the Kitaev-Heisenberg model reads

H = Σ_⟨i,j⟩_γ [ K^γ_ij S^γ_i S^γ_j + J_ij S_i · S_j ],   (1)

where K^γ_ij is the Kitaev interaction of nearest-neighbor spins on the three different bonds γ = x, y, z and J_ij is the Heisenberg interaction between nearest neighbours. We assume all the Kitaev couplings are isotropic, K^x_ij = K^y_ij = K^z_ij = K, and the nearest-neighbor Heisenberg coupling is constant, J_ij = J, in the clean case. For simplicity, we parameterize as follows: J = A cos φ and K = A sin φ, where A = √(J² + K²) is used as the energy unit. The ground-state phase diagram is known to host four magnetically ordered phases (Néel, zigzag, ferromagnetic, and stripy, the last occupying 1.58π ≲ φ ≲ 1.89π) and two QSL phases as a function of φ [31]. In this paper, we consider two types of bond disorder, explained below, and mainly focus on the effect of disorder around the two Kitaev QSL phases, i.e., around φ = π/2, 3π/2.

Random-coupling disorder

The first type of disorder is a random distribution of bond strengths (called "random-coupling disorder"), which commonly appears due to structural disorder in crystals. This is achieved by changing the parameter A → (A + δA_ij) for all nearest-neighbor bonds, such that J → (A + δA_ij) cos φ and K → (A + δA_ij) sin φ. Namely, the Kitaev and Heisenberg couplings of the 36 nearest-neighbor bonds in our 24-site periodic cluster (see Fig. 1(a)) take different values from each other. Nevertheless, note that the ratio between K and J remains K/J = tan φ for each bond. The random deviation from the original bond, δA_ij, is drawn from a Gaussian distribution N(0, σ²), where σ is the standard deviation. The disorder strength is controlled by ∆ = 2σ. The cases ∆ = 0.125, 0.25, and 0.5 are studied in this paper. For this kind of bond disorder, a box probability distribution has also frequently been used in previous studies, but only in the Kitaev limit, K/J → ∞. Although the effect of disorder with a box distribution may be somewhat more drastic than with a Gaussian distribution, the results are expected to be qualitatively similar.
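As an illustration of the random-coupling prescription just described, the following minimal Python sketch generates one disorder realization of the 36 bond couplings; it keeps K/J = tan φ on every bond while letting the overall strength fluctuate with δA ~ N(0, σ²) and Δ = 2σ. This is illustrative only, not the authors' code.

```python
# One random-coupling disorder realization for the 36 nearest-neighbor bonds.
import numpy as np

def random_coupling_realization(phi, delta, n_bonds=36, seed=None):
    rng = np.random.default_rng(seed)
    sigma = delta / 2.0
    dA = rng.normal(0.0, sigma, size=n_bonds)   # per-bond deviation; A = 1 sets the energy unit
    A_bond = 1.0 + dA
    J = A_bond * np.cos(phi)                     # Heisenberg coupling on each bond
    K = A_bond * np.sin(phi)                     # Kitaev coupling on each bond
    return J, K

J, K = random_coupling_realization(phi=0.5 * np.pi, delta=0.5, seed=1)
print("first three bonds (J, K):", list(zip(J[:3].round(3), K[:3].round(3))))
```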
Singular-coupling disorder

The second type of disorder is a random mixture of two kinds of bond strengths (called "singular-coupling disorder"). This is achieved by replacing a part of the original bonds with r-times stronger bonds, where the fraction of replaced bonds relative to the total number of bonds is denoted by n. In other words, a randomly selected fraction n of the original bonds is changed as J → rJ and K → rK. We call n the disorder density and r the disorder strength. Since we use a finite-size periodic cluster as described below, it is reasonable to keep equal numbers of replaced x, y, and z bonds. In fact, we vary the number of replaced bonds in steps of 3. The case of n = 0 and/or r = 1 corresponds to the clean limit. The case of n = 1 also corresponds to the clean limit, but the energy scales as rA instead of A in the original clean limit n = 0. In this paper, the effect of singular-coupling disorder is studied over the whole range from n = 0 to 1 for r = 2, 4, and 10. Interestingly, the appearance of this type of disorder with r = 4 was predicted as a consequence of hydrogen cation vacancies in H3LiIr2O6 [33].

B. Method

We employ the Lanczos exact diagonalization method for the model described by Eq. (1) in the presence of disorder. A 24-site cluster with periodic boundary conditions is used [Fig. 1(a)]. For each parameter set, we perform the simulations for 30-50 random realizations and take an average over them where needed. This setup is sufficient to investigate the instability of the Kitaev QSL around the Kitaev limit because long-range magnetic orderings are not considered, as discussed below. We calculate the ground-state energy E0, spin-spin correlation functions ⟨S_i · S_j⟩, the static spin structure factor S(Q), and the expectation value of the flux operator ⟨W⟩. Peak positions in the second derivative of the ground-state energy with respect to particular parameters provide valuable information on the position of phase boundaries. Also, the width of the peaks may reflect the 'sharpness' of phase transitions; roughly speaking, a sharp (broad) peak indicates a first-order (second-order or continuous) transition. To identify the magnetic structure of each phase, we calculate the static spin structure factor

S(Q) = (1/N) Σ_{i,j} ⟨S_i · S_j⟩ e^{iQ·(r_i − r_j)},   (2)

where N is the number of sites in the periodic cluster (N = 24) and r_i is the position of site i. For ordered phases the magnetic structure can be determined from the reciprocal-space Bragg-peak positions. The Brillouin zone and the magnetic Bragg peaks for the zigzag and stripy states are shown in Fig. 1(b). Note that in our calculations for the clean limit no spatial or spin symmetries are broken, so that the structure factor exhibits all Bragg peaks for the three ordering directions. However, as shown below, the translation and rotation symmetries can be broken when disorder is introduced. In the QSL phase, S(Q) is nearly structureless and the intensity is relatively small for all Q. Furthermore, to see how close the system is to the Kitaev limit, we calculate the expectation value of the flux operator. The flux operator is defined on a hexagonal plaquette by

W_p = σ^x_1 σ^y_2 σ^z_3 σ^x_4 σ^y_5 σ^z_6,   (3)

where σ^γ_i = 2S^γ_i and a possible numbering of the sites is indicated in Fig. 1(a). In the Kitaev limit, the flux operator is a conserved quantity, commuting with the Hamiltonian and taking values exactly ±1. Away from the Kitaev QSL regime, this value drops to zero. Therefore, we can also use this quantity as a measure of the overlap between an observed QSL and the ideal Kitaev QSL.

A. Random-coupling disorder

We begin by discussing the consequences of the random-coupling disorder.
Since we are particularly interested in the possible emergence of QSL by disorder, we focus on the parameter regions around the Kitaev points φ = 0.5π, 1.5π; where any magnetic order can be avoided due to the fully dominant Kitaev interactions over the Heisenberg ones. Fig. 2(a,b) show second derivative of the ground-state energy with respect to φ, i.e., −∂ 2 E 0 /∂φ 2 , as a function of φ for several values of the disorder strength ∆ around the AFM (φ = 0.5π) and FM (φ = 1.5π) Kitaev points. Similarly, expectation values of the flux operator (3), i.e., W p , are plotted in Fig. 2(c,d). In the presence of disorder, the averaged values of −∂ 2 E 0 /∂φ 2 and W p are taken over 50 random realisations. Around AFM Kitaev point, assuming that a peak position in −∂ 2 E 0 /∂φ 2 vs. φ indicates a critical point, a transition from Néel to Kitaev QSL phases occurs at φ = 0.486π(≡ φ Neel−QSL ) and a transition from Kitaev QSL to zigzag phases at φ = 0.514π(≡ φ QSL−zigzag ) in the clean limit (∆ = 0). We find that the phase transitions are hardly affected by the disorder strength, ∆ = 0.25. However, with further increasing ∆ to 0.5, the deviation from the clean limit becomes prominent. The critical points are changed to φ Neel−QSL = 0.488π and φ QSL−zigzag = 0.511π. Thus, the range of Kitaev QSL has shrunk by the disorder. Even more interestingly, both peaks in −∂ 2 E 0 /∂φ 2 at ∆ = 0.5 are lower and broader compared to the sharp peaks indicating first-order transitions in the clean limit (∆ = 0). This means that the disorder makes the phase transitions more crossover-like. It is also confirmed by a rather blunt behavior of W p around the critical points. Accordingly, the system may be characterized by a short-range Néel phase in a range with peak width around φ = φ Neel−QSL and by a mixture of small zigzag domains and QSL regions in a range with peak width around φ = φ QSL−zigzag . The possibility of the formation of zigzag domains is discussed below. On the other hand, Néel domain walls would be not created because the random-coupling disorder does not lift the degeneracy of spin-rotation symmetry unlike that of spatial-rotation symmetry. The effect of disorder around FM Kitaev point seems to be even more sensitive than around AFM Kitaev point. A transition from FM to Kitaev QSL phases occurs at φ = 1.40π(≡ φ FM−QSL ) and a transition from Kitaev QSL to stripy phases at φ = 1.58π(≡ φ QSL−stripy ) in the clean limit; whereas, they are φ FM−QSL = 1.42π and φ QSL−stripy = 1.56π at ∆ = 0.5. Although the shifts of critical φ values may look tiny, we can find them somewhat considerable by thinking in terms of K/J as follows: The condition to achieve a QSL is modified from K/J > 3.1 or K/J < −3.9 in the clean limit to K/J > 3.9 or K/J < −5.2 at ∆ = 0.5. Nevertheless, we can say that the effect of the random-coupling disorder on magnetic order is relatively weak, which is consistent with the previous study [34]. Then, as in the case around AFM Kitaev limit, peaks in −∂ 2 E 0 /∂φ 2 are broadened. Thus, the system may be characterized by a short-range FM phase in a range with peak width around φ = φ FM−QSL and by a mixture of small stripy domains and QSL regions in a range with peak width around φ = φ QSL−stripy . The possibility of the formation of stripy domains is discussed below. It is also interesting that | W | keeps nearly 1 in the both AFM and FM Kitaev QSL phases despite the presence of sizeable disorder. 
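Before moving on to the ordered phases, the sketch below illustrates the peak analysis of −∂²E₀/∂φ² used above to locate the boundaries in Fig. 2. The energy curve here is a synthetic toy (the minimum of two crossing linear "phase" energies, which produces a concave kink), not the paper's Lanczos data; in practice E₀(φ) would be the computed ground-state energy on a grid of φ values.

```python
# Toy peak analysis of -d^2 E0 / d phi^2 for locating phase boundaries.
import numpy as np

phi = np.linspace(0.40 * np.pi, 0.60 * np.pi, 401)

# Synthetic ground-state energy: minimum of two linear "phase" energies that
# cross at phi = 0.5*pi, giving a concave kink (a first-order-like boundary).
E_phase_A = -1.00 - 0.50 * (phi - 0.5 * np.pi)
E_phase_B = -1.00 + 0.30 * (phi - 0.5 * np.pi)
E0 = np.minimum(E_phase_A, E_phase_B)

signal = -np.gradient(np.gradient(E0, phi), phi)     # -d^2E0/dphi^2 by finite differences

# Crude local-maximum search: a sharp, narrow peak suggests a first-order-like
# transition, while a low, broad peak indicates crossover-like behavior.
peaks = [i for i in range(1, len(phi) - 1)
         if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
for i in peaks:
    print(f"candidate boundary near phi = {phi[i] / np.pi:.3f} pi")
```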
In the Kitaev limits φ = ±0.5π (K = ±1, J = 0), the flux operator is still commutative with the Hamiltonian even for finite ∆. Thus, as confirmed in Fig. 2(c,d), the ground state is still characterized by | W | = 1 (∀ hexagonal plaquette). Up to ∆ = 0.5, we found W to be the unity for all studied disorder samples. Furthermore, as naively expected, we find that the nearest-neighbour spin-spin correlations are finite and longer-range ones are zero as is the case with the clean limit. Nevertheless, the value of the nearestneighbour correlations can deviate from the clean limit value | S i · S j NN | = 0.1323 depending on the coupling value, namely, | S i · S j NN | > 0.1323 for the bond with δA > 0 and | S i · S j NN | < 0.1323 for the bond with δA < 0. Let us next consider how the ordered phases are affected by the random-coupling disorder. In the clean limit, the zigzag and stripy ordered states are associated with spatial rotation symmetry breaking. Yet, the rotation symmetry is not broken in our calculations with 24-site periodic cluster and three states with different ordering directions are energetically degenerate. Fig. 3(a,e), show that the Bragg peaks for all three ordering directions in zigzag and stripy order appear with the same intensity in the static structure factor. One of the degenerate three states is then selected as a ground state when its global magnetic ordering is constructed. However, the rotation symmetry can be explicitly broken once the randomcoupling disorder is introduced. Examples of static spin structure factors with broken rotation symmetry for the zigzag state (φ = 0.52π) and stripy state (φ = 1.65π) are plotted in Fig. 3(b-d) and (f-h), respectively. In a disordered system, one of the symmetry-broken state with lowest energy tends to be locally realized depending on the disorder distribution. This means in the ordered phase, the three domains with different ordering directions can coexist as a result of disorder. In Fig. 3(i,j) their energy distributions of 50 disorder samples are shown. For both of φ = 0.52π and 1.65π, the variance of energy distribution is increased with increasing ∆. The local ordering structure with lower energy is strongly stabilized and that with higher energy is easily excluded. Thus, the averaged area of zigzag (or stripy) domains becomes smaller and the system is in more like a spin-glass state for larger ∆. It is because the short-range correlation is dominated only by the local disorder structure when the disorder is strong. In addition, a coexistence of four kinds of domains could be possible in the crossover regions between QSL and magnetic ordered phases, namely, one QSL domain and three ordering domains as mentioned above [35]. Next, we turn to the case of singular-coupling disorder. For this case we can study the effect of disorder as functions of the disorder strength r and the disorder density n. As in the case of random-coupling disorder, the impact of singular-coupling disorder on the two phase transitions around the FM Kitaev point φ = 1.5π is investigated by estimating −∂ 2 E 0 /∂φ 2 with φ. Here, it is convenient to renormalize the energy unit as A → A/(nr + (1 − n)). In Fig. 4(a-c), we plot −∂ 2 E 0 /∂φ 2 as a function of φ with fixed r = 2, 4, and 10. The disorder density n is varied from 0 to 1 at intervals of 3/36. Results shown in Fig. 4 are averaged values of −∂ 2 E 0 /∂φ 2 and W p over 30 disorder samples. Interestingly, −∂ 2 E 0 /∂φ 2 is nearly symmetric about n = 18/36 for any r. 
This means that the qualitative properties are determined only by the ratio of distinct bonds. At n = 0 and n = 1, where the system is clean, the range of FM QSL is indicated by two sharp peaks at φ = 1.40π and 1.58π. The transitions to FM (φ 1.40π) and stripy (φ 1.58π) phases are of the first order. For weak disorder strength (r = 2), we find that the QSL range is slightly reduced in the whole density region 0 < n < 1 and it is narrowest at n = 0.5. This reduction seems to increase continuously with increasing r. As a result, for strong disorder strength (r = 10), the QSL range almost vanishes and a nearly direct transition between FM and stripy phases may occur around n = 0.5. The remaining QSL seems to be still of the Kitaev-type because of W p ≈ 1. Another effect of singular-coupling disorder is that the peak of −∂ 2 E 0 /∂φ 2 indicating a transition is significantly broadened even for weak disorder. This implies that peak position for single disorder sample differs from each other. Thus, transition in the disordered system becomes continuous or even a crossover-like one. As in the case of random-coupling disor-der, the system may be characterised by a short-range FM or a short-range stripy ordering in the crossover region. This might be interpreted as a QSL in the broad sense of the term. It is rather surprising that the broadening of peak in −∂ 2 E 0 /∂φ 2 seems to be almost independent of r. Furthermore, in order to see how the ordered state is affected by the singular-coupling disorder, we study the static structure factor in the stripy phase. Three representative structure factors at φ = 1.64π, n = 0.5 for r = 10 are shown in Fig. 5(a-c), indicating a stripy state with broken rotation symmetry. Since most of samples for any r basically exhibit one of the three structures, the system consists of the three stripy domains with different ordering directions. As discussed in Sec. III A, the variation of energy distribution is related to the averaged area of stripy domains. In Fig. 5(e-g) the energy distributions of 30 disorder samples in the stripy phase (φ = 1.64π) are shown. As expected, the variance of energy distribution increases with increasing r, and however, it clearly decreases with increasing n. As a result, their energy distributions are completely different between small and large n regions, although the second derivative of energy is symmetric about n = 0.5. This may be explained by considering random-singlet formations. At lower n, the energy can be easily lowered because the stronger r bonds are mostly isolated and spin-singlets are formed on them. The direction of stripy ordering is closely related to the distribution of randomsinglets. Therefore, the area of stripy domains tends to be small at smaller n [ Fig. 5(e)], and it would be relatively large at larger n [ Fig. 5(g)]. For example, at n = 30/36 the vari-ance seems to be very small and the energies of disorder samples are almost degenerate for any n. This situation is close to the clean limit. Thus, the system consists of large stripy domains if they exist. On the other hand, at n = 6/36 and r = 10 a wide distribution of the variance may indicate the appearance of small stripy domains. This is because a local stripy configuration can be realised against a global ordering if it significantly lowers the total energy. In sum, a spin-glass state due to a mixture of tiny stripy domains may be achieved at large r and small n. 
Let us look at the crossover region between stripy and Kitaev QSL phases in a little more detail. We take the case of r = 10 and n = 18/36 for example. A structureless structure factor as in Fig. 5(d) appears other than the three symmetry broken ordered ones. Also, the variation of energy distribution is relatively larger than that in the deep stripy phase [ Fig. 5(h)]. Thus, a coexistence of four kinds of domains may be possible in the crossover region between Kitaev QSL and stripy phases, namely, one QSL domain and three stripy ordering domains. At φ = 1.55π, 10 of 30 samples exhibit such a QSL-like structure factor, and the remaining 20 samples exhibit either of stripy ones. Assuming that W = 0 for the stripy samples and W = 1 for the QSL-like samples, we may roughly expect W ∼ 0.33. This is comparable to the actual value W ∼ 0.4 for φ = 1.55π at r = 10 and n = 18/36. Appearance of flux state by disorder We find an intriguing evolution of flux states with the disorder strength r and disorder density n in the Kitaev limit φ = 3π/2. The introduction of disorder may mimic thermal fluctuations in real materials. Since the relations W 2 p = 1, [H, W p ] = 0, and [W p , W p ] = δ pp are fulfilled at φ = 3π/2 even in the presence of disorder, Therefore, the eigenstate for the system (1) can be still characterized by a set of W p for all plaquettes, where W p has a value of 1 or -1. For each disorder sample we see the W p values for 12 plaquettes in our 24-site periodic cluster. For the clean case (r = 1), it is known that the ground state is given by W p = 1 (∀p), the so-called flux-free state [ Fig. 6(a)]. Then, the excited flux states can be created by flipping W p from 1 to −1 for even number of plaquettes. The flux gap has been estimated to be E gap = 0.066K as a lowestenergy excitation when the values of W p for two neighboring plaquettes are flipped [22]. The flux density is defined by n F = N F /N p , where N F is the numbers of plaquettes exhibiting W p = −1 and N p is the total number of plaquettes. In particular, the flux density n F = 1/6 is achieved by a 24-site periodic cluster containing 2 plaquettes with W p = −1 and 10 plaquettes with W p = 1, i.e., N F = 2 and N p = 12. Similarly, n F = 1/3 corresponds to N F = 4 and N p = 12 in our calculations. We refer to n F = 1/6 and n F = 1/3 as 2-flux and 4-flux state, respectively. Typical examples of flux configurations for (excited) 2-flux and 4-flux states are illustrated in Fig. 6(b-f). The appearance of fluxes in the ground state implies the closing of flux gap by disorder. It is consistent with the previous study [29]. Flux states appears as a ground state on introduction of singular-coupling disorder. The number of samples exhibiting flux states out of 30 samples is shown as a function of disorder density n for several disorder strengths in Fig. 6(gi). For a weak disorder strength (r = 2) only ∼ 10 − 20% of disorder samples exhibit flux states. Interestingly, all of them are the 4-flux states shown in Fig. 6(e). Since the effect of disorder is most highlighted around n = 0.5, we find the probability of observing a flux state is also maximum around n = 0.5. With further increasing r up to 4 the probability of finding flux states increases. The 4-flux configuration still accounts for a large fraction of observed flux states, although a different pattern for 4-flux configurations such as in Fig. 6(f) as well as the 2-flux configuration like in Fig. 6(b,c) appear as minority. 
In other words, ordered 4-flux state would tend to be energetically more favourable than the other flux states. This may be related to the "Majorana insulating" state [36]. Looking at all of the flux configurations, the fluxes seem to keep distance from each other. Thus, a kind of repulsion may exist between fluxes, and the energetically-favoured orderedflux state could be a consequence of the repulsive interaction (also see the Appendix VI A, VI B). This also implies no possibility of a phase-separation-like flux configuration induced by disorder. Note that, however, the ordered-flux state is not globally stabilized in our disordered system because various flux patterns locally exist depending on the disorder realization, for example, as shown in in Fig. 6(b-f). For a very strong disorder strength (r = 10) the 4-flux and 2-flux states appear with similar probability, and the fluxfree state still exists. Furthermore, various flux configurations are mixed. For example, a periodic configurations like in Fig. 6(f) are also contained. These results indicate a trend to the random-flux states [22,37]. This can be naively understood if we can assume that the effect of disorder is essentially the same as thermal fluctuations. Our focus here has been only the ground state, we show that random flux state appears as a consequence of disorder itself, which indicates that the flux gap closes at some disorder strength, r for a particular disorder configuration. Eventually, the presence of disorder in a random flux sector can explain the abundance of low energy states in H 3 LiIr 2 O 6 compound [32]. Thus, the low-lying flux and spin excitations [38] of the Kitaev-Heisenberg model should be studied in future. C. Further neighbor interactions So far we have considered the effect of disorder on quantum phase transitions between Kitaev QSL and magnetic ordered states in the Kitaev-Heisenberg model [Eq. (1)], where the ordered states are caused by the competition of nearestneighbor exchange coupling J and Kitaev term K. However, in real materials like Na 2 IrO 3 and α-RuCl 3 , the experimentally observed zigzag ordering may be possibly stabilised by long-range neighbor couplings J 2 and J 3 [39]. Therefore, it would be informative to study the effect of disorder on zigzag state stabilised by J 2 and J 3 . Thus, the Hamiltonian is modified as where J 2 and J 3 are uniform Heisenberg exchange interactions on 2nd and 3rd nearest neighbours, respectively. We fix K = −6, J = 1 as a typical parameter set for the Kitaev materials. We omit the Γ terms from our model [40] and set J 2 = J 3 (≡ J 2,3 ) for simplicity. We now focus on the case of n = 1/3, i.e., 12 bonds out of original 36 bonds are randomly replaced by singular couplings with disorder strength r = 4. This value of r was actually predicted as a singularcoupling disorder due to the missing of hydrogen cations in H 3 LiIr 2 O 6 [33]. Let us first see the second derivative of ground-state energy as a function of J 2,3 , i.e., −∂ 2 E 0 /∂J 2 2,3 . The results for the clean and disordered cases are compared in Fig. 7(a). In the clean case, two phase boundaries are signalled by the sharp peaks. We thus find three phases depending on the value of J 2,3 : FM (J 2, 3 −0.34), Kitaev-type QSL (−0.34 J 2,3 0.29), and zigzag (J 2,3 0.29). At J 2,3 = 0, although the ground state is a QSL, the system is in the vicinity of the stripy phase (the phase boundary is K = −4, J = 1), For the disordered case, the averaged values over 30 disorder samples are plotted. 
Compared to the clean case, the heights of both peaks are much reduced and the widths are significantly broadened. Furthermore, it is interesting that the phase between FM and zigzag seems to be extended by the disorder, since the left (right) peak position is shifted from J_2,3 ≈ −0.34 (J_2,3 ≈ 0.29) to J_2,3 ≈ −0.5 (J_2,3 ≈ 0.5). To take a more detailed look, let us examine the result for each disorder sample. We find that most disorder samples exhibit three or four peaks in −∂²E_0/∂J_2,3², i.e., one or two peaks appear in addition to those of the clean case. Typical examples of −∂²E_0/∂J_2,3² for two disorder samples are plotted in Fig. 7(b). For both samples there exist five 'regions' separated by four peaks. Fig. 7(c) shows the static spin structure factor at representative J_2,3 values for each region: J_2,3 = −0.75, −0.5, 0, 0.5, and 0.75. Incidentally, in the clean case, the system is in a FM phase for J_2,3 = −0.75 and −0.5; in a QSL phase at J_2,3 = 0; and in a zigzag phase at J_2,3 = 0.5 and 0.75. The two sets of S(Q) for the disordered samples show similar qualitative features at the same J_2,3 value. At J_2,3 = −0.75 we find a sharp peak at Q = 0 indicating a FM state, as in the clean case. At J_2,3 = −0.5 a stripy structure appears in addition to the FM peak. Therefore, the state may be characterized as a quasi-FM order with short-range stripy fluctuations. Note that the position of the stripy Bragg peaks reflects one of the three possible ordering directions in each disorder sample. At J_2,3 = 0 it is interesting that a QSL-like structure, i.e., a structureless distribution of the intensity over the first Brillouin zone, and stripy Bragg peaks coexist. Since the system is in proximity to the stripy phase in the clean case, short-range stripy correlations could be enhanced by a pinning effect of the disorder. Nevertheless, as seen in Fig. 7(d), the value of W_p is still significantly larger than zero for most of the disorder samples, so that the system is broadly characterized as a Kitaev QSL. At J_2,3 = 0.5 the intensity is widely distributed and there are no sharp Bragg peaks indicating a specific ordering. Since the value of W is nearly zero for most of the disorder samples, the system is in a non-Kitaev QSL state. In this QSL phase, unlike the very rapid decay of the spin-spin correlation functions in the Kitaev QSL phase, the correlations survive at longer distances (see Appendix VI C for more details). At J_2,3 = 0.75 the structure factor for each disorder sample exhibits a symmetry-broken zigzag state with one of the three possible ordering directions. Thus, the system consists of three zigzag domains. As discussed in Sec. III B, the size of the domains depends on the disorder density and strength. Also, it is worth mentioning that the QSL range is somewhat expanded by disorder. This may be because the disorder-induced stripy fluctuations push out the phase boundaries of the neighboring ordered phases. In Fig. 8(a) the evolution of the static structure factors with J_2,3 is compared for the clean and disordered cases. For the disordered case, we show the structure factor averaged over 30 disorder samples. Although the overall explanation has already been given above for two disorder samples, the effect of disorder can be seen more clearly in the averaged S(Q). Of particular interest is that the sharp Bragg-peak features of the clean system are totally collapsed in the disordered system at J_2,3 = −0.4, 0.3, and 0.5.
This means that even short-range magnetic orderings are destroyed by disorder. Accordingly, the J_2,3 range of the QSL phase is extended from −0.34 ≲ J_2,3 ≲ 0.29 to −0.45 ≲ J_2,3 ≲ 0.55 by the disorder, as shown in the prospective ground-state phase diagrams [Fig. 8(b)]. This is in strong contrast to the results for the nearest-neighbor Kitaev-Heisenberg model, where the QSL phase is always shrunk by any type of disorder. As mentioned above, this value of r = 4 corresponds to the singular-coupling disorder in H 3 LiIr 2 O 6 [33]; however, the actual values of J_2, J_3, and n are unknown. Nevertheless, since the qualitative features would be unaffected by changing n, and assuming typical values of J_2 and J_3 of several tens of percent of J, we can speculate that either a spin glass or a QSL is likely as the low-temperature state of H 3 LiIr 2 O 6 . Moreover, we find random-singlet formation between second- and third-neighbor spins in our system. This is qualitatively consistent with heat-capacity measurements under magnetic field [32]. For more quantitative considerations the Γ terms should also be taken into account, because they are expected to be sizeable. IV. CONCLUSION Using the Lanczos exact diagonalization technique we have studied the Kitaev-Heisenberg model on a honeycomb lattice in the presence of bond disorder. In particular, the effects of disorder on the transition between the Kitaev spin liquid and magnetically ordered states, as well as on the stability of the magnetic ordering, have been investigated. We have considered two types of disorder: random-coupling disorder and singular-coupling disorder. The former is achieved by a random distribution of bond strengths, and the latter by a random mixture of two kinds of bond strengths. They exhibit qualitatively similar effects in the pure Kitaev-Heisenberg model without long-range interactions. The range of the Kitaev spin-liquid phase is reduced and the transition to the magnetically ordered phases becomes more crossover-like. Accordingly, although the regions of the zigzag and stripy phases are enlarged by disorder, their long-range orderings in the clean system are replaced by a mixture of three domains with different ordering directions. With increasing disorder strength the area of the domains becomes smaller and the system goes into a spin-glass state. Particularly in the crossover regime, the coexistence of magnetic domains and Kitaev spin-liquid domains may be possible. On the other hand, the qualitative trend is different when magnetically ordered phases caused by long-range interactions are subjected to disorder. The stability of such magnetic ordering is diminished by singular-coupling disorder, and as a result, the range of the spin liquid is extended. This mechanism may be relevant to materials like α-RuCl 3 and H 3 LiIr 2 O 6 , where the zigzag ground state is stabilized by small long-range interactions. We have also found that flux states are induced by a certain amount of disorder. Interestingly, the fluxes tend to form a kind of commensurate ordering at intermediate disorder strength. This may be related to the "Majorana insulating" state and may indicate the existence of a repulsive interaction between fluxes. With further increasing disorder strength the system goes into the random-flux states. At any disorder strength we have seen no signature of a phase-separation-like flux configuration. V. ACKNOWLEDGMENTS We acknowledge Joji Nasu, Rajyavardhan Ray, and Pranay Patil for fruitful discussions. We thank Ulrike Nitzsche for technical assistance.
This project is funded by the IFW Excellence Programme 2020 and the German Research Foundation (DFG) via project A05 of the Collaborative Research Centre. It is known that the ground state of the Kitaev model on a honeycomb lattice is characterized by W_p = 1 (∀p), the so-called flux-free state. The flux operator is defined as W_p = 2^6 S_1^x S_2^y S_3^z S_4^x S_5^y S_6^z (see also the main text). An excited flux state can be created by flipping W_p from 1 to −1 on an even number of plaquettes. We here consider the Kitaev model with K = −2. To study the flux configuration in the excited flux state, we apply a flux field h_f W_p to a single plaquette on the finite-size clusters shown in Fig. 10(a). This is a kind of pinning-field method, of the sort usually used to study magnetically ordered states. With increasing h_f, the value of W_p for the field-applied plaquette is flipped from 1 to −1 at some h_f; we call this the critical flux field. In Fig. 9 the critical flux field is plotted as a function of 1/N. Above the critical flux field, the clusters 24a and 8b contain four fluxes, and the other clusters contain only two fluxes. For the clusters with two fluxes, the positions of the fluxes may not be uniquely fixed, due to the symmetry of the cluster. For example, consider the cluster 24b in Fig. 10(a). A red plaquette corresponds to W_p = −1 where the flux field is applied; the remaining W_p = −1 is put into either of the two thin red plaquettes. Interestingly, the flux configuration for 24a and 18a exhibits a staggered order. (Note that the cluster denoted by 18a is not perfectly consistent with a staggered order, because three fluxes would be needed to complete it. Nevertheless, the possible positions of the fluxes are those of the staggered order [see Fig. 10(a)].) As seen in Fig. 9, the critical flux fields for 24a and 18a are lower than those for the other clusters with the same system size. In particular, the critical flux field for 24a is appreciably lower than those for 24b and 24c. This may mean that the staggered ordered state is more stable than the other flux states. This is consistent with the fact that the staggered ordered state has significantly lower energy than the random-flux state at the flux density n_F = 1/3 [36]. The clusters having a relatively large critical flux field, i.e., 12b, 20b, and 24c, might be related to a random-flux state. Furthermore, some clusters having a non-flux-free ground state are shown in Fig. 10(b) as additional information. For the clusters 12b, 16b, 16c, and 20c, the flux-free state and the full-flux state are (nearly) degenerate in the ground state. For the cluster 14b, the ground state contains five plaquettes with W_p = 1 and two with W_p = −1. For the cluster 8a, the ground state is the full-flux state. These results might be useful for considering the finite-size effects in the Majorana basis in future studies. B. Repulsive interaction between fluxes If a flux-ordered state such as the staggered order is more stable than the random-flux state, there must be a kind of repulsive interaction between fluxes. We now estimate the effective value of the repulsive interaction by applying a flux field h_f W_p to two plaquettes on the clusters exhibiting excited flux states with two non-neighboring fluxes. Such clusters are 24b, 24c, and 18a in Fig. 10(a). First, we obtain two critical flux fields: (i) h_f,c(NN) with the flux field applied to two neighboring plaquettes and (ii) h_f,c(NNN) with the flux field applied to two next-nearest-neighbor plaquettes. Since there is a repulsion between the fluxes, these two critical fields differ (Table I).
Thus, we may estimate the effective value of the repulsive interaction V_eff from the difference between h_f,c(NN) and h_f,c(NNN). The obtained values of V_eff are very close for the three clusters, i.e., V_eff ≈ 0.007 in units of |2K|, although a finite-size scaling analysis would be required to further confirm this value. C. Spin-spin correlations in the non-Kitaev QSL phase As shown in Fig. 8, the region −0.45 ≲ J_2,3 ≲ 0.55 for the disordered system is characterized as a QSL based on the structureless static structure factor. Part of the QSL phase, i.e., 0.35 ≲ J_2,3 ≲ 0.55, is recognized as a non-Kitaev one due to the reduced W_p value. Here, we look at the spin-spin correlation functions ⟨S_i · S_j⟩ to further support this discrimination between Kitaev and non-Kitaev QSLs. In a pure Kitaev model, only the NN correlations are finite and longer-range ones are zero; this is faithfully reproduced by the 24-site calculations. Even away from the pure Kitaev limit, the spin-spin correlation decays rapidly with distance if the system is in the Kitaev QSL state. In Fig. 11 the absolute values of the spin-spin correlation functions for nearest-neighbor, 2nd-neighbor, 3rd-neighbor, and 4th-neighbor bonds are plotted as a function of J_2,3 for three typical disorder samples. Since the system is disordered, the correlations are averaged over all possible bonds. In the Kitaev QSL region, we can see a rapid decay of |⟨S_i · S_j⟩|. In the non-Kitaev QSL region, by contrast, we find an enhancement of the 4th-neighbor correlation, which becomes larger than that for the 3rd-neighbor bond. This is clearly in contrast to the Kitaev QSL behavior. Interestingly, this reversal of the 3rd-neighbor and 4th-neighbor correlations in the non-Kitaev QSL region is confirmed in all of our 30 disorder samples.
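The bond averaging described above can be written down compactly. The sketch below (hypothetical data and naming, not the authors' code) averages |⟨S_i · S_j⟩| over all bonds belonging to the same neighbor shell, which is the quantity plotted as a function of J_2,3 in Fig. 11.

```python
from collections import defaultdict

# Hypothetical input: measured correlations keyed by site pair, plus each pair's neighbor shell
# (1 = nearest neighbor, 2 = 2nd neighbor, ...). In practice both come from the 24-site cluster.
correlations = {(0, 1): -0.52, (0, 2): 0.04, (0, 3): -0.08, (0, 4): 0.06,
                (1, 2): 0.03, (1, 5): -0.49}
shell_of_pair = {(0, 1): 1, (0, 2): 2, (0, 3): 3, (0, 4): 4, (1, 2): 2, (1, 5): 1}

# Average |<S_i . S_j>| over all bonds in each shell (disorder breaks bond equivalence,
# so every bond of a given shell enters the average).
sums, counts = defaultdict(float), defaultdict(int)
for pair, value in correlations.items():
    shell = shell_of_pair[pair]
    sums[shell] += abs(value)
    counts[shell] += 1

shell_average = {shell: sums[shell] / counts[shell] for shell in sorted(sums)}
print(shell_average)   # e.g. {1: 0.505, 2: 0.035, 3: 0.08, 4: 0.06}
```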
2022-11-01T01:16:03.436Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "6fa59c5bcdb308200a155c52955affd88b633d91", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6fa59c5bcdb308200a155c52955affd88b633d91", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
253130951
pes2o/s2orc
v3-fos-license
Decolonising global health in Africa: research agendas in public health, law, and human rights Abstract Background In recent years the Groningen Centre for Health Law (‘GCHL’ - formerly the Global Health Law Groningen Research Centre), Netherlands, has held annual summer schools on global health, law, and human rights. Responding to calls to decolonise global health (Fofana, 2021), in February 2022 GCHL convened an online academic colloquium to explore the issues in Africa. Panellists and discussants comprised leading African academics and advocates for public health, law, and human rights. Objectives 1. Identify priority current and emerging issues in global health, law, and human rights in the African region with, where possible, reference to the climate crisis. 2. Explore opportunities for identifying academic institutions, networks, and researchers working on these issues across Africa. 3. Identify opportunities to support collaboration between institutions, networks, researchers, and other actors to address the issues identified across the region. Results Top public health issues identified for further research included: public health law frameworks in Africa; One Health and climate change; inequality in the distribution of the determinants of health and disease; international trade and public health; the right to benefit from scientific progress (e.g. in accessing vaccines for COVID-19); gender-based violence; public health and agri-food systems; noncommunicable diseases; healthy diets; poverty; mental health; social protection; and plastic pollution. The first meeting of the network on health, law and human rights in Africa was held in May 2022. The second academic colloquium was held in July 2022, co-hosted with Moi University and the University of Nairobi, Kenya. Conclusions Public health and legal academics in Africa are ready to engage systematically with European partners to address key health-related law and human rights issues of global interest. Research agendas should reflect African priorities, and collaboration should be led by African institutions. Key messages • Capacity must be built to understand the links between public health, law and human rights in Africa. • Collaboration with European institutions to build capacity in public health, law and human rights is welcome, however priorities should be identified by - and responses led by - African academics. Introduction: The use of vapes increased during the pandemic because of the changes it brought about. Confinement conditions have now ended, so it is important to identify the factors associated with continued use. Objective: To determine the factors associated with vape consumption after COVID-19 confinement in young adults. Methods: A cross-sectional, prospective study was carried out between January and April 2022, including men and women resident in Veracruz aged between 18 and 35 years, and excluding participants in addiction treatment or with a diagnosed lung disease. A survey was administered through Google Forms to identify the factors associated with vape consumption after COVID-19 confinement, including depression, anxiety and stress evaluated with the DASS-21 instrument (Cronbach's alpha between 0.79 and 0.87). For data analysis, SPSS v22 software was used, with the chi-squared (χ²) test, odds ratios (OR) with 95% confidence intervals (95% CI), and the Mann-Whitney U test; statistical significance was set at p < 0.05. Results: 514 participants were included, with a prevalence of vape use of 28.5%.
Physical activity, cigarette consumption by a family member, and levels of anxiety, depression and stress showed p > 0.05 for vape consumption, while other factors (OR; 95% CI) such as being female (0.6; 0.4-0.9), identifying vape advertising on Facebook (0.35; 0.19-0.65), having a family member who vapes (2.4; 1.5-3.6), consuming cigarettes (4.2; 2.7-6.4) and identifying vape advertising on Instagram (1.5; 1.0-2.3) had p < 0.05. Conclusions: Post-pandemic vape use is not affected by anxiety, stress or depression; instead, other factors favour its use, such as the environment in which the young adult lives: a personal history of tobacco use, a family member who smokes, and exposure to advertising on Instagram, a social network that often presents aspirational images to users, in contrast to what is shown on Facebook. Key messages: Vape consumption continues to be a post-pandemic public health problem, so it is necessary to reinforce educational measures on the subject and on the post-COVID complications of its use. It is necessary to regulate vape advertising on social networks, as has been done with tobacco in different countries, since it can become a risk factor by showing aspirational images to users. There is great urgency for action to achieve the Sustainable Development Goals, especially in fragile settings, which face acute and complex challenges.
Yet, the public sector may be limited in its capacity to address these appropriately, with devastating effects on the health of people and the environment now and in the future. Background: Colorectal cancer (CRC) is one of the main causes of mortality and morbidity worldwide. To date, the relationship between regional deprivation and CRC incidence or mortality has not been studied in the population of Cyprus. The aim of this study was to analyse the geographical variation of CRC incidence and mortality and its possible association with socio-economic inequalities in Cyprus for the period between 2000 and 2015. Methods: A small-area ecological study in Cyprus, with census tracts as units of spatial analysis, for the period between 2000 and 2015. Results: There are geographical areas with 15% higher SIR and SMR, with most of those areas located on the east coast of the island. Higher M/I ratio values were found in the rural, remote, and less dense areas of the island, while lower rates were observed in the metropolitan areas. An inverted U-shape pattern in CRC incidence and mortality was observed, with higher rates in the areas classified in the second quartile of the socio-economic deprivation index and lower rates in rural, remote, and less dense areas. A different pattern emerged in the M/I ratio.
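The vape abstract above reports its associations as odds ratios with 95% confidence intervals. As a worked illustration of that calculation only, the sketch below computes an OR and a Wald-type confidence interval from a 2x2 table; the counts are invented for the example and are not the study's data.

```python
import math

# Hypothetical 2x2 table:
# rows = exposure (family member who vapes: yes/no), columns = outcome (vapes: yes/no).
a, b = 60, 90     # exposed:   vapes, does not vape
c, d = 86, 278    # unexposed: vapes, does not vape

odds_ratio = (a * d) / (b * c)

# Wald-type 95% confidence interval on the log-odds scale.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```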
2022-10-27T15:02:36.507Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "d8793d391cd61e4a2709a174b3c5f661be450b4f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/eurpub/ckac131.100", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "393ea34245fabcf4128e515e6cce37ce81e1f63e", "s2fieldsofstudy": [ "Political Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14528034
pes2o/s2orc
v3-fos-license
Increased Pain Intensity Is Associated with Greater Verbal Communication Difficulty and Increased Production of Speech and Co-Speech Gestures Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain. Introduction Pain is an experience familiar to most people, with survey studies estimating the prevalence of pain within the general population to be between 30% and 72% [1][2][3]. While pain is usually self-limiting and does not require treatment, there are some instances (e.g. when pain is severe or persistent) in which help and support may be sought from healthcare professionals or close friends and family, and pain is a frequent feature of medical consultations [4,5]. Due to the private and subjective nature of pain, it is necessary for sufferers to communicate their experience if others are to be aware of its presence and nature. Health care professionals are encouraged to elicit information about various dimensions of pain, including intensity, location, type and sensation, onset and duration, previous treatments, associated symptoms and impact on activities [6][7][8]. That communication about these (and other) dimensions of pain is effective is particularly important within healthcare settings to enable appropriate clinical responses such as diagnosis, treatment and support to occur. Pain communication can be conceptualised as a three stage process in which the internal pain experience (A) is encoded into a pain message (B), from which the recipient can draw inferences (C) about the pain [9][10][11][12][13]. In producing a pain message (B), speakers often utilise numerous modalities to communicate their experience, including speech, nonverbal pain behaviours, and co-speech gestures. Through speech it is possible to provide detailed descriptions of pain, including sensation, intensity and duration of pain, where the pain is located, and the emotional and functional impact of pain. 
Indeed, it is because of this ability to convey a wide range of information that speech is considered to be the 'gold-standard' for pain communication [8,9,14]. Nonverbal pain behaviours such as grimacing, position shifts, restricted movement, rubbing the painful area, and sighing are also produced during pain descriptions [12,[15][16][17], and can serve to signal the presence and intensity of pain. While these pain behaviours are produced with varying degrees of automaticity and intentionality [12] and can be categorised according to whether they serve a protective function (e.g. quickly withdrawing and rubbing the finger after trapping it in door) or a communicative function (e.g. saying 'ow' or grimacing) [16,17], they provide a visible manifestation (B) of the pain experience (A) from which the observer can draw inferences (C) and are thus considered a form of 'pain communication'. Finally, recent research has revealed that co-speech gestures are also produced during pain communication [18][19][20]. Co-speech gestures are defined as the communicative movements of the hands, arms and other body parts that are spontaneously produced alongside speech [18][19][20][21][22]. These movements can be broadly categorised into representational and non-representational gestures [23], with the former conveying semantic information related to the content of speech (e.g. producing a circular gesture while saying ''it was a large table''), and the latter serving pragmatic functions such as managing turn-taking, marking the delivery of information or structuring the discourse [24][25][26][27]. In the domain of pain communication, the focus has been on representational gestures as these have the capacity to convey information about the pain experience. These representational co-speech gestures differ from nonverbal pain behaviours (such as facial expressions and posture shifts) and other movements (such as playing with the hair or fiddling with a pen) in that they are tightly connected to the speech system, and frequently contribute propositional information relating to the content of speech [27,28]. Within the context of pain communication, these co-speech gestures often convey detailed visual information about various aspects of the pain message, including its location, size and sensation, and frequently contribute additional information about pain that is not contained in the accompanying speech [18][19][20]. For example, a speaker may say ''it just comes on really suddenly'', while producing a gesture in which the hands are rapidly clenched and unclenched, indicating a cramping sensation that is not alluded to in speech. Taken together, the above suggests that in order to make inferences (C) about pain, healthcare professionals should attend to the information conveyed across the different modalities (speech, nonverbal pain behaviours and co-speech gestures). However, pain communication is a complex process and the multimodal pain message (B) may be influenced by a variety of factors, including the nature of the pain experience, the individual experiencing the pain, and the social and cultural context [9,12,13]. For example, a number of studies have demonstrated increased levels of verbal and nonverbal pain communication amongst those who catastrophize about pain [16,[29][30][31][32], while others have shown that depressed individuals engage in more facial displays of pain than those without depression [33,34]. 
Pain communication can also be supressed or exaggerated depending on factors such as cultural norms, interests of the sufferer or reinforcement and conditioning [9,12,35,36]. For example, Larochette and colleagues [36] found that children reported having both pretended to have pain and hidden their pain, with the most common reasons for pretending to have pain being to get attention, miss school, as a joke or to get their sibling in trouble, while reasons for hiding pain were to avoid embarrassment, to avoid worrying their parents or to be allowed to keep playing. Concerning social factors, studies indicate that people who are alone display more facial expressions of pain than those who are with a stranger [37,38] but are more expressive in the presence of solicitous others [39] or those who may underestimate their pain [40]. Characteristics of the pain experience may also influence the communication of pain and the association between pain intensity and communication is one that has received particular attention. For example, a meta-analysis of 29 studies found a positive association between self-reported pain intensity and observed nonverbal pain behaviour [15], and chronic orofacial pain patients have been found to select more words on a pain questionnaire when pain is intense [41]. This is an interesting finding as it suggests that increased displays of nonverbal pain behaviours may serve as an additional cue to pain intensity that can be used alongside self-reported judgements of pain intensity provided verbally or by means of pain assessment tools. However, research has not considered the influence of pain intensity on spontaneous verbal communication or the production of co-speech gestures. Given that speech and co-speech gestures convey detailed information about pain, their production may increase when pain is intense due to a desire to communicate the experience as effectively as possible in order to receive help from healthcare providers or concerned others. This paper reports a first attempt to consider the impact of pain intensity on communication, with a focus on both the production of speech and co-speech gestures. We hypothesised that people would enhance their communication (i.e., produce more speech and co-speech gestures) following a high-intensity than a lowintensity pain procedure. Due to the myriad of factors that impact upon pain communication, this study will use an acute, experimentally-induced pain experience (pressure pain to the fingernail bed), allowing for the manipulation of pain intensity, while keeping constant other factors, such as type, cause and duration of pain. The use of experimentally elicited pain also allows for a repeated-measures design, with all participants experiencing both high and low intensity pain, reducing interindividual differences (such as catastrophizing, previous experiences of pain and pain communication) that might impact on pain communication across the two conditions. Experimentally elicited pain is well accepted within research settings and has been used in a number of studies considering factors that influence pain communication [31,42,43]. To further ensure that any differences between the high and low intensity conditions can be attributed to the intensity of the pain, we also asked participants to judge how difficult they found verbally communicating about the high and low intensity pain. 
It is reasonable to suspect that participants may produce less speech (and accompanying co-speech gestures, due to the tight integration of the two modalities) if verbal communication is difficult. Considering communication difficulty will therefore ease the interpretation of our findings. Participants Twenty-six undergraduate psychology students (21 female; 24 right-handed, Mean age = 19 years) took part in return for course credit. All were native English speakers and none had suffered any known language impairment or taken part in a similar study. Design A within-groups design was employed in which participants underwent two experimentally-elicited pain procedures, one high- and one low-intensity, with order of exposure counterbalanced. Participants took part in a semi-structured interview about the pain immediately following each pain elicitation. Materials A pneumatically controlled pressure pain apparatus (Dancer Design, UK), with a plastic probe lowered by means of a control dial, was used to elicit pain. The apparatus was fitted with an emergency pressure release button. Three pain questionnaires were also employed: 1) an 11-point numerical rating scale (NRS) for pain intensity (0 = 'no pain', 10 = 'worst pain') [44,45], 2) an 11-point NRS for pain unpleasantness (0 = 'not unpleasant', 10 = 'very unpleasant') and 3) a Communication Difficulty Questionnaire (CDQ). The CDQ required participants to rate how difficult (from 1 = Easy to 5 = Very Difficult) they found it to verbally express information about seven aspects of the pain experience (location, intensity, sensation, size, duration, cause, and effects) during the semi-structured interview. The ratings across the seven items are summed to provide a total 'difficulty' score ranging between 7 and 35, with a higher score indicating increased difficulty in verbal communication. The CDQ was designed for the purpose of the present study and the seven aspects of pain were empirically derived on the basis of the information contained in speech and co-speech gestures during pain descriptions in a previous study [18]. Internal consistency of the CDQ was assessed using Cronbach's alpha, yielding a value of α = .75, and a bivariate correlation to assess the test-retest reliability revealed a significant correlation between scores across the two testing phases (high intensity and low intensity; r(26) = .46, p = .019). Although the test-retest reliability is lower than the usual acceptable values (i.e. above .70), it should be noted that this has been calculated on the basis of slightly different administrations of the questionnaire, such that for one administration the questionnaire was completed with reference to difficulty expressing information about high intensity pain, while the other concerned the expression of low intensity pain. Procedure Following instructions about the operation of the pain apparatus, participants aligned the fingernail of the middle finger of their non-dominant hand underneath the probe. Participants then turned the dial to increase the pressure until it reached the specified level on the eleven-point NRS for intensity. For the low intensity condition participants increased the pressure until the pain reached a level that they would rate as a '3' on the NRS, and for the high intensity they increased the pressure until it reached a level they would rate as a '7' on this scale.
The intensity levels of '3' and '7' was chosen following piloting to establish the points at which participants judge the feeling of pressure to become painful rather than simply uncomfortable ('3') and 'painful' without being unbearable ('7'). Participants kept the pressure at the specified level for thirty seconds before releasing it. The mean level of pressure applied in the low intensity condition was 0.43 bar (SD = 0.04; Range = 0.40-0.50), and 0.53 bar (SD = 0.08; Range = 0.40-0.75) in the high intensity condition ('bar' is a metric measurement unit for pressure and one bar is equivalent to 100,000 Pascal, ten newtons per cm 2 , or 2.25 pound-forces per cm 2 ). Participants were discouraged from describing the pain experience during the pressure application. Following each pain elicitation, participants took part in a semi-structured interview about the pain. The topic guide for the interviews was based on pain assessment within clinical settings [6,46,47] and previous research [18][19][20] and began by asking the participant to describe their pain in as much detail as possible. Follow-up questions were used to elicit additional information about pain location, sensation, intensity, duration, impact and comparisons with other pain experiences. At the end of each interview participants were also asked to complete the three questionnaires (NRS for intensity and unpleasantness, and CDQ). During the interview, the participant and researcher sat opposite each other across a low table at a comfortable conversational distance. The entire procedure was video-recorded split-screen using wall-mounted cameras to give frontal views of the participant and researcher. Ethical Considerations Ethical approval was obtained from The University of Manchester School of Psychological Sciences Research Ethics Committee (Ethics code: 154/07P). All participants provided written informed consent to take part and for the session to be video-recorded. Participants were made aware at the beginning of the study that course credit was not linked to completing the experiment and that they could withdraw at any time and would still receive the study credits. Analysis of video data The analysis focused on participants' responses to the first part of the interview (i.e. where they were asked to describe the pain in as much detail as possible). Speech transcription. The selected portions of the interviews were transcribed verbatim and the total number of words calculated. All speech was included in the word count, including filled pauses ('er', 'um') and aborted speech. Speech transcription was completed by SR. Co-speech gesture identification. All co-speech gestures produced during the selected portions of the interviews were identified. A co-speech gesture was defined as any movement of the hands, arms, or other body part that occurs alongside speech and is related to that speech [27]. Unrelated movements such as self-or object-adaptors (e.g. playing with the hair or a pen) [48] were not classed as co-speech gestures. Co-speech gesture identification for the entire dataset was carried out by AJW, and to ensure reliability a second coder blind to the experimental hypotheses (SH) independently identified all co-speech gestures in a subset of seven randomly selected interviews from each condition (27% of the data set). 
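Before the agreement statistics are reported below, the following sketch shows how percentage agreement and Cohen's kappa can be computed from two coders' movement-by-movement judgements. The labels are invented for illustration and are not the study's coding data.

```python
from collections import Counter

# Hypothetical coder judgements for the same set of candidate movements
# (1 = coded as a co-speech gesture, 0 = not a gesture).
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0]

n = len(coder_a)
observed = sum(x == y for x, y in zip(coder_a, coder_b)) / n   # percentage agreement

# Chance agreement from each coder's marginal label frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in (0, 1))

kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.0%}, kappa = {kappa:.2f}")
```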
Percentage agreement between coders as to which movements constituted co-speech gestures (agreements on individual gestures, divided by total agreements plus total disagreements) was 85% for the high intensity condition and 80% for the low intensity condition, indicating a high level of agreement. From these co-speech gestures, we then identified all those belonging to the subset classed as representational co-speech gestures. Representational co-speech gestures [23] contain semantic information, such as pointing to the painful area, or rapidly clenching and unclenching the fingers to indicate a throbbing sensation, and were the main focus of the present study. Identification of representational co-speech gestures for the whole dataset was completed by one coder (AJW), while the same second coder (SH) identified representational co-speech gestures in the seven randomly selected files from each condition, yielding Cohen's kappa values of k = .78 (93% agreement; high intensity) and k = .63 (85% agreement; low intensity), indicating substantial agreement between the coders [49]. Upon completion of the gesture coding, the total number of co-speech gestures overall and the total number of representational co-speech gestures were calculated. Statistical analysis Paired t-tests were performed to compare numbers of words and co-speech gestures and questionnaire scores across the high and low intensity conditions, and an alpha criterion level of .05 (two-tailed) was employed. Paired t-tests were conducted using SPSS Version 20 [50] and post-hoc power analysis was performed using G-Power Version 3.1.3 [51]. Independent-sample t-tests comparing the scores of males and females for the number of words and gestures, and pain intensity, unpleasantness and communication difficulty, revealed no significant differences (all p > .05), and so the data for males and females were analysed together. Results When talking about high intensity pain, participants used significantly more words, as well as more co-speech gestures (both overall and representational co-speech gestures specifically) than when talking about low intensity pain. Participants also reported significantly more difficulty in verbally communicating about pain (CDQ score) in the high intensity pain condition. Finally, questionnaire measures showed that participants perceived pain to be significantly more intense and unpleasant in the high intensity condition (see Table 1). Discussion The present study aimed to investigate whether people produce more speech and co-speech gestures when communicating about high intensity compared with low intensity pain. Questionnaire ratings confirmed that people found the high-intensity pain significantly more intense and unpleasant than the low-intensity pain. In line with our hypotheses, when describing high intensity pain, communication was indeed enhanced in comparison to descriptions of low intensity pain. More specifically, people produced both significantly more speech and significantly more co-speech gestures than when describing low intensity pain. These findings show that within an experimental setting, the intensity of acute pain influences the amount of verbal and gestural communication produced. This extends previous research by indicating that increased pain intensity leads to increases in spontaneous verbal communication [41].
In addition, it reveals that, within the domain of visible bodily communication, it is not only the use of nonverbal pain behaviours that increases when pain is more intense [15], but also the use of co-speech gestures. In contrast to pain behaviours, these gestures are rich in semantic content and add important information about a sufferer's pain experience [18][19][20], suggesting that when pain is more intense, people draw on multimodal communication resources to provide richer information about their pain. The results revealed that participants also rated the high intensity pain to be more difficult to communicate verbally. Despite this, the production of both speech and co-speech gestures increased when pain was more intense. This may suggest that people are motivated to communicate effectively when pain is intense, drawing on speech and co-speech gestures together in an active attempt to overcome the difficulties of speech and communicate their pain as effectively as possible. The present findings suggest that alongside the semantic content of patients' verbal and gestural depictions of pain, the production of nonverbal pain behaviours such as facial expression, and the results of any pain assessment tools (such as the NRS or multidimensional pain questionnaires), the frequency of speech and co-speech gestures may permit an additional cue to pain intensity. However, in order to assess the utility of the present findings for this purpose it is necessary to go back to the pain communication model we introduced at the beginning of this article and consider how speech and co-speech gesture frequency (B) might influence observers' ability to judge pain intensity (C). While significant, the increases in verbal and gestural behaviour in the high intensity condition could be considered relatively small and it is not yet known whether observers would be able to detect such differences. In the domain of facial expression, research suggests that observers are able to distinguish between real and exaggerated or supressed expressions of pain [52,53], while research on co-speech gestures indicates that recipients are able to glean the additional information contributed by this modality [54,55]. However, it is not yet known to what extent the frequency of speech or co-speech gestures serve as accurate cues to pain intensity. An important next step is to consider whether and how observers use the extra cue of increased speech and gestures alongside other information about pain intensity (e.g. from the content of speech and the production of nonverbal pain behaviours) to inform their understanding of pain intensity. In doing so it is also necessary to consider the factors that may influence observers' inferences (C) about the pain (A) being communicated (B). For example, factors such as suspected diagnosis, perceptions about patient motivation, and empathy may all influence clinicians' judgements about pain intensity and must be taken into account (see [9] for a more complete discussion of the factors that influence observers' inferences about pain). The present study represents the first attempt to consider the impact of pain intensity on the production of speech and co-speech gestures and there are a number of limitations that must be acknowledged. 
Firstly, an experimental pain procedure was used in order to minimise communication differences that may have arisen between the high and low intensity conditions had participants been describing naturally occurring pain that differed in type, past experiences communicating about that pain with others, duration of pain and/or previous treatment. The use of an experimental pain procedure also allowed us to test the same participants across both the high and low intensity pain conditions, minimizing between-participant differences that might have further influenced communication about pain, such as level of worry or catastrophizing about pain. This allowed us to 'isolate' the impact of pain intensity on communication for the purpose of this first investigation. However, it is recognised that communication about acute experimental pain cannot be taken as an analogue of the clinical situation. In particular, experimental pain differs in important ways from naturally occurring pain in that it is short-lived, controllable, can be stopped at any time and has a clear cause. In contrast, naturally occurring pain may have no known cause, be a symptom of a potentially serious condition, be ongoing, have an impact on functioning and/or be resistant to attempts at relief, and thus be associated with fear, anxiety and other negative emotions as well as differing motivations to communicate. While participants in an experiment might be motivated to be 'good' participants and comply with experimental instructions, the motivation of pain patients and their subsequent communication may differ depending on their personal concerns and goals. For example, communication is likely to differ depending on whether the patient desires reassurance, medication and/or other outcomes such as time off work or a referral to a specialist. Thus, while the present study provides a good starting point for investigations of the impact of pain intensity on verbal and gestural communication of pain, more research is needed to consider the effect of pain intensity on communication in clinical contexts. Such studies will need to measure and control statistically (rather than experimentally) for the influence of variables that might impact on the communication of pain in order to consider the influence of pain intensity. In order for the findings to be generalizable future work should also consider the role of pain intensity in communication about both chronic as well as acute pain experiences. Secondly, we recognise that the current sample was primarily made up of females, and all participants were interviewed by a female researcher. This limits the generalizability of the findings as it is well known that there are gender differences in the perception [56][57][58][59] and communication [60] of pain, while the gender combination of patient and healthcare professional (i.e. male-male, female-female or male-female) within clinical settings can also influence communication, with more information sharing occurring in female-female pairings [61][62][63][64]. Although comparisons of the scores obtained by males and females on each of the measures within the present study (i.e. production of speech and gestures, difficulty communicating about pain and pain intensity and unpleasantness) revealed no gender differences, future work should aim for a more representative sample and should use a variety of gender pairings in order to increase the generalizability of the findings. 
It is also acknowledged that the present findings are only applicable within populations with unimpaired verbal and gestural communication. Populations with language or motor impairments may use speech and gestures differently (if at all) and the assessment of pain intensity is likely to be based primarily on non-verbal pain behaviours (including facial expression and postural movements). Thus the frequency of speech and gestures as a possible indicator of pain intensity is not relevant within such populations. Finally, although the present study indicated that people engage in more verbal and gestural communication when pain is more intense, what has not yet been considered is whether pain intensity influences the content of this communication. Both speech and cospeech gestures convey semantic information about the pain experience, such as where it is located, the sensation, duration, intensity and cause, and the physical or emotional impact of pain [18][19][20]. The present findings suggest that more information about pain is being conveyed in the high intensity condition; however, in order to fully appreciate the influence of pain intensity on the communication of pain, future research should consider whether the information conveyed via the verbal and gestural modalities changes as a function of pain intensity. This is particularly important in light of the finding that despite being associated with increased speech and co-speech gesture production, high intensity pain was perceived to be more difficult to communicate verbally than low intensity pain. In conclusion, this research represents a first step in demonstrating the potential impact of pain intensity on the communication of pain by revealing that people produce more speech and co-speech gestures when communicating about high versus low intensity pain, despite finding intense pain more difficult to communicate. In addition to contributing detailed information about the nature of pain, speech and co-speech gesture production may provide an additional indicator of pain intensity, with possible implications for the treatment and support received by pain sufferers. However, more research is needed to assess the impact of pain intensity on communication in a variety of clinical settings to assess the generalizability of these findings.
2017-06-20T02:03:41.928Z
2014-10-24T00:00:00.000
{ "year": 2014, "sha1": "c09607699adf89a39adc288d0700f06cf3a36f13", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0110779", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c09607699adf89a39adc288d0700f06cf3a36f13", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
92983579
pes2o/s2orc
v3-fos-license
Bulk and surface states carried supercurrent in ballistic Nb-Dirac semimetal Cd3As2 nanowire-Nb junctions A three-dimensional Dirac semimetal has bulk Dirac cones in all three momentum directions and Fermi arc-like surface states, and can be converted into a Weyl semimetal by breaking time-reversal symmetry. However, the highly conductive bulk state usually hides the electronic transport from the surface state in a Dirac semimetal. Here, we demonstrate supercurrent carried by bulk and surface states in Nb-Cd3As2 nanowire-Nb short and long junctions, respectively. For the 1 micrometer long junction, Fabry-Perot-interference-induced oscillations of the critical supercurrent are observed, suggesting ballistic transport of the surface-state-carried supercurrent, in which the bulk states are decoherent while the topologically protected surface states remain coherent. Moreover, a superconducting dome is observed in the long junction, which is attributed to the enhanced dephasing from the interaction between surface and bulk states as the gate voltage is tuned to increase the carrier density. The superconductivity of topological semimetal nanowires is promising for the braiding of Majorana fermions toward topological quantum computing. The Cd3As2 nanowires were placed on Si substrates with a 285 nm thick oxide layer. After standard electron beam lithography, Nb (100 nm)/Pd (5 nm) electrodes were deposited by magnetron sputtering; the thin Pd layer was deposited to prevent the Nb from oxidizing. To improve the interface quality, the nanowire surface was etched with Ar-ion sputtering in situ before Nb deposition. The scanning electron microscopy (SEM) image in Fig. 1b shows the nanowire-Nb junctions. The nanowire diameter is ~100 nm. Three junctions with channel lengths of 100 nm (Junction-A), 118 nm (Junction-B), and 1 μm (Junction-L) were measured. Electrical measurements were carried out in an Oxford Instruments Triton Cryofree dilution refrigerator with a base temperature of 12 mK. The heavily doped Si substrate and the SiO 2 dielectric together served as a back gate to tune the Fermi level of the nanowire. Figure 1c shows the current-voltage (I-V) curves of Junction-A measured at gate voltages V g = 16, 0, and -16 V. A clear zero-resistance state and a dissipative state above the critical switching current I c are observed. The I c depends closely on V g . Figure 1d displays the mapping of the differential resistance dV/dI with V g and source-drain current I sd . The I sd was swept from negative to positive, and the upper boundary of the purple region (the superconducting state) corresponds to I c . The I c decreases monotonically when sweeping V g to lower values. For V g < -25 V, I c saturates to a small but finite value. The normal-state resistance R n is extracted at I sd = 400 nA. The I c R n product is nearly constant as a function of gate voltage, with a mean value of 115 μV (Supplementary Fig. S1), indicating that the Josephson device is in the short-junction limit, where the electrode separation is less than the coherence length of the interlayer. Cd 3 As 2 nanowire devices fabricated using a similar process but with a long channel length (of the order of a micrometre) and with Au contacts usually show the Dirac point near 0 V. However, the Dirac point of the short Nb-Cd 3 As 2 -Nb junction is still not reached at V g = -80 V (Supplementary Fig. S2). This result suggests that the Nb contacts provide heavy electron doping into the Cd 3 As 2 nanowire.
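As an illustration of how the quantities quoted above can be extracted from raw data, the sketch below pulls a switching current I_c and a normal-state resistance R_n out of a single I-V trace and forms the I_c R_n product. The toy trace, threshold, and variable names are placeholders, not the actual measurement pipeline of this work.

```python
import numpy as np

# Toy I-V trace (current in nA, voltage in microvolts); a real trace would come from the measurement setup.
i_bias = np.linspace(0.0, 400.0, 401)                      # nA
r_n_true = 0.9                                             # uV/nA = kOhm (placeholder)
i_c_true = 120.0                                           # nA (placeholder)
voltage = np.where(i_bias < i_c_true, 0.0, (i_bias - i_c_true) * r_n_true)

# Switching current: first bias point where the measured voltage exceeds a small threshold.
threshold = 0.5                                            # uV, above the noise floor
i_c = i_bias[np.argmax(voltage > threshold)]

# Normal-state resistance: slope of the I-V curve at high bias (here above 300 nA).
high_bias = i_bias > 300.0
r_n = np.polyfit(i_bias[high_bias], voltage[high_bias], 1)[0]   # uV/nA = kOhm

print(f"I_c ~ {i_c:.0f} nA, R_n ~ {r_n:.2f} kOhm, I_c*R_n ~ {i_c * r_n:.0f} uV")
```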
The I c demonstrates a monotonic decay with increasing perpendicular magnetic field B (Fig. 2a). Similar behavior has also been observed in InSb nanowire based Josephson junctions [22]. The magnetic field dependence of I c agrees well with a Gaussian fit (Fig. 2b), which is usually observed in narrow Josephson junctions when the junction width (W) is comparable to or smaller than the magnetic length. To reduce the influence of electron doping from the Nb contacts, a long junction (Junction-L) with a channel length of ~1 μm was studied. Figure 3a displays the V g dependence of dV/dI measured at an excitation current I ac = 2 nA. Clearly, the long junction still demonstrates a supercurrent in the V g region of 0 to 20 V. When tuning V g below 0 V, the resistance increases dramatically and the supercurrent disappears. Also, for V g > 20 V, a non-zero resistance state emerges, showing dissipative transport. The gate-switchable supercurrent indicates that there are different conduction channels depending on the Fermi level of the nanowire. The critical current is ~10 nA at V g = 4 V, while it decreases to almost 0 at V g = 20 V (Fig. 3b). The overall evolution of dV/dI with V g and I sd is exhibited in Fig. 3c. The I c has its maximum at around V g = 4 V. In Dirac semimetals, there is a highly conductive bulk state, and the scattering between surface and bulk states produces significant dephasing. As the gate voltage is tuned to increase the electron density, the interaction between bulk and surface states is enhanced. This destroys the coherent transport of the Cooper pairs and the supercurrent. Unambiguous oscillations of I c as a function of gate voltage are observed, and a rich pattern of dV/dI as a function of I sd and V g is seen in the region of -1 V < V g < 0 V. To verify the ballistic nature of the Josephson supercurrent, the temperature dependence of I c at different gate voltages was investigated (Fig. 4). By applying a gate voltage, it is possible to tune the transmission coefficient as well as the chemical potential and the ratio of surface/bulk contributions to transport. For Junction-A, the I c (T) dependence at V g = -80 and -50 V shows a linear behavior (Fig. 4a). At V g = 0 V, the I c (T) dependence shows a concave behavior at high temperatures and gradually saturates at low temperatures. In the short and ballistic limit, the critical supercurrent is given by the expression of Refs. [29,40], which depends on the transmission coefficient of the SN interface, the phase difference between the two superconducting electrodes, the normal resistance, and a prefactor accounting for the reduction of the critical current due to a non-ideal environment. The temperature-dependent superconducting gap is assumed to be ∆(T) ≈ ∆ 0 √(1 − (T/T c )²), where ∆ 0 is the gap as T → 0 and T c is the critical temperature. As the multiple Andreev reflections in our measurements are not distinct enough to determine ∆ 0 , it was treated as a fitting parameter. At V g = 0 V, the fit yields the two dimensionless parameters as 0.37 and 0.40, together with ∆ 0 = 0.36 meV; for V g = -80 V, the corresponding values are 0.23 and 0.32, with ∆ 0 = 0.14 meV. The reduced transmission coefficient at -80 V is consistent with the observed Nb-contact-induced n-type doping near the Cd 3 As 2 /Nb interface. The reduced gap ∆ 0 is consistent with the decrease of the I c R n product. Since the gate also influences the bulk contribution to R n , especially when V g approaches the Dirac point, the overall magnitude of I c also changes with gate voltage. In the long Junction-L, the I c (T) dependence shows that I c at V g = 4 V is always larger than at V g = 10 V (Fig. 4b), which would be consistent with the supercurrent being carried by the surface states.
In the long Junction-L, the I_c(T) dependence shows that I_c at V_g = 4 V is always larger than at V_g = 10 V (Fig. 4b), which would be consistent with the supercurrent being carried by the surface states. For Junction-A, the critical current I_c, the normal-state resistance R_n, and the I_cR_n product as a function of gate voltage are shown in Fig. S1. The normal-state resistance is deduced from the differential resistance at I_sd = 400 nA. The I_cR_n product is nearly constant over part of the gate-voltage range, while elsewhere it fluctuates strongly around a nearly constant background. The maximum of I_cR_n (248 μV) yields a lower limit on the induced gap. The coherence length (1, 2) can thus be estimated to be between 254 and 845 nm, according to the Fermi velocity in the range of (3, 4). Consequently, the transport in Junction-A should be close to the short-junction limit. Supplementary Fig. S1. The critical current, normal-state resistance and their product in Junction-A as a function of gate voltage. According to the transfer curve in Fig. S6, the Dirac point is roughly estimated to lie at the gate voltage where the p-n junction starts to form and the resistance changes most sharply. Therefore, the dependence of I_c on V_g can be converted into a dependence on the gate-induced carrier density, using the band structure of Cd3As2 near the Dirac point. As shown in Fig. S3, I_c then exhibits a periodic oscillation over the converted range; from the oscillation period the corresponding effective cavity length can be estimated. Supplementary Fig. S3. The periodic oscillation of I_c and its estimated period.
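For orientation, the quoted 254-845 nm window is consistent with a ballistic coherence length of the form below; the Fermi-velocity range shown is an assumed illustrative value (the actual numbers did not survive extraction), chosen because, together with an induced gap of order e × 248 μV, it reproduces the quoted estimate:

```latex
\[
\xi = \frac{\hbar v_{F}}{\pi\Delta},
\qquad
\Delta \sim e\,(I_{c}R_{n})_{\max} \approx 0.25\ \mathrm{meV},
\qquad
\xi \approx 254\text{--}845\ \mathrm{nm}\ \ \text{for}\ v_{F}\sim(3\text{--}10)\times 10^{5}\ \mathrm{m\,s^{-1}} .
\]
```

Similarly, for a ballistic Fabry-Perot cavity the critical-current oscillation would be periodic in Fermi wave vector with period Δk = π/L_cavity, which is presumably how the effective cavity length in the supplementary analysis was obtained.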
2019-03-28T19:42:32.628Z
2018-03-27T00:00:00.000
{ "year": 2018, "sha1": "38205e96940e0a1dfdfe4801935927d465366c2d", "oa_license": null, "oa_url": "https://ris.utwente.nl/ws/files/30549044/bulk.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "420b4a1c8a9d6a315ef665d61bad84fddf7c8c4c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
235681581
pes2o/s2orc
v3-fos-license
Associations between Food Pantry Size and Distribution Method and Healthfulness of Foods Received by Clients in Baltimore City Food Pantries This study aimed to evaluate the association of the overall nutritional quality and the weight share of specific types of foods received by food pantry clients with food pantry size and distribution method. Data on healthy food weights using the gross weight share (GWS) of select foods and the validated Food Assortment Score Tool (FAST) were collected from 75 food pantry clients in Baltimore, Maryland. The average FAST score across the study population was 63.0 (SD: 10.4). Overall, no statistically significant differences in average FAST scores by pantry size and distribution method were found. However, among client-choice pantries, clients of small pantries had higher scores (p < 0.05) while among medium pantries, clients of traditional pantries had higher scores (p < 0.01). Subgroup analysis of GWS was stratified by pantry size and distribution methods. Findings suggested multi-level, multi-component interventions combining environmental strategies are needed to enhance the healthfulness of foods received by clients. Our analysis provided data to consider further refinements of pantry interventions and planning of more rigorous research on factors influencing the effectiveness of pantry interventions. Introduction Food insecurity was experienced by 11.1% of United States (U.S.) households in 2018, including households in large urban centers such as Baltimore city [1]. According to Feeding America, about 23% of people residing in Baltimore currently experience food insecurity, including more than 30,000 children [1]. In Baltimore, there are over 220 food pantries working with the Maryland Food Bank [2]. Roughly half of these organizations are operated by volunteers at community-based nonprofit organizations such as churches and homeless shelters, whereas the other half are located within Baltimore City Public Schools. Clients of food pantries tend to be food insecure and may be vulnerable to nutritional deficiencies [3]. The rise in obesity and diet-related diseases among food insecure individuals in the U.S. brings to question the nutritional quality of foods accessible to households that rely on food pantries [4]. Additionally, the most recent updates to international food guides have also confirmed the importance of dietary healthfulness [5][6][7]. Feeding America and many food banks in the United States, including the Maryland Food Bank, present the client choice method as a best practice. The main motivation for promoting client choice is to provide a dignified experience to clients. Pantries were classified by distribution method: traditional (distributing pre-packed bags) or client choice (allowing clients to make selections on the foods they receive). Additionally, client-choice can be implemented using a number of different models: the supermarket model (clients can shop like at a store), table model (food items/groups are displayed on tables), inventory list model, and food weight model (clients can select a set poundage of food), among others. A study using a Freshplace intervention included a client-choice pantry in the north end of Hartford and increased fruits and vegetables by one serving per day compared with the traditional group [8]. However, it is not clear whether client choice influences the overall healthfulness or share of specific groups of client food selections. 
If it is found that a switch to client choice is independent from the availability of healthy foods in client bags, food assistance organizations will need to take additional measures beyond promoting client choice to promote healthy options to their clients. In addition, food pantry size may also matter while considering the healthfulness of foods clients receive. Pantries of different sizes may have different capacities and resources. For example, larger pantries may have more resources like fridges which are good for healthier foods including fresh fruit and vegetables to be stored. Pantry size may also matter in developing environmental strategies to try and nudge healthier client selections because we can see if different intervention strategies like client-choice or not will have different impacts in different-sized food pantries. It is also unclear whether the pantry size differentiates the healthfulness of client food selections. The main objective of organizations like food pantries are to minimize chronic hunger. However, the managers of food pantries we spoke with in our previous studies said they were also interested in stocking and providing healthier foods if they knew clients were interested in using them, but perceived barriers associated with it [9]. Previous studies have assessed the quality of food distributed in food pantries [10][11][12][13]. At the pantry level, the quality and quantity of foods accessible to clients may be determined by the food distribution method used in the pantry [14]. In urban food pantries in the Bronx, NY, USA the nutritional quality of foods available varied by item type (fresh, shelf-stable, refrigerated/frozen), sourcing, distribution method (prefilled bags and client choice), and client position in line. They found that client choice pantries in Bronx, NY, USA had healthier foods available than traditional pantries. However, at client choice pantries, earlier clients selected the less healthy options first. This suggests that a switch to client choice or stocking healthy foods alone might not be sufficient to promote healthy options to clients [10]. However, the generalizability of these limited findings in other US settings remains unclear. Likewise, no previous studies have investigated the association with many food pantry characteristics such as pantry size and food distribution method together with nutritional quality of foods received by clients [13]. Thus, in this study, we evaluated the association of the overall nutritional quality and the weight share of specific types of foods received by food pantry clients with food pantry size and distribution method. We assumed that both food pantry size and distribution method (separately or combined) would affect the healthiness of food clients get. By including sociodemographic factors, we also wanted to see if, in a client choice pantry, clients from certain sociodemographic backgrounds selectively preferred healthier foods. In the pantries distributing pre-packed bags, we wanted to see if pantries serving certain sociodemographics were more likely to try to distribute healthier foods. This information would help us identify clients who typically do not get healthier foods at pantries and target them in our intervention messaging in the future. Study Population This study used baseline data from a feasibility trial to promote healthy foods and beverages in Baltimore City food pantries [15]. 
Data collection was conducted in seven food pantries in Baltimore, Maryland, during September and October of 2018. We identified 102 food pantries using the Maryland Food Bank's database of partnering Baltimore community food pantries located outside of Baltimore City Public Schools. This descriptive formative research study was nested under a larger interventional study that was occurring at the randomly selected pantries that were analyzed. Pantries were stratified into tertiles of size, based on pounds of food distributed in the previous fiscal year: small (65 to 10,000 pounds), medium (10,001 to 24,600 pounds), and large (24,601 or more pounds). Exclusion criteria for food pantries included (1) operating less than once per week, (2) being located in a school, (3) already receiving nutrition education from the Maryland Food Bank or the University of Maryland Extension, and/or (4) having a new manager (<2 months in position). Of the pantries in the Maryland Food Bank's database, 21 food pantries could not be reached, 58 were not eligible, and 9 were not interested; 7 food pantries of differing sizes (determined by weight of food distributed in the previous fiscal year) were randomly selected out of 14 eligible and interested food pantries. Food pantry clients were recruited during food pantry distribution hours, and they were provided with a 10-dollar Walmart card after completing the whole survey. 
Clients participated in an on-site survey and data of 75 participants were collected by trained research assistants (approximately 10 clients/pantry, a convenience sample, in which the first 10 clients approached and who agreed to participate comprised the sample). Eligibility criteria for participants included being at least 18 years old and receiving food from the pantry at the time of data collection. The flowchart of the study is shown in Figure 1. Measurements A food pantry client questionnaire was developed for this study, which had 11 questions including a client bag audit and sociodemographic information, which takes approximately 15 min. The questionnaire was validated and tested in our formative research. The survey was enumerated by research assistants. Client-sourced data were important in the case of our study as (1) we were interested in the specific foods selected by each client (rather than general information on foods in the inventory in the pantry), and (2) food pantries often did not keep a record on which foods each client selected, thus necessitating us to actively source the data ourselves. The overall nutritional quality of the foods received was assessed using the Food Assortment Scoring Tool (FAST) developed by Caspi et al. (2018) [12]. A higher FAST score reflects a greater proportion of healthy foods. The FAST measure was developed with 13 scored categories and 31 sub-categories. The FAST scores were generated by sorting and weighing food in categories, multiplying each category's weight share by a healthfulness parameter, and summing the categories (range 0-100). The FAST tool has been previously validated against the Healthy Eating Index 2010 (HEI-2010) in food pantry setting [12]. Gross weight share (GWS) is the proportion of the client's selection that is allocated towards a food category. To calculate the GWS of certain food groups, the weight of a proportion of select food groups was extracted from the FAST score calculations and separately analyzed. Sociodemographic information collected included age, sex, ethnicity, household information, marital status, employment, food assistance program participation, and medical history. We collected this information because, in Baltimore City, the availability of healthy foods in the food environment is closely associated with sociodemographic factors [1]. Statistical Analysis A descriptive quantitative analysis of pantry clients' sociodemographic information was conducted and stratified by food pantry distribution method and pantry size. Analysis of variance (ANOVA) was used to assess significant differences by group. The final analytic sample included 74 participants who completed all parts of our questionnaires. All analysis was performed in R Version 3.5.3 (https://www.r-project.org/, available on 29 June 2021). Description of Pantry Client Characteristics and FAST Scores Among our sample, there were four traditional pantries and three client-choice pantries. An overview of participant characteristics is displayed in Table 1. The mean age in years of the participants was 56.6 years old. Of the participants, 90% were self-identified as African American. Most clients lived in households of one or two individuals (52.7%) and lived in households without children (58.1%). More than half of clients received SNAP benefits (57.5%), but few received WIC benefits (5.6%). In terms of self-reported medical history, most reported high blood pressure (62.2%), followed by diabetes (25.7%), obesity (13.5%), and cancer (8.1%). 
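Before turning to the associations, the scoring and group comparison described in the Measurements and Statistical Analysis subsections can be sketched as follows. The category names, healthfulness parameters, and score values below are illustrative placeholders, not the published FAST coefficients or the study data, and the paper's own analyses were run in R rather than Python.

```python
import numpy as np
from collections import defaultdict
from scipy import stats

# Illustrative healthfulness parameters (0 = least, 1 = most healthful) for a few categories;
# the published FAST tool defines 13 scored categories and 31 sub-categories.
HEALTHFULNESS = {
    "fresh_fruits_vegetables": 1.0,
    "vegetable_protein": 0.9,
    "processed_fruits_vegetables": 0.6,
    "desserts_snacks": 0.1,
}

def gross_weight_share(bag, category):
    """GWS: proportion of the client's bag weight (pounds) allocated to one food category."""
    total = sum(p for _, p in bag)
    return sum(p for c, p in bag if c == category) / total if total else 0.0

def fast_score(bag):
    """FAST-style score: each category's weight share times its healthfulness parameter, summed and scaled to 0-100."""
    weights = defaultdict(float)
    for category, pounds in bag:
        weights[category] += pounds
    total = sum(weights.values())
    return 100 * sum(w / total * HEALTHFULNESS.get(c, 0.5) for c, w in weights.items()) if total else 0.0

# Hypothetical FAST scores grouped by pantry size, compared with a one-way ANOVA
# as in the Statistical Analysis subsection (values are placeholders, not study data).
small = np.array([68.1, 71.4, 62.3, 66.0, 70.2])
medium = np.array([58.7, 61.2, 55.9, 60.4, 57.3])
large = np.array([63.5, 60.1, 64.8, 59.9, 62.2])
f_stat, p_value = stats.f_oneway(small, medium, large)
print(f"ANOVA across pantry sizes: F = {f_stat:.2f}, p = {p_value:.3f}")
```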
Associations between Food Pantry Characteristics and FAST Scores The average FAST score across the study population was 63.0 (SD: 10.4). There was no difference in FAST scores by food pantry size or by distribution method, overall (Table 1). Clients who received WIC benefits had lower FAST scores compared with clients who did not receive WIC (−12.2, p < 0.05). No other sociodemographic variable was significantly associated with FAST scores for the overall sample (Table 2). Sub-group analysis was limited by the relatively small number of participating pantries. However, in medium-sized food pantries, clients from the client-choice pantry received food bags with lower FAST scores compared with clients from pre-packaged pantries (p < 0.01). In pre-packaged pantries, the healthiness of client bags was not statistically significantly different among different sized pantries. Among client-choice pantries, food selections received by clients from medium pantries had lower FAST scores, followed by large and small pantries (p < 0.05) ( Table 3). Analyses of GWS Stratified by Pantry Size and Distribution Methods When stratified by pantry size, clients of small pantries had higher GWS of fresh fruits and vegetables (p < 0.01), higher GWS of beverages (p < 0.001), vegetable protein (p < 0.01), and mixed meals and side dishes (p < 0.001) compared with clients of medium and large pantries. Medium pantry clients had higher GWS of processed fruits and vegetables (p < 0.01) and higher condiments, baking, and cooking (p < 0.01) compared with clients of small and large pantries. Large pantries' clients had higher GWS of highly processed meat (p < 0.05), desserts and snacks (p < 0.05), dairy (p < 0.001), and meat, poultry, fish, and eggs (p < 0.001) than small and medium pantries' clients. Discussion This is the first study to explore whether food pantry distribution method and size can predict the healthfulness of food obtained by pantry clients in an urban setting. Increasingly, attention is being directed to the client choice model and the need for a healthier food environment for all pantry clients. In client choice pantries, clients are not required to receive items they may already have, may not like, or cannot eat for health or personal reasons, which may decrease food waste and be more efficient for food distribution [8,16]. In a past study, a food pantry intervention that involved transforming a traditional pantry to client choice saw improvements in client food insecurity and fruit and vegetable intake [17]. Overall FAST scores were not significantly different among clients of different sized pantries and between clients of pantries having different distribution methods, suggesting no noteworthy differences in healthiness of foods clients received by pantry size and distribution methods founded in our study. Thus, the client-choice method may not significantly improve the nutritional quality of foods received by clients compared with traditional prepackaged methods in these seven pantries. However, outside of simply nutritional outcomes, such a method may still catalyze other positive outcomes, such as less food waste and better connection with clients, which may also have salient ramifications for nutrition and health outcomes of food pantry clients. Among client choice pantries, clients of small pantries received healthier foods than clients of large pantries, followed by medium pantries. This suggests that the size of clientchoice pantries impacts nutritional quality of foods received by clients. 
Small food pantry clients received the largest proportion of healthy foods. In addition, medium and large food pantries often distribute more foods and serve more clients than small ones. These findings suggest clients who visit larger food pantries may have the greatest potential to benefit from nutrition programs and interventions in pantry settings. Clients of client-choice pantries obtained foods of significantly lower nutritional quality than clients of traditional pantries for medium-sized pantries. This suggests that, although clients are able to make their own food selections in the client choice model, the foods available in the pantry food environment and clients' nutritional knowledge and motivation remain important determinants of the healthfulness of the client bag [13]. From our formative research [9], the pantry managers using the traditional distribution method we spoke with did not purposefully try to push healthier foods to clients in pre-packed bags. The reasons for preferring pre-packed bags were more related to maintaining safety, order, respect in the pantry, and for serving clients as quickly as possible. Additionally, not all client-choice pantries offered true client choice by allowing clients to select whatever they wanted, but some of them had restrictions as to the types of foods that could be selected. It's important to note that not all client-choice pantries offered the same level and flexibility of choice. In terms of the proportional weight of select food groups received by food pantry clients, clients across food pantries of different sizes and distribution methods received a significant portion of both healthy and unhealthy foods. In other words, clients received a variety of both fresh fruit and vegetables, beverages, desserts, and snacks, which suggests food pantries are offering a diverse range of foods. Nonetheless, while healthy food options were indeed received, this study supports past research suggesting there is still room to significantly enhance the proportion of healthy foods for food pantry clients [10]. In our intervention studies, we were aiming to promote lean and low-sodium proteins, fruits and vegetables, and healthy carbohydrates. This study had several limitations. Chief among these was the generalizability of the results due to sampling issues. The sample size was small, participation was low among pantries invited to participate, and the sample selection of pantries and of clients was not random. Nevertheless, our analysis provided data to consider further refinements of pantry interventions and planning of more rigorous research on factors influencing the effectiveness of pantry interventions. This also limited us from doing some regression analyses to examine if certain client characteristics were predictors of the nutrition quality of foods received. For example, if the pantry used a client-choice model, their clients' disease history or health status may have influenced their food selection. The power of the factors studied in our study explaining the differences in healthier foods obtained by pantry clients were limited by this study's low effect modification. These results as preliminary descriptive information require confirmation in a larger study. In particular, the client choice method does not appear to be a panacea for ensuring higher nutrition quality foods are selected, further intervention is needed to promote healthful selection. 
Second, we used FAST to determine the healthiness of clients' food, instead of the Healthy Eating Index (HEI) [18], to better target food pantry settings. Although FAST is correlated with HEI-2010 scores [12], the HEI is commonly used to report findings on other food sources (supermarket and corner store purchases, school meals, etc.) and diets, thus limiting our ability to compare results with studies considering other sources of food for low-income, food-insecure individuals [12]. Third, clients taking more nutritious food home from a pantry does not necessarily imply a healthier diet. For many clients, pantries supply only a portion of their overall food supply. However, for clients who usually visit pantries, those food are a very important part of diets for their daily lives. In addition, the eligibility criteria were more suitable to selecting pantries for our intervention study, which may reduce the generalizability of this sample. Conclusions The nutritional quality of foods that clients receive at food pantries may impact clients' diet and be associated with health outcomes. While average FAST scores across clients from pantries of various sizes and distribution methods did not show statistically significant differences, stratifying FAST scores in subgroup analysis and examining GWS of key food groups suggested that clients from larger food pantries received the largest proportion of unhealthy foods. These findings suggest the need to prioritize larger food pantries in future interventions, and likewise increase clients' nutritional literacy to facilitate healthy food choices. Multi-level, multi-component, and combined interventions including environmental strategies, distribution system changes, and clients' nutritional intention and behavior on food pantries are needed to improve the nutritional quality of food pantries serving food-insecure communities in the US.
2021-06-30T07:57:52.076Z
2021-06-29T00:00:00.000
{ "year": 2021, "sha1": "e5d61417250ee3ec3af0cf4cda47296f4c3d7774", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/13/6979/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5d61417250ee3ec3af0cf4cda47296f4c3d7774", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
72607906
pes2o/s2orc
v3-fos-license
Advanced hepatocellular carcinoma extending to the inferior vena cava and right atrium. S. M. Habib, L. C. W. De Jonge, R. A. Carels, C. Verveer, A. A. M. Zandbergen. CASE REPORT A 65-year-old Surinamese male visited our outpatient clinic due to fatigue and weight loss of five kilograms in the last three months. The patient had been diagnosed with chronic hepatitis B five years earlier, for which he did not receive treatment. Furthermore, he was a moderate alcohol drinker (1 drink/day) and had a 40 pack-year history of smoking. At physical examination, a cachectic man was seen with a blood pressure of 109/67 mmHg and a pulse rate of 84 beats per minute. He had no signs of jaundice. Physical examination of the abdomen revealed distended superficial abdominal veins and an enlarged liver, but no signs of ascites. Laboratory testing revealed a normal hemoglobin level of 9.9 mmol/L, an elevated bilirubin level of 95 μmol/L, an elevated alpha-fetoprotein level of 36 µg/L, and elevated liver enzymes (alanine aminotransferase 91 U/L, gamma-glutamyl transferase 576 U/L, and alkaline phosphatase 245 U/L). Subsequently, a computed tomography (CT) scan was performed, showing a large mass in the right lobe of the liver extending to the caudal lobe with occluded portal vein branches. Importantly, the mass was found to be invasive, compressing the inferior vena cava and extending into the right atrium (Figure 1). No signs of liver cirrhosis were seen on the CT scan. A magnetic resonance imaging (MRI) scan was performed to establish a final diagnosis. The MRI scan confirmed the presence of a 13.5 x 8.4 cm mass in segments 6 and 7 of the liver with a tumor thrombus invading the inferior vena cava (Figure 2) and right atrium. DISCUSSION Hepatocellular carcinoma (HCC) is frequently a fatal malignancy and accounts for the majority of cases of primary liver cancer worldwide. The incidence of HCC is generally low, but it seems to be as high as 11-20 cases per 100,000 male inhabitants in some countries [1]. Significant risk factors for HCC include chronic hepatitis B virus infection and other chronic liver diseases, usually in combination with cirrhosis [1,2]. The most common locations of extrahepatic metastases of HCC include the lung, abdominal lymph nodes, and bones [3]. An initial presentation of HCC with tumor thrombus invasion of the inferior vena cava and right atrium is extremely rare (1-4%) and hazardous. An atrial thrombus can cause right heart failure and pulmonary embolism, and most patients die within the first year after diagnosis [4]. Establishing the diagnosis of HCC is a challenge, but it should preferably be based on non-invasive techniques, such as imaging (CT and/or MRI), in most cases without histological confirmation [2]. 
HCC in our patient was diagnosed based on the presence of the typical radiological criteria including the irregular tumor surface, the early arterial enhancement and the fast washout of tumor areas on the dynamic contrast-enhanced MRI scan. The management of HCC requires a multidisciplinary approach and is closely linked to the stage of disease. Treatment options include radiofrequency ablation, partial liver resection, liver transplantation, systemic medical treatment (e.g., Sorafenib), and transcatheter arterial chemoembolization [2]. Given the limited treatment options and incurability of HCC with an atrial tumor, our patient was discharged from the hospital after one week with supportive care. CONCLUSION In conclusion, this case illustrates an uncommon presentation of advanced hepatocellular carcinoma (stage IIIC), which could be successfully diagnosed noninvasively by radiological imaging.
2019-03-10T13:06:44.962Z
2015-03-20T00:00:00.000
{ "year": 2015, "sha1": "0c8dd6d24d0aec1c334f758723b24ac4def53f5a", "oa_license": "CCBY", "oa_url": "http://www.ijcasereportsandimages.com/archive/2015/004-2015-ijcri/CL-10071-04-2015-habib/ijcri-1007104201570-habib.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f41e8cc0695546a7f8110c20fd4876f61537e973", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
167217264
pes2o/s2orc
v3-fos-license
MAGICal GRB 190114C: Cutoff in the spectrum at sub-GeV energies GRB 190114C is a famous Gamma-Ray Burst (GRB) due to its detection at sub-TeV energies by MAGIC, seen at redshift z = 0.42. This burst is one of the brightest GRB detected by \fermi. We present a detailed analysis of GRB 190114C prompt emission, using the two \fermi detectors: GBM and LAT. The \emph{LAT low energy events} (LLE) data is also considered. A joint GBM-LAT analysis reveals a sub-$GeV$ spectral cutoff. A similar high energy cutoff was reported in GRB 160509A and GRB 100724B earlier, and a handful of other sources. The cutoff can be explained by the intrinsic opacity due to pair production within the emitting region. Such morphology in these GRBs suggests that they belong to one specific class having a similar source of the radiation mechanism. GRB 190114C shows a transition from non-thermal to a quasi-thermal-like spectrum along with radiation due to external shock. From spectrum analysis and $Lorentz$ factor evolution from the trigger time to late emission, considering the fact that sub-TeV photons are detected in MAGIC, we are able to draw an emission mechanism picture, where the prompt emission spectrum is more consistent with spectrum via photospheric dissipation with presence of external shock emission simultaneously. INTRODUCTION Gamma-ray bursts (GRBs) prompt emission spectrum are traditionally modeled by the Band function (Band et al. 1993). However, the deviations from Band function are observed and reported previously, such as the presence of an extra thermal component (Ryde 2005;Page et al. 2011;Guiriec et al. 2011), spectral breaks (Oganesyan et al. 2017a,b), Band function with highenergy cutoff (Ackermann et al. 2013;Tang et al. 2015;Vianello et al. 2017), multiple thermal + non-thermal components (Guiriec et al. 2015b(Guiriec et al. ,a, 2016Basak & Rao 2015). Observational evidences, however, could be affected by selection effects. Two very different models are sometimes barely distinguishable when folded with the response of a detector. Time-resolved spectral analysis, though a very reliable method to understand the emission mechanism, has implementation limits because of poor statistics. A break at low energy is observed in GRBs observed simultaneously in soft and hard X-rays † vikas. chand.physics@gmail.com in Swift . However, bumpy structure in this range is also observed (Basak & Rao 2013;Iyyani et al. 2013). The presence of a thermal component and its effect on the non-thermal spectral emission have also been studied (e.g., Li 2019). Thermal components are considered as signature of a photospheric model. The drawbacks of the empirical models can be avoided by using physical emission models. Burgess et al. (2018) used synchrotron model from a cooling population of electrons and showed most of the GRBs spectrum are consistent with synchrotron cooling. Photospheric models are also used in some studies (Vurm & Beloborodov 2016;Vianello et al. 2017). On the theoretical side, consensus is built over two contending models, photospheric emission (dissipative or non-dissipative) and synchrotron emission in many possible settings (Beloborodov & Mészáros 2017;Pe'er 2008;Burgess et al. 2018). The sub-GeV radiation from the bright burst with good count statistics can give important insights. The question of whether the ∼100 M eV emission has an external shock origin or internal dissipation origin is still under debate (see Tang et al. 2015, and references therein). 
Time-dependent broadband model fits from keV to GeV energies can help us to distinguish these two origins. On the other hand, in cases where both the external and internal dissipation is contributing to the ∼100 M eV emission, model fits with short time slices (e.g., 1 s) can reveal the time evolution of both contributions. With the Fermi space observatory, the broadband spectrum of the GRBs can be studied from a few keV up until hundreds of GeV s in some bright bursts. Some bright bursts have shown bright emission also in Fermi -LAT, thus allowing enough photon statistics even in short time slices. GRB 160625B is one such example where the emission was seen in sub-GeV LAT band. A joint analysis of Fermi detectors shows a cut-off in the spectra in ∼ 100 M eV energy range . Similarly, GRB 160509A is yet another example. A break similar to GRB 160625B exists for this GRB and GRB 100724B (Vianello et al. 2017). From its light curves this emission seen up to sub-GeV band is most likely related to prompt emission. GRB 160509A has shown remarkable evolution of GeV spectrum from prompt to afterglow (Tam et al. 2017). A detailed analysis with LAT emission during the prompt emission can reveal that the contamination of the spectrum by lower energy components can lead to this dramatic evolution. Although GRB 160509A shows two bright LAT pulses, GRB 190114C shows one bright pulse in the LAT. Several spectral studies of GRB 190114C have been performed recently (Wang et al. 2019, Ravasio et al. 2019), but caveats of these works include: (a) incomplete spectral analysis, due to the absence of LAT-LLE data and they have analyzed the GBM and LAT data separately instead of joint analysis; (b) wider time bins are chosen while performing the spectral analysis. There are hints from these studies though that the initial part of the LAT spectrum could be affected by prompt emission spectrum. We have presented the evolution of the net spectral shape of the prompt emission by joint GBM-LAT analysis, by trying several typical empirical spectral models. We summarize the major observations of GRB 190114C in Section 2. We draw a parallel of GRB 190114C with GRB 160509A and also demonstrate the transition of the LAT emission from prompt to afterglow for the later in Section 3. Conclusions and implication of the results are discussed in Section 4. OBSERVATIONS GRB 190114C triggered the Neil Gehrels Swift Observatory -Burst Alert Telescope (BAT) at 20:57:03 UT (T 0, trigger time), on 14 th of January 2019. Later, the optical counterpart was detected by several observatories in various bands and with detection of absorption lines the redshift found to be, z = 0.4245 ± 0.0005 (Castro-Tirado et al. 2019). Surprisingly, MAGIC detected GRB 190114C in the sub-T eV energy domain starting at T 0 + 50 s. A clear excess of gamma ray events were detected with a significance greater than 20 σ within the first 20 minutes with energies greater than 300 GeV (Mirzoyan et al. 2019). The Fermi GBM light curve shows a bright, multi peaked pulses from T 0 + 0 s to T 0 + 15 s followed by a fainter emission lasting up to T 0 + 200 s. The calculated T90 (Koshut et al. 1995) duration of the light curve was found to be 116 s ( within 50 − 300 KeV ), along with an energy fluence (within 10 − 1000 keV ) of (3.99×10 −4 ± 8.10×10 −7 ) erg cm −2 and the estimated isotropic energy release was 3×10 53 erg. This source was also detected by AGILE/MCAL in the 0.4 − 100 M eV energy band for duration of 6.2 s (Ursi et al. 2019). 
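For orientation, the quoted isotropic-equivalent energy follows from the fluence and redshift through the standard relation below, where S_γ is the 10-1000 keV fluence, d_L the luminosity distance (roughly 2.3-2.4 Gpc for z = 0.4245 in a standard flat ΛCDM cosmology, an assumed value), and k the band (bolometric) correction; with these inputs one obtains a few × 10^53 erg once the band correction is applied, in line with the value quoted above.

```latex
\[
E_{\gamma,\mathrm{iso}} \;=\; \frac{4\pi\, d_{L}^{2}\, k\, S_{\gamma}}{1+z} .
\]
```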
As observed by Konus-Wind, the main part of the burst showed a hard-spectrum multipeaked pulse starting from T 0 to T 0 + 6 s with a fluence of (4.83 ± 0.10) × 10 −4 erg cm −2 (Frederiks et al. 2019). METHOD AND ANALYSIS The data were downloaded from publicly available data on the Fermi science Support Center (FSSC) 1 . The spectra were reduced using Fermi science tools software gtburst by standard methodology 2 . For LAT, transient event class and its instrument response function P8 TRANSIENT020 were used. The spectral analysis is performed in XSPEC (Arnaud 1996), and pgstat was used for testing various models since the data is Poissonian and the Gaussian background is derived from modeling the off-source intervals by a polynomial 3 . Furthermore, to fit the different components of the spectrum we used the Band model (Band et al. 1993) (B) for one, and B + powerlaw with a multiplicative cutoff component (B + CPL) for the other. A model with exponential cutoff applied to Band model (BC) is also used (see section A for the form of funtion. To find which model fits the data best, we used Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC). Given their properties, AIC is preferred to compare non-nested models such as Band function, or powerlaw. Whereas, BIC is preferred when nested models such as black-body with a band function are compared (Kass & Rafferty 1995). The change in AIC or BIC can predict the model with strong correlation to the data. All errors are calculated within 90% confidence level. We take brighter GBM-NaI detectors with off-axis angles less than 50 • and GBM-BGO covering same hemisphere of spacecraft as NaI detectors. In case of GRB 190114C , we have considered NaI 3, 4, 7, 8 and BGO 0, 1 (n3, n4, n7, n8, b0 and b1). LAT data is also used (both LLE and > 100 M eV data). Here, we would be referring to energies > 100 M eV as LAT-HE. To account for inter-instrument calibration, we applied a multiplicative constant factor (effective area correction factor) w.r.t the detector having highest count rate. The factor is allowed to vary up to ∼ 20 -30% as the EAC constant factor is not expected to differ by more than 30%. 3.1. GRB 160509A LAT-HE: transition from prompt to afterglow A detailed analysis of the prompt emission of GRB 160509A is presented in a previous study done by Vianello et al. (2017). A high energy cutoff (around 100 M eV ) to the Band function is found to be the best fit model. LAT emission can be divided into two major episodes. Initially, the emission is observed simultaneously with GBM. The LAT-HE flux evolution shows two components with different hardness (see left panel of Fig. 1). The former being a softer emission that lasts till ∼ 40s and the latter being a harder extended emission. The former is a fast varying (FV) component since its photon flux varies with time as ∝ t −3.98±0.53 . The latter being a slow varying (SV) LAT-HE component which may extend to an earlier time, thus showing us hardening of the spectral component as a result of its superposition with the earlier softer emission. Tam et al. (2017) have also studied the LAT-HE emission and have noted the soft and hard components. We, here, track the hard component by monitoring the LAT emission during the overlapping time-window which helps us smoothly observe the evolution of the spectral index. The FV component is soft and in the time-integrated spectrum can be thought to be the spectrum above the cut-off in the Band spectrum. 
The two components are dominant in different energy regions. The FV component is majorly populated by the photons with energies that are less than 200 M eV whereas the SV component by ones with energy greater than 200 M eV . The lightcurves in Fig. 2 (right panel) show photon with energies near 1 GeV are first observed after ∼ 20 s. This implies the presence of LAT-HE afterglow starting earlier than or beginning from 20 s. This claim can further be supported by the flux evolution of LAT in the energy range 0.1-10 GeV as seen in Fig. 1. By looking at the evolution of spectral index, we clearly see the transition from prompt to afterglow emission. In wider bins, this soft to hard transition can be seen in the 20 -27 s and 27 -37 s bins. To see this as a smooth transition we made narrower bins of 3 s duration and used the sliding window technique with a step of 1 s, or 2 s (for the last few bins). We plot the spectral index of the powerlaw fit obtained for these windows. The index evolves from a softer value observed in the bins 8 -13, 13 -15 and 15 -18 s to a harder value observed for the bins after 37 s (see Fig 1). Joint GBM-LAT analysis The energy range 8 -900 keV was used for NaI detectors, ∼ 0.2 -38 M eV was used for the BGO detectors, 20 -100 M eV was used from LLE and >100 M eV was used for LAT-HE. We neglected ∼ 30 -40 keV from our spectral analysis to exclude the 33.17 keV K-edge feature. In Fig. 2, we can observe that contrary to GRB 160509A , the initial emission in GRB 190114C is limited to the 1 -30 M eV band only, however, the bright pulse during the peak finds its correspondence in the 30 -100 M eV LLE band, and in LAT-HE only some photons with relatively low energies are observed. As in case of GRB 160509A , the later emission can contaminate the prompt emission. That is the afterglow component, the presence of which could be felt prominently at low energies and during the prompt emission. This component noticeably pollutes the prompt spectrum after 4.8 s (Ravasio et al. 2019). Interestingly, the LAT photon index also shows soft to hard evolution (Wang et al. 2019) similar to GRB 160509A . We thus explore the joint GBM-LAT data for the possibility of a spectral cutoff in the prompt emission. Looking at the light-curve morphology, it is intuitive that this cutoff, if present, will show considerable evolution as well as contamination from the afterglow. We resolve the spectrum in 1 s bins which is the shortest possible bin using archived LLE data. Interestingly, a fit to the Band function has a systematic trend in its residuals beyond 100 M eV , and this contrast is most prominent in the 3 -4 s bin as shown in Fig 3. This could be regarded as the signature of a cutoff in the energy spectrum around this energy range. So we added a powerlaw component with an exponential cutoff. The added component returned a well constrained cutoff ∼ 50 M eV at 3 -4 s since the GBM trigger. The improvement in statistics strongly favours the addition of a cutoff powerlaw. Alternatively, we modeled with BC, B + BB and BB + BC; and BB + BC is strongly favoured among these, however, in comparison with B For further confirmation of the cutoff, we just take the data above 10 M eV and model it by both powerlaw and a multiplicative cutoff. The fit to powerlaw resulted in a slope of 2.53 +0.14 −0.13 along with pgstat degree of freedom (dof)= 107.2(72), and that to cutoff powerlaw shows a cutoff at 60 +70 −22 M eV with slope 1.5 ± 0.5 and pgstat (dof) = 92(71). 
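The model comparison quoted just above can be checked directly from the fit statistics, treating pgstat as a -2 ln L value. The bin count below is inferred from the quoted degrees of freedom (dof = n - k, so n ≈ 74), and the parameter counts (2 for the powerlaw, 3 for the cutoff powerlaw) are assumptions rather than values stated in the text.

```python
import math

def bic(stat, k, n):
    """Bayesian information criterion from a -2 ln L fit statistic with k free parameters and n bins."""
    return stat + k * math.log(n)

n = 74                      # inferred from dof = 72 (powerlaw, k = 2) and dof = 71 (cutoff powerlaw, k = 3)
bic_powerlaw = bic(107.2, 2, n)
bic_cutoffpl = bic(92.0, 3, n)
print(f"delta BIC = {bic_powerlaw - bic_cutoffpl:.1f}")   # ~10, matching the value quoted in the text
```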
Thus, the feature could be recovered with ∆BIC = 10 which shows that an energy cutoff is very strongly preferred over a simple powerlaw decay. We show the fits to both models in Fig 4, and also derive confidence contours for cutoff energy (E c ) and index (ξ) of the ∼ E −ξ exp(-E/E c ) function. The cutoff can be constrained well (Fig 4) and is also close to what is determined from the entire data fit in this interval. The cutoff thus obtained can be because of the γ − γ absorption, for these < 100 M eV cutoffs the target photon energy should be comparable to the cutoff energy E c thus allowing us to estimate the bulk Lorentz factor Γ (Lithwick & Sari 2001) as: However, it is only true in the case when the effect of pair production is ignored. In a fully time-dependant model this could be 1.5 -2 times lower (Gill & Granot 2018). We can also estimate the limiting energy from synchrotron emission (e.g. Guilbert et al. 1983;de Jager & Harding 1992;Piran & Nakar 2010;Atwood et al. 2013), which can produce cutoff energy feature, using, where α F is fine structure constant. For our results, we used lower limit of the Lorentz factor and divided it by factor 2 to compensate over-prediction by Equation 1. Further time-resolved analysis In a previous work, Wang et al. (2019) argue for the presence of a blackbody component during the peak of the initial phase (0.7 -1.71 s) However, we should be cautious here as this could be as a result of larger bin size taken for their analysis. They have used Bayesian blocks (Scargle et al. 2013) for constructing the time intervals for analysis during the prompt phase. The Bayesian blocks are the segments in time with statistically constant signal in a particular bin and a new block is formed when the change is statistically significant. Their bins (two in number) during 0.7 -1.71 s are comparatively larger than their other bins around this. This is because of the low variability and an almost constant signal for a longer duration. Wider bin is risky as it can smear the spectral evolution and can even appear as a blackbody component. An example can be found in Chand et al. (2018) where a fast evolution of peak energy in ∼ 1 s appeared as a blackbody in the coarse bins. We, here, further divided the bins by using time-tagged-events (TTE) data in higher resolution and found that the models without a BB in these bins are favoured or are equally well in explaining the underlying spectrum. We chose a signal to noise ration of 50 (from n4). In the time-interval (0.7-1.7), we could construct 8 bins in this manner. We chose models Band function (B) A1, a blackbody added to band function (B + BB) A3, a blackbody added to CPL (BB + CPL), a broken powerlaw model with two sharp breaks bkn2pow and a broken powerlaw with sharp breaks and a cutoff (bknpowC). The formulas for all the models used are reported in the Appendix (Section A). We presented our results in Table. B. During 0.7-1.7 s, bknpowC describes the spectrum at par with BB + CPL. So, the black body fitted in Wang et al. (2019) can be modeled by a low energy break. However, in the later phase the spectrum could modeled by BB + Band and Ravasio et al. (2019) also modeled the spectra with smoothly broken powerlaw, however, they also found the spectral index becoming harder during these times (2.45 -5.69 s). Therefore, we can say that the spectrum is initially non-thermal with a low energy cutoff (sub-M eV ) which later becomes quasi-thermal. 
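Returning to the opacity and synchrotron arguments made earlier in this section: the two expressions referred to there did not survive extraction, but plausible standard forms, offered as a hedged reconstruction rather than a quotation of the paper, are the pair-production transparency condition for target photons of energy comparable to the cutoff itself, and the single-zone synchrotron "burn-off" limit on the observed photon energy. For E_c ≈ 50 MeV and z = 0.4245 the first relation gives Γ ≈ 140, consistent with the value reported in the discussion below.

```latex
\[
\Gamma \;\simeq\; \frac{(1+z)\,E_{c}}{m_{e}c^{2}} ,
\qquad\qquad
E_{\mathrm{syn}}^{\max} \;\simeq\; \frac{m_{e}c^{2}}{\alpha_{F}}\,\frac{\Gamma}{1+z}
\;\approx\; 70\ \mathrm{MeV}\times\frac{\Gamma}{1+z} .
\]
```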
DISCUSSION AND CONCLUSIONS We summarize the aforementioned results and other information at the serving end to draw conclusions: 1. Prompt emission happens to have multiple pulses with different evolution as reflected in lightcurves plotted in multiple energy bands (Fig. 2). In the same figure we also draw analogy with GRB 160509A . We showed in the light-curves, photons separated by 4.5 s (see Fig. 2). LAT-HE spectrum attains hard spectral indices (∼-2) after 6 s (Ravasio et al. 2019). 2. Contrary to GRB 160509A , GRB 190114C has shown considerable spectral evolution. At first the spectrum is non-thermal and at 2.7 s transforms into hard spectrum. Including LLE data, the timeresolved spectra during 2 -6 s show a cutoff in the spectrum. The cutoff is clearly found during the time 3 -4 s and hints can be seen after this time. The spectrum in 4 -5 s and after is affected by the emerging afterglow component. We also showed that the cutoff is less probable to be orig- inating from the intrinsic cutoff in the electrons energy distribution spectrum (Table 1 last column is the maximum energy of the photons that can be produced through synchrotron emission in these bins). 3. The Lorentz factor calculated from the sub-GeV cutoff (using Eq. 1) of the 3 -4 s bin is 138 +29 −23 , and a hint of an increase in the Lorentz factor towards the afterglow onset can be noticed. 4. The time-resolved spectrum shows a transition from non-thermal to thermal. 5. MAGIC observed GRB 190114C from T 0 + 50 s. A significant excess of gamma rays (>0.3 T eV ) up to T 0 + 1200 s was observed. After the first bright flash, the source faded rapidly. 6. Derishev & Piran (2019) suggested the origin for the sub-T eV radiation to be inverse Compton scattering of the soft X-ray photons. A confirmation from spectral fits is yet to be seen. For the radiation to survive from annihilation they obtained limits on the bulk Lorentz factor as well as the Lorentz factors of the electrons. 7. Fraija et al. (2019), through multiband spectral and temporal analysis of the forward and reverse shocks along with the best fit value of the circumburst density, estimated the value of the initial bulk Lorentz factor to be ∼ 600. After ∼400 s the synchrotron radiation undergoes a phase transition from a stratified stellar-wind like medium into a homogeneous ISM-like medium. The Lorentz factor found from the onset of the afterglow is higher than that derived from the prompt emission cutoff (see Table 1). The hard spectrum during the second pulse may suggest dissipation below the photosphere. The outgoing photons produced in this dissipation undergo Comptonization and escape at the photosphere which is a fuzzy zone in itself (Beloborodov & Mészáros 2017). The specialty of such a model spectrum is its strikingly similar shape as the observed one during this time, as it accounts for both the hardness as well as the high-energy cutoff seen in the spectrum. Low derived values of the bulk Lorentz factor using Equation (1) then implies the dissipation happens in the phase where the Lorentz factor is still not saturated. On the other hand, the spectrum of the initial pulse is much softer and can be explained originating from internal shocks in coasting phase. The sharp cutoffs in this case for the first pulse can be attributed arising from intrinsic electrons energy distribution. The start point of MAGIC detection (T 0 + 50 s) implies that the radiation is observed in the stellar-wind like medium since the phase transition occurred at 400 s. 
The Lorentz factor in wind-like medium evolves as where Γ t is the bulk Lorentz factor after time t, t d ∼ 4 s is the shock crossing time . Γ ∼ 600 is the initial Lorentz factor. The transition from stratified wind medium to constant density ISM occurs at ∼ 400 s and the Lorentz factor (Γ 0 ) during the transition is reported to be ∼ 220 (Fraija et al. 2019). At t=50 s, Γ t=50 s ∼ 320. This safely places the inverse Compton in the Thomson regime, however, it is much greater than Γ > 108 in a wind medium (Derishev & Piran 2019). The limit of 320 however, will require the seed photons to be ∼ 1keV , an order of magnitude less that 10 keV required for producing a 0.5 T eV photon in (Derishev & Piran 2019). This also has implication on the synchrotron photons produced, either γ e or the magnetic field will be affected. The limit 108 was derived using L X,iso calculated after 70 s. The analysis in Derishev & Piran (2019), might be revisited for accommodating these changes. However, the results seems to be consistent. From Lorentz factor decay in the constant density ISM, t = t0 (Γ 0 /Γ t ) 8/3 , t0 is the time at transition, the time it takes Lorentz factor 3800 s to decay to from 220 to 96 which is much larger than 20 min detection period observed in MAGIC. In the literature, several bright GRBs have shown a peculiar cutoff in the sub-GeV energies (Tang et al. 2015;Wang et al. 2017;Vianello et al. 2017). These GRBs are observed in a broadband. The analysis of the prompt emission for many GRBs could be marred by limited band observations and by contamination from the bright afterglow component existing simultaneously with prompt emission. The contamination has been seen in GRB 190114C by Ravasio et al. (2019), and in case of GRB 160509A , we highlighted the transition region by showing the evolution of prompt emission and afterglows. In case of GRB 160509A , the effect is feeble while GRB 190114C is severely affected. In our analysis of the joint data from Fermi detectors, we have recovered such a break in the time resolved analysis which could be smeared in the time-integrated emission and at certain point dominating external component. GRB 190114C also shows a spectacular evolution of the spectral shape. The initial spectrum becomes hard after ∼ 2.7 s where a fresh injection seems to be occurring. Such a hard spectrum can be possible in case of dissipation occurring at high optical depth below the photosphere. The shape of the spectrum can be explained in such a case arising from the Comptonization of the outgoing photons. The complete picture from our analysis would be (a) initial phase produced away from photosphere, (b) second hard phase is produced in a sub-photospheric dissipation and where the jet is in acceleration phase, (c) the afterglows are produced in external shock, (d) the sub-T eV radiation can also be consistent in this picture arising from inverse Compton scattering (Derishev & Piran 2019) where E b = (α − β)E p /(2 + α). The Band model with a high energy exponential cutoff at E c (BC) is given by Other models considered in this paper include: a blackbody 4 (BB), blackbody added to Band (B+BB), a broken powerlaw model with two breaks (bkn2pow 5 ), a broken powerlaw model with one break and a high energy cutoff (bknpowC 6 ), and a powerlaw model with a high energy exponential cutoff added to blackbody (BB+CPL 7 ), as given by:
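The functional forms referred to here did not survive extraction. Their usual XSPEC-style photon-spectrum versions, together with the blast-wave scalings invoked earlier in this section, are given below as a hedged reconstruction (normalisations and sign conventions may differ from the authors'); the wind-medium scaling is consistent with the numbers quoted in the text, e.g. Γ_t ≈ 320 at t = 50 s for Γ = 600 and t_d = 4 s.

```latex
\[
\Gamma_{t} \;=\; \Gamma\left(\frac{t}{t_{d}}\right)^{-1/4}\ \ (\text{wind-like medium}),
\qquad
t \;=\; t_{0}\left(\frac{\Gamma_{0}}{\Gamma_{t}}\right)^{8/3}\ \ (\text{constant-density ISM}),
\]
\[
N_{\mathrm{BC}}(E) \;=\; N_{\mathrm{Band}}(E)\,e^{-E/E_{c}},
\qquad
N_{\mathrm{CPL}}(E) \;\propto\; E^{\alpha}\,e^{-E/E_{c}},
\qquad
N_{\mathrm{BB}}(E) \;\propto\; \frac{E^{2}}{\exp(E/kT)-1} .
\]
```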
2019-05-29T13:10:29.919Z
2019-05-28T00:00:00.000
{ "year": 2019, "sha1": "ae4135752b93c9e790819d273ba46edd796b21dd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ae4135752b93c9e790819d273ba46edd796b21dd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15709730
pes2o/s2orc
v3-fos-license
On the Scalability and Message Count of Trickle-based Broadcasting Schemes As the use of wireless sensor networks increases, the need for efficient and reliable broadcasting algorithms grows. Ideally, a broadcasting algorithm should have the ability to quickly disseminate data, while keeping the number of transmissions low. In this paper, we analyze the popular Trickle algorithm, which has been proposed as a suitable communication protocol for code maintenance and propagation in wireless sensor networks. We show that the broadcasting process of a network using Trickle can be modeled by a Markov chain and that this chain falls under a class of Markov chains, closely related to residual lifetime distributions. It is then shown that this class of Markov chains admits a stationary distribution of a special form. These results are used to analyze the Trickle algorithm and its message count. Our results prove conjectures made in the literature concerning the effect of a listen-only period. Besides providing a mathematical analysis of the algorithm, we propose a generalized version of Trickle, with an additional parameter defining the length of a listen-only period. Introduction Wireless sensor networks (WSNs) have become more and more popular in the last few years and have many applications [1]. These networks consist of compact, inexpensive sensor units that can communicate with each other by wireless transmissions. They require efficient and reliable communication protocols that can quickly propagate new information, while keeping the number of transmissions low, in order to conserve energy and maximize the lifetime of the network. Several data dissemination protocols have been proposed in recent years for this purpose [3,6,13,25]. In [13], the Trickle algorithm has been proposed in order to effectively and efficiently distribute and maintain code in wireless sensor networks. Trickle relies on a "polite gossip" policy to quickly propagate updates across a network of nodes. It uses a counter method to reduce the number of redundant transmissions in a network and to prevent a broadcast storm [17]. This makes Trickle also a very energy-efficient and popular method of maintaining a sensor network. The algorithm has been standardized by the IETF as the mechanism that regulates the transmission of the control messages used to create the network graph in the IPv6 Routing Protocol for Low power and Lossy Networks (RPL) [24]. Additionally, it is used in the Multicast Protocol for Low power and Lossy Networks (MPL), which provides IPv6 multicast forwarding in constrained networks and is currently being standardized [10]. Since the algorithm has such a broad applicability, the definition of Trickle has been documented in its own IETF RFC 6206 [12]. Because of the popularity of the algorithm, it is crucial to gain insight into how the various Trickle parameters influence network performance measures, such as energy consumption, available bandwidth, and latency. It is clear that such insights are crucial for optimizing the performance of wireless sensor networks. However, there are only a few results in that regard, most of them obtained via simulation studies [4,13,14]. Some analytical results are obtained in [2,11,14,15]. First, in [14], qualitative results are provided for the scalability of the Trickle algorithm, but a complete analysis is not given. 
More specifically, the authors of [14] state that in a lossless single-cell network without a listen-only period, the expected number of messages per time interval scales as O( √ n). Here n is the number of nodes in the network and, adopting the terminology of [14], single-cell means that all the nodes in the network are within communication range from each other. Supposedly, introducing a listen-only period bounds the expected number of transmissions by a constant. The analysis in our paper confirms these claims and provides more explicit results. Secondly, in [11], a model is developed for estimating the message count of the Trickle algorithm in a multi-cell network, assuming a uniformly random spatial distribution of the nodes in the network. However, the influence of specific Trickle parameters on the message count is not explicitly addressed. Lastly, in [2,15], analytical models are developed for the time it takes the Trickle algorithm to update a network. To the best of the authors' knowledge, no other ana-lytical models for the performance of the Trickle algorithm have been published yet. The goal of this paper is to develop and analyze stochastic models describing the Trickle algorithm and gain insight in how the Trickle parameters influence intertransmission times and the message count. These insights could help optimize the energy-efficiency of the algorithm and consequently the lifetime of wireless sensor networks. Furthermore, knowing how the inter-transmission times depend on the various parameters could help prevent hidden-node problems in a network and optimize the capacity of wireless sensor networks. Additionally, our models are relevant for the analysis of communication protocols that build upon Trickle, such as RPL [24], MPL [10], CTP [8] and Deluge [9], and could give insight into their performance. As key contributions of this paper, we first propose a generalized version of the Trickle algorithm by introducing a new parameter η, defining the length of a listenonly period. This addition proves to be useful for optimizing the design and usage of the algorithm. Furthermore, we derive the distribution of an inter-transmission time and the joint distribution of consecutive inter-transmission times for large-scale single-cell networks. We show how they depend on the Trickle parameters and investigate their asymptotic behavior. Additionally, we show that in a single-cell network without a listen-only period, the expected number of transmissions per time interval is unbounded and grows as √ 2nΓ k+1 where k is called the redundancy constant and n the number of nodes as before. When a listen-only period of η > 0 is introduced, the message count is bounded by k/η from above. We then use the results from the single-cell analysis to develop an approximation for the transmission count in multi-cell networks. All our results are compared and validated with simulation results and prove to be accurate, even for networks consisting of relatively few nodes. Lastly, as an additional contribution of a more theoretical nature, we analyze a general class of Markov chains, closely related to residual lifetime distributions and provide the general stationary distribution for this class of chains. Compared with [16], which the present paper is based on, this contribution is the most important addition. Organization of the paper The remainder of this paper is organized as follows. In Sect. 2, we give a detailed description of the Trickle algorithm. 
Furthermore, we introduce a new parameter defining the length of a listen-only period and discuss its relevance. Then, in Sect. 3, we first briefly list the main results, before presenting the details of the model and its analysis. In Sect. 4, we develop a mathematical model describing the behavior of the Trickle algorithm in large-scale single-cell networks. This is followed by an analysis of a special class of Markov chains in Sect. 5 and we use the results from this analysis to analyze our original model in Sect. 6. In Sect. 7, we then validate our findings with simulations. In Sect. 8, we use the results from Sect. 6 to approximate the message count in multi-cell networks and we again compare our approximations with simulation results. In Sect. 9, we make some concluding remarks. 2 The Trickle algorithm The Trickle algorithm has two main goals. First, whenever a new update enters the network, it must be propagated quickly throughout the network. Secondly, when there is no new update in the network, communication overhead has to be kept to a minimum. The Trickle algorithm achieves this by using a "polite gossip" policy. Nodes divide time into intervals of varying length. During each interval, a node will broadcast its current information if it has heard fewer than, say, k other nodes transmit the same information during that interval, in order to check if its information is up to date. If it has recently heard at least k other nodes transmit the same information it currently has, it will stay quiet, assuming there is no new information to be received. Additionally, it will increase the length of its intervals, decreasing its broadcasting rate. Whenever a node receives an update or hears outdated information, it will reduce the length of its intervals, increasing its broadcasting rate, in order to quickly update nodes that have outdated information. This way inconsistencies are detected and resolved quickly, while keeping the number of transmissions low. Algorithm description We now describe the Trickle algorithm in its most general form (see also [13]). The algorithm has four parameters: -A threshold value k, called the redundancy constant. -The maximum interval length τ h . -The minimum interval length τ l . -The listen-only parameter η, defining the length of a listen-only period. Furthermore, each node in the network has its own timer and keeps track of three variables: -The current interval length τ . -A counter c, counting the number of messages heard during an interval. -A broadcasting time θ during the current interval. The behavior of each node is described by the following set of rules: 1. At the start of a new interval, a node resets its timer and counter c and sets θ to a value in [ητ, τ ] uniformly at random. 2. When a node hears a message that is consistent with the information it has, it increments c by 1. 3. When a node's timer hits time θ , the node broadcasts its message if c < k. 4. When a node's timer hits time τ , it doubles its interval length τ up to τ h and starts a new interval. 5. When a node hears a message that is inconsistent with its own information, then if τ > τ l it sets τ to τ l and starts a new interval, otherwise it does nothing. In Fig. 1, an example is depicted of a network consisting of three nodes using the Trickle algorithm with k = 1 and τ = τ h for all nodes. During the first interval, node 3 is the first node that attempts to broadcast and consequently it is successful. The broadcasts of nodes 1 and 2 during that interval are then suppressed. 
During the second interval, the broadcast of node 2 suppresses the other broadcasts. Note that in the example in Fig. 1, the intervals of the three nodes are synchronized. However, in general, the times at which nodes start their intervals need not be synchronized. In a synchronized network, all the nodes start their intervals at the same time, while in an unsynchronized network, this is not necessarily the case. In practice, networks will generally not be synchronized, since synchronization requires additional communication and consequently imposes energy overhead. Furthermore, as nodes get updated and start new intervals, they automatically lose synchronicity. The listen-only parameter η Note that the parameter η is not introduced in the description of the Trickle algorithm in [12] and [13]. We have added this parameter ourselves, so we can analyze a more general version of the Trickle algorithm. The authors of [13] propose always using a listen-only period of half an interval, i.e., η = 1 2 , because of the so-called short-listen problem, which is discussed in the same paper. When no listen-only period is used, i.e., η = 0, sometimes nodes will broadcast soon after the beginning of their interval, listening for only a short time, before anyone else has a chance to speak up. If we have a perfectly synchronized network, this does not give a problem, because the first k transmissions will simply suppress all the other broadcasts during that interval. However, in an unsynchronized network, if a node has a short listening period, it might broadcast just before another node starts its interval and that node possibly also has a short listening period. This possibly leads to a lot of redundant messages and is referred to as the short-listen problem. In [13], it is claimed that not having a listen-only period and k = 1 makes the number of messages per time interval scale as O( √ n), due to the short-listen problem. When a listen-only period of τ/2 is used, the expected number of messages per interval is supposedly bounded by 2, resolving the short-listen problem and improving scalability. However, introducing a listen-only period also has its disadvantages. Firstly, when a listen-only period of τ/2 is used, newly updated nodes will always have to wait for a period of at least τ l /2 before attempting to propagate the received update. Consequently, in an m-hop network, the end-to-end delay is at least m τ l 2 . Hence, a listen-only period greatly affects the speed at which the Trickle algorithm can propagate updates. To prevent this, it has been proposed in [15] to dynamically adjust η, based on how new the current information of a node is, greatly increasing propagation speed. 123 Fig. 2 One node carries the complete transmission load of the network Secondly, introducing a listen-only period has a negative effect on the load distribution. This is illustrated by Fig. 2, where node 2's broadcasting intervals completely overlap with node 1's listen-only periods and vice versa. Consequently, one node will always transmit and suppress the other node's transmissions, depending on which node first starts broadcasting. The probability of having an uneven load distribution increases as η increases. For these reasons, one might want to consider using a listen-only period of different lengths, which raises the question of what length is optimal. Therefore, we have added the parameter η, which allows us to investigate the effect of using a listen-only period of general length on the message count. 
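As a concrete reference for the five rules listed in the algorithm description above, the following sketch gives a minimal, event-driven rendering of a single Trickle node with the generalized listen-only parameter η. The class and method names are our own; the paper itself uses Mathematica and OMNeT++ rather than Python, so this is an illustrative restatement of the rules, not the authors' code.

```python
import random


class TrickleNode:
    """Minimal rendering of the generalized Trickle rules with listen-only
    parameter eta. Times are in the same units as tau."""

    def __init__(self, tau_l, tau_h, k, eta, rng=None):
        self.tau_l, self.tau_h, self.k, self.eta = tau_l, tau_h, k, eta
        self.rng = rng or random.Random()
        self.tau = tau_l
        self._start_interval(start_time=0.0)

    def _start_interval(self, start_time):
        # Rule 1: reset timer and counter, pick theta uniformly in [eta*tau, tau].
        self.interval_start = start_time
        self.c = 0
        self.theta = self.rng.uniform(self.eta * self.tau, self.tau)
        self.has_broadcast = False

    def hear_consistent(self):
        # Rule 2: count consistent messages heard during the current interval.
        self.c += 1

    def hear_inconsistent(self, now):
        # Rule 5: on inconsistent data, shrink the interval and start over.
        if self.tau > self.tau_l:
            self.tau = self.tau_l
            self._start_interval(start_time=now)

    def tick(self, now):
        """Advance the node to absolute time `now`; return True if it broadcasts."""
        broadcast = False
        elapsed = now - self.interval_start
        # Rule 3: when the timer hits theta, transmit only if fewer than k
        # consistent messages were heard during this interval.
        if not self.has_broadcast and elapsed >= self.theta:
            self.has_broadcast = True
            broadcast = self.c < self.k
        # Rule 4: at the end of the interval, double tau (up to tau_h) and restart.
        if elapsed >= self.tau:
            old_tau = self.tau
            self.tau = min(2 * old_tau, self.tau_h)
            self._start_interval(start_time=self.interval_start + old_tau)
        return broadcast
```

A driver loop would call tick at each node's theta and interval-end instants and deliver every returned broadcast to the other nodes' hear_consistent (or hear_inconsistent) handlers, according to the network topology being modeled.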
Main results for single-cell networks In this section, we first briefly list the main results concerning the message count and scalability of the Trickle algorithm in single-cell networks. In the following sections, we will then discuss the analytical model and its analysis leading to these results. We consider a steady-state regime, where all nodes are up to date and their interval lengths have settled to τ = τ h and without loss of generality, we will assume that τ h = 1. In this steady-state regime, as dictated by the Trickle algorithm, nodes still need to repeat the information they have, in order to be able to detect possible inconsistencies within the network. However, if no new information enters the network for a long period of time, all communication during this time is essentially redundant; hence, message count is the critical performance measure. When new information enters the network, nodes should quickly disseminate it and return to this steady-state regime. We denote by N (k,n) the number of transmissions during an interval of length τ h for a given threshold value k in a single-cell network consisting of n nodes. Additionally, let us denote an inter-transmission time for a given value of k and cell-size n by T (k,n) . First we look at the case k = 1. We show that the cumulative distribution function of T (1,n) for large n behaves as This lets us deduce that We conclude that E[N (1,n) ] ∼ 2n π , when η = 0. This proves the claim from [13] that when no listen-only period is used, , and shows that the pre-factor is 2 π . Furthermore, when η > 0, Trickle scales well and E[N (1,n) ] ↑ 1 η , with a convergence rate of √ n, proving another claim from [13]. For the case k ≥ 2, we first derive the density function for the distribution of k − 1 consecutive inter-broadcasting times. We then use this result to deduce that the density function of the distribution of T (k,n) for large n behaves as Here and Additionally, we find for the jth moment of T (k,n) : Hence, for We then use these results to derive the asymptotic distributions of inter-broadcasting times. When η > 0 and k ≥ 2, we show that and (1), as n → ∞ and k → ∞. For the case η = 0, we show that the density function f (k) (t) of the limiting distribution of n 2 T (k,n) as n → ∞ satisfies where Lastly we show that Modeling the broadcasting process In this section, we develop a mathematical model describing the Trickle broadcasting process, which allows us to analyze the message count in single-cell networks. Suppose we have a single-cell network consisting of n nodes which are all perfect receivers and transmitters. Furthermore, we assume that all nodes are up to date and τ = τ h = 1 for all nodes. Lastly, we assume that the interval skew of the nodes is uniformly distributed, meaning each node has one interval starting at some time in the interval [0, 1) uniformly at random. In order to analyze the message count, we treat the process of nodes attempting to broadcast as a Poisson process with rate n. This assumption is motivated by the following lemma, which shows that the properly scaled process of nodes attempting to broadcast behaves as a Poisson process with rate 1 as n grows large. Lemma 1 Let N n be the point process of times that nodes attempt to broadcast in a single cell consisting of n nodes with η ∈ [0, 1]. Then if we dilate the timescale by a factor n, the process N n converges weakly to a Poisson process with rate 1 as n grows large. Proof See Appendix. 
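Several displayed formulas in the "Main results" subsection above were lost in extraction. The k = 1 statements can be recovered from the hazard-rate argument developed in the next subsection; under the Poisson-attempt approximation of Lemma 1 and with τ_h = 1 they read as follows. This is our reconstruction, consistent with the surviving fragments (the "2n/π" expression and the 1/η bound); the k ≥ 2 formulas are not reconstructed here.

```latex
% k = 1, Poisson-attempt approximation, tau_h = 1:
\[
  F^{(1,n)}(t) \;=\; 1 - \exp\!\Big(-\tfrac{n\,(t-\eta)^2}{2(1-\eta)}\Big),
  \qquad t \ge \eta ,
\]
\[
  \mathbb{E}\big[T^{(1,n)}\big] \;=\; \eta + \sqrt{\tfrac{\pi (1-\eta)}{2n}} ,
  \qquad
  \mathbb{E}\big[N^{(1,n)}\big] \;\approx\; \frac{1}{\eta + \sqrt{\pi(1-\eta)/(2n)}} .
\]
% Hence E[N^{(1,n)}] ~ sqrt(2n/pi) when eta = 0, while for eta > 0 the expected
% message count increases to the constant 1/eta from below as n grows.
```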
Let us denote the PDF and CDF of an inter-transmission time T (k,n) P for this process with Poisson broadcasting attempts by f (k,n) (t) and F (k,n) (t), respectively. Recall that by T (k,n) we denote an inter-transmission time for a given value of k and cell-size n for the original Trickle broadcasting process, which is not Poisson. We will use the fact that the distribution function of T (k,n) P will tend to provide an accurate approximation for that of T (k,n) for large n (because of Lemma 1), but the derivation of explicit error bounds or limit theorems is outside the scope of the present paper. It is important to keep in mind that we are analyzing the Poisson broadcasting process, and all results for T (k,n) (and N (k,n) ) are only true asymptotically, for n → ∞. k = 1 We first consider the case k = 1. Suppose at time 0, a broadcast occurs. Now assume at time t, a node ends its listening period and attempts to broadcast. In order for the node's transmission not to be suppressed, it must have started its listening period after time 0. If it started listening before this time, it would have heard the transmission at time 0 and its broadcast would have been suppressed. This means that the node's broadcast will be successful if its corresponding timer was smaller than t. Since broadcasting times are picked uniformly in [η, 1], this probability is 0 if t < η and t−η 1−η otherwise. Hence, if we define λ(t) to be the instantaneous rate at which successful broadcasts occur, we can write It is well known that the hazard rate [7], Theorem 2.1): Hence, We conclude Hence, for the case η = 0, we find E[N (1,n) ] ∼ 2n π . This proves the claim from [13] when no listen-only period is used. For η > 0, we get from (14) that This implies E[N (1,n) ] ↑ 1 η from below, proving the claim that introducing a listenonly period bounds the number of transmissions per interval by a constant. k ≥ 2 We now look at k ≥ 2, starting by examining the case k = 2. We can apply similar reasoning as we did for the case k = 1. Suppose again that at time 0 a broadcast occurs and let −T −1 be the time of the last broadcast before time 0. Now similarly to before, a broadcasting attempt at time t will be successful if the corresponding node started its listening interval after time −T −1 . Hence, the instantaneous rate at which successful broadcasts occur conditioned on T −1 = ν is given by This tells us that for k = 2, the sequence of consecutive inter-transmission times T (2,n) P = {T (2,n) P,i } ∞ i=0 forms a Markov chain with transition probabilities as in (17). For the general case k ≥ 2, the same reasoning applies. Suppose again that at time 0, a broadcast occurs and let −T −(k−1) be the time of the (k − 1)th broadcast before time 0. Then a broadcasting attempt at time t will be successful if the corresponding node started its listening interval after time −T −(k−1) . Hence, the instantaneous rate at which broadcasts occur conditioned on T −(k−1) = ν is again given by Eq. (16). Thus, in general, the sequence T forms a Markov chain. Moreover, as we will show in the next section, this Markov chain is a specific case of a class of Markov chains closely related to residual lifetime distributions. Markov chains of residual lifetimes In order to analyze the Markov chain T (k,n) P and determine its steady-state distribution, it will help us to first study a more general class of Markov chains. The most important result of this section is the following theorem. 
with support (a, b) for some a ≥ 0 and b > 0 and where we allow b = ∞. Denote by F(y) the cumulative distribution function of Y . Let X = {X i } ∞ i=1 be an m-dependent sequence with probability transition function of the following specific form: Theorem 1 Let Y be a continuous random variable Remark 1 The distribution function in Eq. (19) can be interpreted as the stationary distribution of the mth iterated overshoot of a renewal process with inter-renewal distribution F(·): see [23]. That is, the residual of the residual of the residual. . . of Y iterated m times. In preparation for the proof of Theorem 1, we shall first study the sequence and let Λ(y) = − log[F(y)] be the cumulative hazard function of Y . Note that we can write F(y) = 1 − e −Λ(y) . Clearly, X m constitutes a Markov chain with state space X = {x ∈ R m : m j=1 x j ≤ b and x i ≥ 0 for 1 ≤ i ≤ m} and Markov transition function P as in (18). That is We show the following Theorem 2 An invariant measure of the chain X m is given by where C m is a constant. Proof Using (20) and (21) we find (21) can be normalized with a constant C m that satisfies where we have used the following lemma: Lemma 2 Let m ∈ N and G(x) be a positive real-valued integrable function. Assume that Proof By writing x = m+1 i=1 x i and a change of variables, we can write The inner m-tuple integral is equal to the volume of the m-dimensional simplex given by which is known to be x m /m! [19]. Applying this result completes the proof. Remark 2 In general, the steady-state joint cumulative distribution function of X m cannot be written as neatly as its density (21). However, one can show through induction that its multivariate Laplace transform is given by where L Y (s) = ∞ 0 e −sy dF(y) is the Laplace-Stieltjes transform of Y . In Theorem 2, we have established that the chain X m has an invariant measure π as given in (21). We now proceed to show that the density function of X m also converges to π as in (21) with normalization constant as given in (22) for any starting vector Given the existence of a strictly positive finite invariant measure π , inspection of Theorem 4 in [18] shows that if the chain X m is aperiodic and φ-irreducible, then the distribution of X m will converge to the distribution associated with the invariant measure given in (21). It follows easily from the fact that Y has support (a, b) for some a ≥ 0 and b > 0 that X m is aperiodic. Furthermore, φ-irreducibility of the chain (i.e., having positive probability of reaching every set A with φ(A) > 0 from every state x ∈ X , for some nonzero measure φ(·)) is also easy to verify. We can take φ(A) = μ L (A ∩ X ), where μ L (·) denotes the Lebesgue measure on R m . Starting from any x ∈ X , the chain is able to move to the set {x ∈ R m : min(x 1 , . . . , x m ) ≥ a} within m steps, and from there it can reach any other set in X within m steps. Hence, under the conditions of Theorem 1, X m is aperiodic and φ-irreducible. Now, with the stationary density function of X m , we can also determine the associated steady-state density of the process Σ = , which we will denote by π Σ . Using (21), we find We are now in a position to prove Theorem 1. Proof of Theorem 1 We already derived that under the conditions of Theorem 1, the density of X m will converge to the normalized version of the expression given in Eq. (21). Consequently, the density of Σ will converge to (24). Therefore, we can write the limiting stationary distribution of X as Hence, we have proven Theorem 1. 
123 Lastly, we use (19) to derive the moments of the stationary distribution of X. For where we have used the following lemma: Lemma 3 Let m ∈ N and G(x) be a positive real-valued integrable function. Assume that The last equality follows from a known identity involving the reciprocal of binomial coefficients (see [21], Corollary 2.2). We will now use the results from this section to analyze the Markov chain T (k,n) P from Sect. 4. Inter-transmission time distributions and message count We return to the model from Section 4. We already deduced that the sequence T (k,n) P = T (k,n) forms a Markov chain. Moreover, we showed that for k ≥ 2, with λ(t | ν) as in (16). Since T −(k−1) is actually the sum of the previous k − 1 interbroadcasting times, we find that (27) is of the form as in (18) with m = k − 1. Further examination then gives that for this case Y ∼ T 1,n P and F(t) is given by (12). Now, since all the moments of T 1,n P are finite and T 1,n P has support (η, ∞), Theorem 1 applies, and we can apply all the results from the previous Sect. to T (k,n) P . Recall that by f (k,n) (t) and F (k,n) (t), we denote the steady-state PDF and CDF of a transmission time T Here the normalization constant C (k,n) satisfies (29) Alternatively, by a change of variables and splitting the integral, we can write (29) in terms of a finite sum as which more clearly reveals the role of some of the parameters. There are two important observations we can make regarding the constant C (k,n) . First, for η = 0, (29) reduces to C (k,n) = (2n) Secondly, for η > 0, we have that C (k,n) ↓ (k−1)! η k−1 as n grows large. Let us now look at the sequence Σ (k,n) , i.e., the sequence of the sum of k − 1 consecutive inter-transmissions times. Equation (24) immediately gives its steadystate probability density function: 123 Finally, Eq. (19) gives and equivalently, Substitution of Eq. (16) reduces the right-hand side of (32) for t < η to and for t ≥ η, the right-hand side reduces to Lastly, we focus our attention on the moments of an inter-transmission time T (k,n) . Applying (26), we find for the first moment of T (k,n) For the case η = 0, this implies which is again O( √ n), as was conjectured in [13]. When a listen-only period is used, i.e., η > 0, implying that E[N (k,n) ] ↑ k η from below as n → ∞. This implies that for η > 0, the expected number of messages per interval is bounded by a constant and shows that Trickle scales well. Again using (26), we obtain the following expression for the jth moment of an inter-transmission time For the special case η = 0, this reduces to For the case η > 0, we deduce Limiting distributions The previous analysis allows us to determine the limiting distributions of T (k,n) as n and k grow large. We distinguish between two cases. Case 1: η > 0. First, Eq. (40) implies that for η > 0 and k ≥ 2, since all moments converge to the moments of the Beta distribution. For the density function of T (k,n) P (and hence also that of T (k,n) ), this implies that as n → ∞, We can relate this result to Equation (28). We know that the sum of k consecutive inter-transmission times converges to η as n → ∞. Equation (28) then tells us that if we look at an interval of length η, k − 1 broadcasting times are distributed uniformly in this interval, when n → ∞. Therefore, if we scale time by a factor k, we will see a Poisson process with intensity 1, as k → ∞, and expect to see exponential inter-broadcasting times, which is in agreement with (43). 
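The displayed limit laws (41) and (43) referred to above did not survive extraction. One reading consistent with the surrounding prose (k − 1 successful broadcasts falling uniformly in a window of length η, with exponential inter-broadcast times in the large-k limit) is the following; it is offered as an interpretation, not as the paper's exact statement.

```latex
\[
  T^{(k,n)} \;\xrightarrow{\;d\;}\; \eta\,\mathrm{Beta}(1,\,k-1)
  \qquad (n \to \infty,\ \eta > 0,\ k \ge 2),
\]
\[
  \frac{k\,T^{(k,n)}}{\eta} \;\xrightarrow{\;d\;}\; \mathrm{Exp}(1)
  \qquad (n \to \infty,\ k \to \infty),
\]
% consistent with E[T^{(k,n)}] -> eta/k, i.e. E[N^{(k,n)}] -> k/eta, and with the
% exponential inter-broadcast times mentioned in the text.
```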
For the case η = 0, we can write the density function f (k) (t) of the limiting distribution of n 2 T (k,n) for k ≥ 2 after scaling Equation (34) as Alternatively, performing integration by parts gives Combining (44) and (45), we find after some rewriting Direct computation gives All other distribution functions then follow from Equation (46). Additionally, from Equation (39), we have Using the fact that we find E √ nkT (k,n) j → j!, as n → ∞ and k → ∞. This finally implies that since all moments converge to the moments of the exponential distribution. The last result can be related to Lemma 1. If we look at the special case k = n, we know all broadcasting attempts will be successful. Hence, the process of nodes broadcasting will be the same as the process of nodes attempting to broadcast, from which we know it converges to a Poisson process with rate 1, as n → ∞, if we scale time by a factor n, because of Lemma 1. Furthermore, (49) also tells us that for this case, if we scale time by a factor n, we expect to see exponential inter-transmission times with rate 1. So these results are in good agreement with each other. Discussion of results We briefly discuss the implications of our results. First, Equation (36) gives us insight into the effects of the short-listen problem. We found that without a listen-only period, the number of transmissions per time unit scales as O( √ n), due to the short-listen problem. However, by introducing a listen-only period, the expected number of transmissions per interval is bounded by k/η: see (37). Hence, having η = 1 2 as suggested in [13] reduces the number of redundant transmissions and resolves the short-listen problem. However, one could also consider using lower values of η. For example, one could consider setting η = 1 4 . We know this roughly doubles the expected number of redundant transmissions, but it could also double the propagation speed. Indeed, whenever a node gets updated, it only has to wait for a period of at least τ l /4 before propagating the update, as opposed to a period of τ l /2. Hence, the choice of η involves a trade-off. Lowering η increases not only the propagation speed, but also the number of transmissions. In order to get a full understanding of this trade-off, more knowledge is needed on the propagation speed of the algorithm. Combined with results from this paper, this knowledge could provide guidelines for how to optimally set the Trickle parameters and the length of the listen-only period. Second, Eqs. (12), (31), (42), (43), (46), and (49) provide insight in the distributions of inter-transmission times. In our derivations, we have assumed that transmissions are instantaneous. However, in reality, transmissions take some time, although this time is generally short compared to the interval length τ h . Using the inter-transmission time distributions, one can estimate the probability that transmissions would overlap in time and hence interfere. Potentially, one can use this knowledge to optimize the medium access protocol and the capacity of the wireless network. Therefore, the impact of having non-instantaneous transmissions on the performance of Trickle is an interesting topic for future research. investigate how well this assumption holds in networks with only a few nodes. In order to do so, we compare our analytic results for the message count and the distribution of inter-transmissions times with simulations done in Mathematica. 
We simulate a lossless single-cell network using the Trickle algorithm, where transmissions occur instantaneously. In Fig. 3, we compare simulation results for the case η = 0 with the analytic results from Eq. (36). For each combination of k and n, we simulate a network consisting of n nodes using the Trickle algorithm and average over 10 3 runs of 100 virtual time units. Each run, we use a different interval skew chosen uniformly at random for each of the nodes. We see that the simulation results and the analytic results coincide very well, even for small cell-sizes, and that the analytic results provide a conservative estimate for the mean number of transmissions per interval. In Fig. 4, we compare simulation results for the case η = 0, k = 1, and n = 50 with the analytic results from Eq. (12). We show a histogram of the observed inter- (12). We see a very good match between the analytic result and simulations, even though we consider a network consisting of only 50 nodes. In Fig. 5, we compare (32) with simulations for the case η = 0, k = 3, and n = 50. Again, both results are in good agreement with each other, despite the relatively small size of the network. Note, also, that the asymptotically exponential behavior from (49) can already be seen in the density plot. In Fig. 6, we compare simulation results for the case η = 1 2 with analytic results from Eq. (37). Here, also, we find that the analysis gives an accurate estimate for the mean number of transmissions in small networks. From our analysis, we also know that E[N (k,n) ] should converge to 2k as n grows large. From the graph, we see that this convergence is quite slow. In Figs. 7 and 8, we compare simulation results for k = 1 and k = 3, respectively, with the analytic results from Eqs. (12) and (32), where η = 1 2 and n = 50. Like before, we see a very good match between the analytic results and simulations. Note, that the asymptotic behavior from (41) can already be recognized in the density plot of Fig. 8. Additionally, we have simulated more realistic network settings in the OMNeT++ simulation tool. As mentioned in Sect. 6.2, in reality, transmissions do not occur instantaneously but take some time M; hence, collisions can occur. To investigate the effect of not having instantaneous and possibly colliding transmissions, we have simulated the same scenarios as we did with Mathematica and compared the results. The simulator incorporates IEEE standard 802. 15.4. We find that the OMNeT++ and Mathematica simulations produce nearly indistinguishable results, which is not surprising. Since τ h tends to be very large compared to a transmission length M, the probability of nodes trying to broadcast simultaneously tends to be very small. Furthermore, since we are dealing with a single-cell network, the CSMA/CA protocol prevents all collisions in the OMNeT++ simulations. Therefore, both simulators provide nearly the same results. Since the Mathematica simulations are computationally less demanding and much more easily reproducible, we have omitted the OMNeT++ simulation data. Multi-cell network Suppose now that we have a network consisting of n 2 nodes placed on a square grid, where not all nodes are able to directly communicate with each other. Instead, each node has a fixed transmission range R, which means that when a node sends a message, only nodes within a distance R of the broadcaster receive the message. 
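For readers without access to the Mathematica notebooks, a compact Python Monte Carlo of the steady-state single-cell process (all intervals fixed at τ_h = 1, uniform interval skew, instantaneous lossless transmissions) can reproduce the comparisons above. The reference value √(2n)·Γ((k+1)/2)/Γ(k/2) for η = 0 is our reading of the garbled prefactor quoted in the introduction and conclusion; treat both the simulator and that formula as illustrative rather than as the paper's exact code.

```python
import bisect
import math
import random


def simulate_messages_per_interval(n, k, eta, horizon=200, warmup=20, seed=1):
    """Average number of transmissions per unit time in a steady-state single cell.

    Assumptions: tau = tau_h = 1 for every node, uniform interval skew,
    lossless instantaneous transmissions (as in the paper's simulation setup).
    """
    rng = random.Random(seed)
    phases = [rng.random() for _ in range(n)]          # interval offsets in [0, 1)

    # One broadcast attempt per node per interval: (attempt_time, interval_start).
    attempts = []
    for phase in phases:
        for m in range(horizon):
            start = m + phase
            theta = rng.uniform(eta, 1.0)              # rule 1 with tau = 1
            attempts.append((start + theta, start))
    attempts.sort()

    successes = []                                     # sorted successful broadcast times
    for attempt_time, interval_start in attempts:
        # Counter c = successful broadcasts heard since this node's interval start.
        lo = bisect.bisect_left(successes, interval_start)
        hi = bisect.bisect_left(successes, attempt_time)
        if hi - lo < k:                                # rule 3: transmit only if c < k
            bisect.insort(successes, attempt_time)

    counted = [t for t in successes if warmup <= t < horizon - 1]
    return len(counted) / (horizon - 1 - warmup)


def asymptotic_messages(n, k, eta):
    """Large-n reference values discussed in the text (our reconstruction)."""
    if eta == 0.0:
        return math.sqrt(2.0 * n) * math.gamma((k + 1) / 2) / math.gamma(k / 2)
    return k / eta                                     # upper bound, approached from below


if __name__ == "__main__":
    for n, k, eta in [(200, 1, 0.0), (200, 3, 0.0), (200, 1, 0.5), (200, 3, 0.5)]:
        sim = simulate_messages_per_interval(n, k, eta)
        ref = asymptotic_messages(n, k, eta)
        print(f"n={n} k={k} eta={eta}: simulated {sim:.2f}, reference {ref:.2f}")
```

For the η = 0.5 cases the simulated values sit below the k/η bound, in line with the slow convergence noted above.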
While a fullyfledged analysis of multi-cell networks is beyond the scope of the present paper, we now briefly examine how the results for the single-cell scenario obtained in Sect. 6 can be leveraged to derive a useful approximation. We denote by N (k,n,R) MC the number of transmissions during an interval of length τ h for a given threshold value k in a cell consisting of n × n nodes with broadcasting range R. Hence, we are interested in determining the behavior of E N (k,n,R) MC . Approximation We will use the analytical results from Sect. 6 to develop an approximation for the expected number of transmissions per interval in multi-cell networks. Let us denote by S(R) the number of nodes a single broadcasting node having a broadcasting range of R can reach, i.e., the size of a single broadcasting-cell. Heuristically, we can reason as follows. We have an n × n grid consisting of approximately n 2 /S(R) distinct non-overlapping broadcasting-cells. Assuming each cell behaves independently of the others, we can approximate the expected number of transmissions per interval in the multi-cell case as follows: Using the fact that S(R) ∼ π R 2 and Eqs. (36) and (37), we can get more insight into the behavior of our approximation. For η = 0, we have that as R grows large: 123 Similarly, when a listen-only period is used, i.e., η > 0, we have that as R grows large: Consequently, we see how the short-listen problem potentially plays a role in multi-cell networks. Simulations We compare the approximation with simulations in order to evaluate its accuracy. We do so by plotting the ratio In Mathematica, we simulate a network consisting of 50 × 50 nodes placed on a grid for several values of k and R. For each combination of k and R, we simulate a lossless multi-cell network for 100 virtual time units. Each run, we use a different randomly chosen interval skew for the nodes. We use toroidal distance in order to cope with edge effects. In Fig. 9, we show a plot of (51) for the case η = 0. We see that the estimate given in Eq. (50) tends to slightly underestimate the number of transmissions per interval. However, the approximation is accurate within a factor 1.2. Furthermore, as k increases, the approximation becomes more accurate. We also note that for fixed k, the accuracy remains fairly constant as R grows. Finally, let us consider the effect of a listen-only period. In Fig. 10, we show a plot of (51) for the case η = 1 2 . We see again that the estimate given in Eq. (50) tends to The error slowly increases with the transmission range R. However, as k increases, the approximation becomes more accurate. Remark 3 In [11] , a method is presented for estimating the message count in synchronized networks. It is shown that this method also gives an accurate approximation for unsynchronized networks with η = 1 2 . However, the method is not suitable for smaller values of η and assumes a uniformly random spatial distribution of the nodes. Hence, if one is interested in listen-only periods of different length or regular network topologies, our approximation is preferable. Remark 4 Given the way we derived our approximation, one can conclude that its performance strongly depends on the fact that we are considering a network which is very homogeneous in terms of node density. For networks that are more heterogeneous in terms of node density, one cannot easily decide what value to take for S(R); hence, approximating the message count becomes difficult. 
Moreover, as shown in [22], Trickle's performance strongly depends on the network topology, which makes approximating the message count an even more difficult task. Conclusion In this paper, we presented a generalized version of the Trickle algorithm with a new parameter η, which allows us to set the length of a listen-only period. This parameter can greatly increase the speed at which the algorithm can propagate updates, while still controlling the number of transmissions. Furthermore, we have shown that this parameter influences how the transmission load is distributed among nodes in a network. We then presented a mathematical model describing how the message count and inter-transmission times of the Trickle algorithm depend on its various parameters. We showed that the broadcasting process in a single-cell network can be modeled as a Markov chain and that this chain falls under a special class of Markov chains, closely related to residual lifetimes. This class of Markov chains was then analyzed and its stationary distribution derived. For our model, these results lead to the distribution function of inter-transmission times and the joint distribution of consecutive inter-transmission times. We showed how they depend on the redundancy constant k, the network-size n, and the length of a listen-only period η. We also investigated their asymptotic behavior as n and k go to infinity. These distributions give insight in the energy-efficiency of Trickle and the probability that nodes try to broadcast simultaneously. These insights contribute to optimizing the design and usage of the Trickle algorithm. Specifically, we showed that in a network without a listen-only period, the expected number of transmissions grows as O( √ n), proving a conjecture from [13], and we identified that the pre-factor is √ 2Γ k+1 2 /Γ k 2 . Additionally, we showed that, when a listen-only period is used, the number of transmissions per interval is bounded from above by k η , proving a second conjecture from [13] on the scalability of Trickle. We have also performed a simulation study in Mathematica and the OMNeT++ simulation tool. We compared our analytic results, which hold for very large networks with instantaneous transmissions, with simulation results of small and more realistic wireless networks. We found a very good match between the analytic results and the simulations. Additionally, we used the results from the single-cell analysis to get an approximation for the message count in multi-cell networks. These results were also compared to simulation results from Mathematica and the OMNeT++ simulation tool. The approximation proved to be fairly accurate, in particular, for small values of η. A more comprehensive investigation of multi-cell networks and the influence of network topology on Trickle's performance would be interesting topics for further research. Finally, we note that the speed at which the Trickle algorithm can disseminate updates is discussed in [15]. Combined with results from this paper, this could provide insight in how to optimally set the Trickle parameters and the length of the listen-only period. Additionally, the impact of non-instantaneous broadcasts, interference, and combining Trickle with CSMA/CA on the performance of the Trickle algorithm is discussed in [20]. Theorem 3 Let M be a simple stationary point process on X = R with finite intensity λ, and let M n denote the point process obtained by superposing n independent replicates of M and dilating the scale of X by a factor n. 
Then as n → ∞, M n converges weakly to a Poisson process with parameter measure λμ L (·), where μ L (·) denotes the Lebesgue measure on R. Let N be the process of an arbitrary node's broadcasting moments. Then evidently the superposition of n of these processes N n gives us the process of broadcasting attempts by all the nodes in our cell. Hence, if we show that N is a simple stationary point process with finite intensity λ = 1, we can apply Theorem 3, resulting in Lemma 1. Let {B i }, with i ∈ Z, be the sequence of broadcasting times for an arbitrary node defining our process N . Then if we denote by S the interval skew and assume it is uniform, i.e., S ∼ U [0, 1], we can write
2015-09-29T09:49:34.000Z
2015-09-29T00:00:00.000
{ "year": 2015, "sha1": "804a31dde5ee05167e755107a42c56aec898b988", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11134-015-9438-x.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "acbbb59ffc9203cba1de0c95694c0ea252b74b9f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
270450196
pes2o/s2orc
v3-fos-license
Clinical features of pediatric mucormycosis: role of metagenomic next generation sequencing in diagnosis Background Mucormycosis is an uncommon invasive fungal infection that has a high mortality rate in patients with severe underlying diseases, which leads to immunosuppression. Due to its rarity, determining the incidence and optimal treatment methods for mucormycosis in children is challenging. Metagenomic next-generation sequencing (mNGS) is a rapid, precise and sensitive method for pathogen detection, which helps in the early diagnosis and intervention of mucormycosis in children. In order to increase pediatricians’ understanding of this disease, we conducted a study on the clinical features of mucormycosis in children and assessed the role of mNGS in its diagnosis. Methods We retrospectively summarized the clinical data of 14 children with mucormycosis treated at the First Affiliated Hospital of Zhengzhou University from January 2020 to September 2023. Results Of the 14 cases, 11 case of mucormycosis were classified as probable, and 3 cases were proven as mucormycosis. Most children (85.71%) had high-risk factors for mucormycosis. All 14 children had lung involvement, with 5 cases of extrapulmonary dissemination. Among the 14 cases, 4 cases underwent histopathological examination of mediastinum, lung tissue or kidney tissue, in which fungal pathogens were identified in 3 patients. Fungal hyphae was identified in 3 cases of mucormycosis, but only 1 case yielded a positive culture result. All patients underwent mNGS testing with samples from blood (8/14), bronchoalveolar lavage fluid (6/14), and tissue (1/14). mNGS detected fungi in all cases: 7 cases had Rhizomucor pusillus, 4 cases had Rhizopus oryzae, 3 cases had Rhizopus microsporus, 1 case had Lichtheimia ramosa, and 1 case had Rhizomucor miehei. Coinfections were found with Aspergillus in 3 cases, bacteria in 3 cases, and viruses in 5 cases. Conclusion Children with mucormycosis commonly exhibit non-specific symptoms like fever and cough during the initial stages. Early diagnosis based on clinical symptoms and imaging is crucial in children suspected of having mucormycosis. mNGS, as a supplementary diagnostic method, offers greater sensitivity and shorter detection time compared to traditional mucormycosis culture or histopathological testing. Additionally, mNGS enables simultaneous detection of bacteria and viruses, facilitating timely and appropriate administration of antibiotics and thereby enhancing patient outcomes. Introduction Mucormycosis, a severe fungal infection caused by Mucorales fungi, aggressively invades human blood, organs, and tissues.It poses a significant threat to children with suppressed immune function post-transplantation, with a high mortality rate (Cornely et al., 2019;Hassan and Voigt, 2019).Recent epidemiologic research reveals that mucormycosis is the third most prevalent invasive fungal disease among children trailing behind aspergillosis and candidiasis (Francis et al., 2018).There has been a notable increase in mucormycosis cases over recent years (Bitar et al., 2014), with rates in Asia ranging from 1 to 12.3 per million (Hassan and Voigt, 2019).The condition predominantly impacts individuals with diabetes or compromised immunity, including those with hematologic malignancies, transplant recipients, and patients who have undergone surgery, experienced burns, or suffered trauma (Feng and Sun, 2018). 
Underlying disease is a major influence on the development of pediatric mucormycosis. Among children with mucormycosis, 46% had a history of hematological malignancy, 46% had a history of neutropenia, 15.9% had been treated with hematopoietic stem cell transplantation (HSCT), 4.8% had been treated with solid organ transplantation, and 4.8% had a history of diabetes (Pana et al., 2016; Skiada et al., 2018). Children with mucormycosis, especially those presenting after HSCT, are usually immunocompromised, and the disease progresses rapidly. Therefore, a comprehensive treatment model combining rapid etiological diagnosis, correction of susceptibility factors, early surgical debridement, and systemic antifungal therapy is essential to improve prognosis and survival (Olivier-Gougenheim et al., 2021). Identifying mucormycosis early is a challenge because of its non-specific symptoms and signs. Current diagnosis relies primarily on imaging, histopathology, and mycological culture. The varied pathogenic characteristics of mucormycosis, similar to those of other invasive fungal infections, make diagnosis difficult (Chamilos et al., 2005; Lass-Florl et al., 2007). Histopathology or culture is considered the "gold standard" for diagnosis, but because of sampling difficulties and the limitations of culture methods, only about 50% of cases yield positive results (Roden et al., 2005; Walsh et al., 2012). Grocott's methenamine silver (GMS) staining is preferred for mucormycosis; in tissue sections the hyphae appear broad, irregular, non-septate or only sparsely septate, and branch at right angles (Goldberg et al., 2015). In contrast, definitive species identification requires molecular methods (mainly sequencing of the internal transcribed spacer regions) or matrix-assisted laser desorption ionization-time of flight mass spectrometry (Danion et al., 2023). Other methods, such as the (1-3)-β-D-glucan assay (G test) and polymerase chain reaction (PCR), have limitations in accurately diagnosing mucormycosis (Bellanger et al., 2011). Currently, there are no serological tests or serum biomarkers available for early diagnosis, underscoring the need for new methods (Stone et al., 2021). Metagenomic next-generation sequencing (mNGS), a modern molecular biology technique, has emerged as a promising tool. It is capable of identifying over 15,000 pathogen species with known genomic sequences (Gu et al., 2019). mNGS offers high sensitivity, short detection times, and the ability to diagnose rare pathogen infections, significantly enhancing pathogen detection rates in clinical settings (Grumaz et al., 2016; Zheng et al., 2021). This study conducts a retrospective analysis of the clinical features, treatment approaches, and outcomes of 14 pediatric mucormycosis cases. It aims to assess their characteristics and treatment efficacy and to explore the potential of mNGS in the early diagnosis of mucormycosis in children.
Study design and participants This retrospective study included 14 children with mucormycosis hospitalized at the Children's Hospital of the First Affiliated Hospital of Zhengzhou University from January 2020 to September 2023. The inclusion criteria were as follows: (1) children with proven or probable mucormycosis according to the definitions of invasive fungal diseases of the European Organization for Research and Treatment of Cancer/Mycoses Study Group of the National Institute of Allergy and Infectious Diseases (Donnelly et al., 2020); "proven" cases required positive results from mucormycosis culture and/or histopathological examination, while "probable" mucormycosis required a joint diagnosis by the hospital's imaging experts and clinicians; (2) completion of mNGS testing. Patients who met the following criteria were excluded: (1) age ≥18 years, (2) incomplete medical records. Furthermore, data on patients' baseline characteristics, clinical features, laboratory and imaging findings, diagnosis, treatments, and outcomes were collected. mNGS protocol Clinical samples, such as blood, bronchoalveolar lavage fluid, or lung tissue, were collected using aseptic techniques. Nucleic acids were extracted using the TIANamp Micro DNA Kit (DP316) from Tiangen Biotech Co., Ltd. (Beijing, China). A total of 100 ng of the extracted DNA underwent fragmentation, end repair, library construction, and sequencing. Quality assessment was performed using the Agilent 2100 system. Sequencing was conducted on the BGISEQ-100 platform at the Beijing Genomics Institute (Wuhan, China). After human sequences were mapped to the human reference genome (hg19) using the Burrows-Wheeler Aligner, the non-human sequences were analyzed. Reads with low quality or shorter than 35 base pairs were discarded. The remaining sequences were compared against four microbial genome databases, covering bacteria, viruses, fungi, and parasites. Comprehensive data analysis was conducted on the aligned sequences, and potential pathogens were identified and reported, including the number of precisely mapped reads. Abbreviations: mNGS, metagenomic next-generation sequencing; BALF, bronchoalveolar lavage fluid; HSCT, hematopoietic stem cell transplantation; L-AmB, liposomal amphotericin B; PM, pulmonary mucormycosis. Results Baseline characteristics This study initially included 20 children suspected of having mucormycosis. Six children were excluded because of the absence of mNGS testing and incomplete data, leaving 14 children in the study. Eight cases (57.14%) were male. The median age of the participants was 13.00 years (range 7.00 to 14.00 years). Table 1 lists the clinical characteristics of these patients. Among them, 11 cases (78.57%) had hematologic malignancies, 1 case (7.14%) had a mediastinal tumor, 1 case (7.14%) had diabetes, and 1 case (7.14%) had no underlying diseases.
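Referring back to the mNGS workflow described above (host-read removal against hg19, discarding low-quality or short reads, then classification against bacterial, viral, fungal, and parasite databases), the sketch below shows the general shape of the post-sequencing filtering step. The 35-bp length cutoff comes from the text; the quality threshold, class names, and the toy classifier are placeholders, and the study's actual pipeline (run on the BGI analysis platform) is not reproduced here.

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Read:
    sequence: str
    mean_quality: float      # placeholder per-read quality summary
    maps_to_human: bool      # result of upstream alignment to hg19 (e.g. with BWA)


MIN_LENGTH = 35              # reads shorter than 35 bp are discarded (stated in the text)
MIN_QUALITY = 20.0           # placeholder threshold; the study's exact cutoff is not given


def filter_reads(reads: Iterable[Read]) -> List[Read]:
    """Drop human-derived, short, and low-quality reads before classification."""
    kept = []
    for read in reads:
        if read.maps_to_human:
            continue
        if len(read.sequence) < MIN_LENGTH:
            continue
        if read.mean_quality < MIN_QUALITY:
            continue
        kept.append(read)
    return kept


def classify(reads: List[Read], databases: dict) -> dict:
    """Toy classifier: count exact substring hits against reference sequences.

    Real pipelines align reads against curated bacterial/viral/fungal/parasite
    databases; this stand-in only illustrates the bookkeeping (reads per taxon).
    """
    counts = {taxon: 0 for taxon in databases}
    for read in reads:
        for taxon, reference in databases.items():
            if read.sequence in reference:
                counts[taxon] += 1
                break
    return counts


if __name__ == "__main__":
    demo_reads = [
        Read("ACGTACGTACGTACGTACGTACGTACGTACGTACGT", 30.0, False),
        Read("ACGT", 30.0, False),                                     # too short, dropped
        Read("TTTTACGTACGTACGTACGTACGTACGTACGTACGT", 10.0, False),     # low quality, dropped
    ]
    demo_db = {"Rhizomucor pusillus (demo)": "ACGTACGTACGTACGTACGTACGTACGTACGTACGTAAA"}
    print(classify(filter_reads(demo_reads), demo_db))
```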
Culture, pathology, and NGS findings All cases underwent cultures of sputum, or bronchoalveolar lavage fluid, but only one case yielded a positive culture.Among the 14 cases, 4 children (28.57%) underwent histopathological examination, in which fungal pathogens were identified in 3 patients.Fungal hyphae was identified in 2 cases of pulmonary mucormycosis and 1 case of disseminated mucormycosis (Figure 2).Fungal hyphae was also found in 4 cases (28.57%) using immunofluorescence microscopy (Figure 2).And all cases tested negative in GM and G tests.All 14 cases underwent cultures from bronchoalveolar lavage fluid or sputum, but only 1 case (7.14%) yielded a positive culture result.The mNGS test is more sensitive than conventional diagnostic methods (P<0.001), in Table 2. Surgical intervention was undertaken in 3 patients (21.43%), with two children undergoing lobectomy of the lung.Due to disease progression, 8 patients (57.14%) were transferred to the Intensive Care Unit.6 cases (42.86%) died, while 8 cases (57.14%) showed improvement within 3 months of hospital discharge. Discussion This retrospective analysis focused on the clinical characteristics and mNGS diagnostic effectiveness in 14 mucormycosis children.The study found that 12 of these patients had known risk factors for mucormycosis, with 11 cases involving hematologic malignancies and 1 case with no underlying disease.One notable case in this study (Case 3) involved a previously healthy child with no history of underlying disease, immunosuppressive drug use, trauma, or other high-risk factors typically associated with mucormycosis.This child developed lung infection following non-specific symptoms such as fever, cough, dizziness, and headache, with the etiology identified as Rhizopus oryzae infection.Therefore, mucormycosis should also be considered in otherwise healthy children. 
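The sensitivity comparison quoted above (mNGS positive in 14/14 children versus 1/14 for culture, reported as P < 0.001) can be sanity-checked with a paired exact test on the discordant results. The paper does not state which test was used, so the calculation below is only a plausibility check, not a reproduction of the authors' statistics.

```python
from scipy.stats import binomtest

# Paired results for the same 14 children (assuming the one culture-positive
# child was also mNGS-positive, since mNGS was positive in all 14):
#   mNGS positive & culture negative : 13 discordant pairs
#   mNGS negative & culture positive :  0 discordant pairs
mngs_only = 13
culture_only = 0

# Exact McNemar test = binomial test on the discordant pairs with p = 0.5.
result = binomtest(culture_only, mngs_only + culture_only, p=0.5)
print(f"Exact McNemar p-value: {result.pvalue:.6f}")   # ~0.000244, i.e. P < 0.001
```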
Mucormycosis can target various body parts, including rhinoorbital-cerebral mucormycosis (ROCM), pulmonary mucormycosis (PM), skin/soft tissue infections (SSTI), gastrointestinal or renal infections (GI), disseminated mucormycosis, and infections in atypical locations.Disseminated infection was defined as infection at ≥ 2 non-contiguous sites (Roden et al., 2005).Underlying disease is a major influence on the development of mucormycosis in children, and the clinical type of mucormycosis can vary from one underlying disease to another (Pana et al., 2016;Skiada et al., 2018).Pulmonary mucormycosis is notably prevalent in hematologic malignancies and neutropenia patients (Jeong et al., 2019).The mortality rate for mucormycosis ranges from 40% to 80%, influenced by the patient's underlying conditions and infection site (Cornely et al., 2019).The risk of death is higher in individuals with major risk factors compared to those with other diseases (Kennedy et al., 2016;Jestin et al., 2021).Epidemiological studies on pediatric mucormycosis are limited, with mortality rates of 26.5%-33.3%reported in children with hematologic malignancies Mycological evidence (Pana et al., 2016;Ziaka et al., 2022).In our study, 6 children (42.86%) died within 3 months of discharge, all having underlying diseases: 4 with hematologic malignancies, 1 with a mediastinal tumor, and 1 with diabetes.Pulmonary mucormycosis typically presents with non-specific symptoms like fever, cough, breathing difficulties, and chest pain (Danion et al., 2015).The infection often affects the lung parenchyma and may spread to the chest wall, pulmonary artery, aorta or pericardium, and infiltration into the pulmonary artery can cause hemoptysis (Steinbrink and Miceli, 2021).In this study, all 14 children had lung involvement, predominantly presenting with fever, cough, and chest pain, with 9 patients experiencing neutropenia.Chest CT scans revealed consolidation as the most frequent presentation of PM, alongside mass lesions, nodules, and cavities.The main chest CT findings in the children with mucormycosis in this study were pleural effusion and consolidation, not limited to patients with neutropenia. Overall, for patients with suspected pulmonary mucormycosis in hematologic malignancies, clinical symptoms and pulmonary imaging may not present typically.However, clinicians should be vigilant for signs of consolidation in lung CT scans or vascular blockages in CT pulmonary angiography (Busca et al., 2012). Early diagnosis and treatment can help reduce mortality in patients with mucormycosis (Sipsas et al., 2018).However, the diagnosis relies on histopathology and culture.GMS staining is preferred, as mucormycosis can appear in tissue samples as broad, irregular, non-septate, or minimally sparsely septate hyphae, often branching at right angles (Frater et al., 2001;Goldberg et al., 2015).Even if histopathological examination shows characteristic changes of mucormycosis, tissue cultures often turn out negative, and blood cultures are usually not positive (Hammer et al., 2018).Serological tests like the GM-test and G-test), commonly used for detecting fungal infections, are often negative in mucormycosis patients (Pyrgos et al., 2008;Lass-Florl et al., 2021). 
In our study, all cases underwent cultures of sputum or bronchoalveolar lavage fluid, but only one case yielded a positive culture. Pathogens were found on histopathological examination in only three cases, and all cases tested negative in GM and G tests. In this study, mNGS was used to detect pathogens in the children's peripheral blood, bronchoalveolar lavage fluid, and tissue from the infection site, and all children were found to be infected with mucormycosis. In a retrospective study of mNGS for the detection of pathogens in lung infections, it was found that BALF mNGS greatly improved the accuracy and detection of pathogens in patients with lung infections (Wu et al., 2022). Therefore, BALF mNGS deserves particular attention in children suspected of having pulmonary mucormycosis. In this study it was found that the mNGS test is more sensitive than conventional microbiological tests (P<0.001). In a study that included 310 patients with suspected pulmonary invasive fungal infections, it was found that, compared with conventional microbiological tests, mNGS was superior in its diagnostic performance (AUC=0.967) (Wang et al., 2022). mNGS demonstrated the presence of mucormycosis at the molecular level, providing a basis for initiating early antifungal treatment against mucormycosis. Before the application of mNGS, conventional diagnostic methods had low success rates in identifying mucormycosis, and treatment was more reliant on clinical experience. While most mNGS samples in our study came from children's peripheral blood, this does not imply that the blood is the infection site: in a localized infection, Mucorales DNA fragments can access the bloodstream easily (Li et al., 2021). At the same time, mNGS is capable of detecting bacteria, fungi and viruses. In a study that included 13 children, mNGS was found to detect both fungi and bacteria in 53.85% of samples, and both fungi and viruses in 38.46% of samples (Zhang et al., 2022). In our study, 35.71% of the children were found to have viral infections, 21.43% had concurrent Aspergillus infections, and 21.43% had bacterial infections. Hence, mNGS results should be interpreted alongside clinical symptoms and imaging to pinpoint the presence and location of infection, particularly when they indicate infection with multiple pathogens. Based on the mNGS results, the appropriate and timely use of antibiotics and antiviral treatments for patients with mixed infections better controlled the symptoms. Of course, further research is needed to understand the clinical significance of low pathogen sequence numbers detected by mNGS in children. Prompt administration of effective antifungal treatment and surgical excision of necrotic tissue are crucial in preventing further damage to tissues and organs in mucormycosis patients, potentially decreasing long-term complications and improving survival chances (Jeong et al., 2015; Sipsas et al., 2018). In our study, 40% of children received prophylactic antifungal treatment before the onset of mucormycosis. All cases with potential breakthrough infections had underlying hematologic malignancies, which is close to the incidence of breakthrough invasive mucormycosis reported by Skiada et al.
in HSCT patients despite posaconazole prophylaxis (Skiada et al., 2020). 85.71% (12/14) of the children in our study were initially treated with L-AmB, with 10 patients undergoing combination treatment with L-AmB and posaconazole, and one patient (Case 3) receiving initial treatment with isavuconazole. In instances of severe, refractory or progressing mucormycosis, combining L-AmB with posaconazole has shown beneficial results (Cornely et al., 2014, 2019). However, a separate study indicated that initiating combination therapy with L-AmB and posaconazole did not significantly reduce mortality rates in a cohort of patients with confirmed hematologic malignancies (Kyvernitakis et al., 2016). Therefore, further research is needed to assess the potential benefits of L-AmB or isavuconazole as monotherapy or in combination treatments, based on patient outcomes and drug tolerance. This study's limitations must be acknowledged. First, as a single-center retrospective analysis, it inherently carries certain biases. Second, since some mNGS samples were derived from patients' bronchoalveolar lavage fluid, it is challenging to determine whether the microbes reported by mNGS are clinically significant pathogens or merely colonizing organisms. The data generated by mNGS need to be parsed by sophisticated bioinformatics tools, and the results are limited by the completeness of the databases and current scientific knowledge. Therefore, it is important to give due consideration to the clinical setting when interpreting mNGS data. Standardized operational and analysis processes for mNGS are not yet fully established, which may affect the accuracy and reproducibility of the results. Finally, the results lack consensus confirmation, and the diagnosis of mucormycosis according to guidelines is probable, as further histopathology or culture confirmation could not be pursued due to patient condition limitations.

In conclusion, mucormycosis in children is rare but carries a high mortality risk. Early in the disease course, it manifests with non-specific symptoms like fever and cough. Children suspected of mucormycosis based on clinical presentation and imaging results should be diagnosed early. Compared to traditional mucormycosis culture or histopathological testing, mNGS offers higher sensitivity and a shorter detection period, making it a useful supplementary method for early diagnosis. mNGS can also aid in detecting mixed infections and informing timely antimicrobial therapy, thus improving patient outcomes. Therefore, mNGS testing holds significant value in the early diagnosis of mucormycosis in children.
FIGURE 1 The chest CT findings of children with mucormycosis. (A) In the lower lobe of the left lung, there's a mass-like area of high density with an internal cavity. This suggests a localized infection with tissue destruction leading to cavity formation. (B) In the lower left lung, there's an encapsulated fluid density shadow accompanied by some consolidation. This could indicate an abscess or a localized collection of fluid, possibly due to infection. (C) There's evidence of obstructive pneumonia in the upper lobe of the left lung, with partial narrowing of the upper lobe bronchus and multiple patchy areas of high density observed distally. This is indicative of infection causing obstruction in the airways. (D) In the mediastinum, there's an unclearly demarcated mass-like soft tissue density shadow. Additionally, there's narrowing and truncation of the right upper lobe bronchus and multiple solid nodules in both lungs. This could represent the spread of infection or inflammatory response in the mediastinum and lungs.

FIGURE 2 The histopathological and fungal immunofluorescence microscopy findings. (A) Mediastinal biopsy showed chronic inflammation with necrosis in fibrous tissue and the presence of a few fungal hyphae, suggestive of mucormycosis. Special staining results were: acid-fast bacilli (AFB) negative, Periodic Acid-Schiff (PAS) positive, and Gomori methenamine silver (GMS) positive. This indicates the presence of mucor species, as evidenced by the PAS and GMS positivity. (B) Lung interstitial fibrous tissue showed hyperplasia with infiltration of acute and chronic inflammatory cells. There were tissue cells aggregating in the alveolar spaces, extensive infarction with inflammatory necrosis, and small focal granulomas, consistent with an inflammatory lesion. Fungal hyphae and spores were observed in the necrotic tissue, indicative of fungal infection. Special staining results were also AFB negative, PAS positive, and GMS positive, confirming the presence of fungal elements. (C) Bronchoalveolar lavage fluid (BALF) examination revealed non-septate hyphae branching at 90° angles, suggestive of mucormycosis-like fungal filaments. (D) Another BALF sample similarly showed non-septate hyphae with right-angle branching, indicative of mucormycosis-like fungal filaments. The final clinical diagnosis was determined by integrating these findings with clinical symptoms and other laboratory test results.

TABLE 1 Demographic and clinical characteristics of mucormycosis patients.

TABLE 2 Comparison of mNGS and conventional detection of mucormycosis.
2024-06-14T15:17:19.135Z
2024-06-10T00:00:00.000
{ "year": 2024, "sha1": "836061d5c32bc8d9820e9b4a3642b6e82e8b174b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fcimb.2024.1368165", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "534207e34fb5a07c3ef0469b63b241982049bcad", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
221859890
pes2o/s2orc
v3-fos-license
Gene and Protein Expression in Subjects With a Nystagmus-Associated AHR Mutation Recently, a consanguineous family was identified in Israel with three children affected by Infantile Nystagmus and Foveal Hypoplasia, following an autosomal recessive mode of inheritance. A homozygous stop mutation c.1861C > T; p.Q621∗ in the aryl hydrocarbon receptor (AHR) gene (AHR; MIM 600253) was identified that co-segregated with the disease in the larger family. AHR is the first gene to be identified causing an autosomal recessive Infantile Nystagmus-related disease in humans. The goal of this study is to delineate the molecular basis of this newly discovered human genetic disorder associated with a rare AHR gene mutation. The gene and protein expression levels of AHR and selected AHR targets from leukocyte cultures of healthy subjects and the patients were analyzed. We observed significant variation between mRNA and protein expression of CYP1A1, CYP1B1, and TiPARP under rest and AHR-induced conditions. The CYP1A1 enzymatic activity in induced leukocytes also differs significantly between the patients and healthy volunteers. Intriguingly, the heterozygous subjects demonstrate CYP1A1 and TiPARP gene and protein expression similar to homozygous patients. In contrast, CYP1B1 inducibility and expression vary between hetero- and homozygous subjects. Similarity and differences in gene and protein expression between heterozygotes and homozygous patients can give us a hint as to which metabolic pathway/s might be involved in the Nystagmus etiology. Thus, we have a unique human model for AHR deficiency that will allow us the opportunity to study the biochemical basis of this rare human mutation, as well as the involvement of AHR in other physiological processes. INTRODUCTION We recently described a consanguineous Israeli Arab family with three children affected by Idiopathic Infantile Nystagmus (IIN) and foveal hypoplasia, due to a homozygous stop mutation c.1861C > T; p.Q621 * in the gene encoding the aryl hydrocarbon receptor (AHR; MIM 600253) (Mayer et al., 2019). To date, only one gene has been associated with IIN, the FRMD7 gene, located in the Xq26 chromosome. X-linked mutations in the FRMD7 gene are the major known cause of familial IIN and idiopathic infantile periodic alternating nystagmus (Tarpey et al., 2006;Watkins et al., 2012). The AHR gene mutation in our patients results in a premature stop codon replacing a glutamine codon at positon 621 (out of 848 for the wild type protein) and thus, is expected to result in a truncated protein lacking a significant part of the C-terminus, including the Q-rich domain. This domain was recently shown to play a role in regulation of intracellular trafficking of the AHR, in the context of both nucleocytoplasmic shuttling and receptor activation. It was shown that the section between residues 648-661, which is in close proximity to the reported mutation, is necessary and sufficient to facilitate ligandinduced nuclear accumulation of the AHR and subsequent transcriptional activation (Kumar et al., 2001;Tkachenko et al., 2016). Recently, another family of Indian origin was described whose members carry a unique splicing mutation in the Q-rich domain of AHR. In this instance, the mutant AHR is truncated at approximately amino acid 400 which would make it even shorter than the mutant AHR in our family (Zhou et al., 2018). 
It is noteworthy that the homozygous mutant members in this family presented with a Retinitis pigmentosa-like retinal degeneration while any pupillary movements or morphological development of the fovea were not investigated in this study. Investigation of Ahr null mice (Ahr −/−) revealed that these animals were afflicted by horizontal nystagmus similar to that observed in our patients (Chevallier et al., 2013). AHR knockout mice (AHR-KO) were shown to possess an impaired optic nerve myelin sheath and exhibited modifications in lipid composition as well as in the expression of myelin-associated glycoprotein (MAG) (Juricek et al., 2017;Shackleford et al., 2018). Supporting the involvement of lipids in this pathology, AHR was recently shown to activate the promoter for a small subunit of the serine palmitoyltransferase, the first and rate-limiting step in sphingolipid synthesis, in both mice and HeLa cells (Majumder et al., 2020). Aryl hydrocarbon receptor is a highly conserved protein of the basic helix-loop-helix (bHLH)-PAS (PER-ARNT-SIM) family that acts as a ligand-activated transcription factor (Bock and Köhle, 2006;Barouki et al., 2012;Bock, 2013Bock, , 2014Zhou, 2016;Nebert, 2017). This receptor was shown to play a role under normal physiological conditions in a multitude of processes, such as immunity, inflammation, neurogenesis, and tumorigenesis (Bock, 2017;Kawajiri and Fujii-Kuriyama, 2017). The non-ligand bound AHR exists in the cytosol as part of a multiprotein complex containing heat shock protein 90 (hsp90), a 38-kDa AIP (AHR-interacting protein), and a less well characterized protein, the co-chaperone p23 , which maintains AHR in a ligand-binding conformation and prevents its nuclear translocation and/or dimerization with ARNT (Kewley et al., 2004). Ligand binding leads to a conformational change in the AHR which allows nuclear translocation and dimerization with ARNT (AHR nuclear translocator). AHR-ARNT dimerization causes dissociation of the AHR from its accompanying chaperones (Tsuji et al., 2014) and converts the AHR into a high affinity DNA-binding form Soshilov and Denison (2011). The heterodimeric AHR-ARNT interacts with xenobiotic-response elements (XREs) and upregulates transcription of xenobiotic-metabolizing enzymes such as the cytochrome P450 genes (e.g., CYP1A1 and CYP1B1) as well as the phase II enzymes (Whitlock, 1999;Abel and Haarmann-Stemmann, 2010;Bock, 2014;Nebert, 2017). The goal of our work is to investigate the biochemical consequences of the nystagmus-linked p.Q621 * AHR mutation. Although it was hypothesized that the protein might be degraded due to non-sense mediated mRNA decay (Mayer et al., 2019), this was never examined. In the present study, as a step toward understanding the disease mechanism, we examined the expression of the AHR gene and protein in leukocyte culture lysates from patients. Our results show that the AHR mutant is stably expressed at both the mRNA and protein levels, despite lacking a C-terminus. We further examined the expression of AHR target and AHR-linked genes in leukocytes from healthy controls, heterozygous subjects and homozygous patients upon induction with the known AHR ligand benzo(a)anthracene. Overall, we found a reduction in the expression levels of mRNA and protein of known AHR targets that was consistent with the patient genotype. 
This work presents the first biochemical analysis of this affected family, a unique "human model" for AHR dysfunction, which can help further the investigation of the multiplicity of AHR-related pathways.

MATERIALS AND METHODS
Blood samples from three patients and two unaffected heterozygotic family members, as well as samples from five healthy volunteers, were collected at the Hillel Yaffe Medical Center, Hadera, Israel, in accordance with the tenets of the Helsinki Declaration. The study was carried out at Tel-Aviv University and was approved by the Tel-Aviv University ethics committee. Written informed consent was obtained from participants in this study or from their legal guardians.

Leukocyte Isolation
Peripheral blood (15-20 ml) was collected from healthy and affected subjects. The blood was diluted twice in PBS and layered onto Ficoll-Paque Plus sterile solution. Centrifugation was carried out at 400 × g for 25 min at RT. The lymphocyte-rich fraction was collected for the subsequent steps.

Primary Lymphocyte Cell Culture
Primary lymphocyte cell cultures were established as described in Gurtoo et al. (1975) and Atlas et al. (1976). The previously collected lymphocyte-rich fraction was diluted in RPMI 1640 growth medium with L-glutamine supplemented with heat-inactivated Fetal Bovine Serum (FBS) and penicillin/streptomycin. The lymphocyte pellet was resuspended in growth medium containing the mitogen Phytohemagglutinin-M (PHA), and cells were counted and plated in T25 cell culture flasks at a density of 2-4 × 10^6 cells per ml. After incubation at 37 °C, 5% CO2 for 48 h, benzo(a)anthracene (BA) in acetone was added to a final concentration of 5 µM for AHR induction, and acetone was added to control flasks as a vehicle control. After 24 h of further incubation, the cultures in each treatment group were separately harvested. The cells were usually assayed immediately; otherwise, centrifuged cell pellets were stored at −80 °C for up to 1 month in frozen growth medium containing 10% DMSO. The total RNA extraction was carried out from freshly collected cells. Protein concentration was determined by the Bicinchoninic Acid (BCA) assay with reagents purchased from Sigma, and BSA was used as a calibration standard (Walker, 2009).

Quantitative RT PCR
The work was carried out on cultivated human lymphocytes as described in Lin et al. (2003). We used quantitative real time polymerase chain reaction (qRT-PCR) to quantitate the relative mRNA expression of AHR, Cytochrome P450-1A1 (CYP1A1), P450-1B1 (CYP1B1), TIPARP (TCDD-inducible poly(ADP-ribose) polymerase), ARNT (AHR Nuclear Translocator) and AHRR (AHR repressor). The quantitative RT-PCR included GAPDH (glyceraldehyde-3 phosphate dehydrogenase) as an internal standard. Relative expression of all mRNAs of interest was determined in naïve (unstimulated) as well as benzo(a)anthracene stimulated lymphocyte cultures. All measurements were performed in triplicate and standardized to the level of GAPDH expression. The expression level of target genes in uninduced lymphocyte cultures from healthy volunteers was used as a reference value for calculation of target gene expression in treated cultures. The relative expression level of the target gene, 2^(−ΔΔCt), was calculated using the method developed by Livak and Schmittgen (2001), with StepOne Software v.2.3 (Applied Biosystems Thermo Fisher Scientific). Total RNA was prepared using the PureLink RNA Mini Kit (Invitrogen) according to the manufacturer's protocol.
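For readers less familiar with the Livak and Schmittgen relative-quantification scheme referenced above, a minimal sketch of the 2^(−ΔΔCt) calculation is given below. The Ct values are invented for illustration only; in the study they are produced by the StepOne software, with GAPDH as the internal control and the uninduced healthy-control culture as the calibrator.

```python
# Minimal sketch of the Livak 2^-ΔΔCt relative-expression calculation.
# All Ct values below are hypothetical and chosen only to illustrate the arithmetic.

def delta_delta_ct(ct_target_sample, ct_ref_gene_sample,
                   ct_target_calibrator, ct_ref_gene_calibrator):
    """Return relative quantity (RQ) = 2^-ΔΔCt."""
    delta_ct_sample = ct_target_sample - ct_ref_gene_sample              # ΔCt of the sample
    delta_ct_calibrator = ct_target_calibrator - ct_ref_gene_calibrator  # ΔCt of the calibrator
    delta_delta = delta_ct_sample - delta_ct_calibrator                  # ΔΔCt
    return 2.0 ** (-delta_delta)

# Hypothetical example: a target gene in a BA-induced culture, normalized to GAPDH,
# relative to the uninduced healthy-control (calibrator) culture.
rq = delta_delta_ct(ct_target_sample=24.1, ct_ref_gene_sample=18.0,
                    ct_target_calibrator=27.2, ct_ref_gene_calibrator=18.1)
print(f"Relative expression (RQ): {rq:.1f}")  # 8.0-fold in this toy example
```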
The RNA quantity and quality were measured in a NanoDrop Spectrophotometer. 4-5 µg of total RNA was reverse transcribed to cDNA using Oligo(dT) and the SuperScript TM First-Strand Synthesis System for RT PCR (Invitrogen) according to the manufacturer's protocol. Quantitative PCR was performed using the qPCRBIO Fast qPCR SyGreen Blue Mix, Hi-ROX (PCRBIOSYSTEMS) on a StepOnePlus qRT PCR apparatus (Applied Biosystems). Primers were designed using the NCBI Primer Blast program for finding highly specific primer pairs for each human gene sequence taken from the NCBI gene library. The specificity of each primer pair was tested by standard curve calibration and melting curve analysis to confirm reliable reaction efficiency and the appearance of a single product for each gene of interest. The resultant products were analyzed by agarose gel electrophoresis to confirm the appropriate predicted size of the PCR product (data not shown). Primer pairs used in this work are listed in Table 1.

Protein Expression Level of AHR and AHR-Related Proteins
The protein expression levels of AHR, CYP1A1, CYP1B1, TIPARP, ARNT, and AHR-R were estimated by Western blot analysis.

Enzymatic AHH Activity Measurements
The aryl hydrocarbon hydroxylase (AHH) activity was measured by the fluorometric method described for human cultured lymphocytes by Gurtoo et al. (1975) with modifications made in Atlas et al. (1976). Briefly, the incubation mixture consisted of 50 mM Tris-HCl, pH 8.5, 0.36 mM NADPH, 0.42 mM NADH (4.2 mM), 3 mM MgCl2, 0.7 mg/ml BSA, 0.2 M sucrose, and DDW, and 10-15 µl lymphocytes (∼2.5 mg/ml, ∼5 × 10^4 cells/ml) in a total volume of 0.25 ml. The reaction was initiated by adding 2 mM benzo(a)pyrene followed by incubation at 37 °C in a shaking incubator for 1 h. The reaction was stopped by the addition of 0.75 mL cold acetone:hexane (1:3) mixture. Aliquots of 0.25 ml were taken from the organic upper phase [containing the Hydroxy-Benzo-Pyrene (OH-BP) reaction product] and added to 0.75 ml 1 M NaOH. The lower fraction containing water soluble OH-BP was transferred to a new Eppendorf tube. Fluorescence emission was measured at 596 nm under excitation at 515 nm using a Horiba Jobin Yvon FL3-11 Spectrofluorometer and supplied software. The AHH activity was expressed in picomol OH-BP/min/mg protein.

Statistical Analysis
Results are presented as means of at least three independent experiments. Data are expressed as means ± SD. Statistical significance of the differences was assessed using paired Student's t-test. p < 0.01 or p < 0.05 were the criteria of significance.

mRNA and Protein Expression of AHR in Patients
In the mutant AHR gene of our patients, the stop codon is located at amino acid 621, which should yield a truncated protein lacking a significant part of the C-terminus. We speculated that the mutant AHR may fail to be stably expressed, like many truncated proteins, as part of the cell's sophisticated mRNA quality control apparatus (Karamyshev and Karamysheva, 2018). Indeed, the phenotype of our patients is similar to that of mice harboring a complete AHR knockout, suggesting that the mutant AHR of our patients might indeed be unstable and degraded at the mRNA and/or protein level. Table 2 and Supplementary Figure S1 show that the relative quantity of AHR mRNA transcripts in heterozygotes (Het) is about 70% that of healthy subjects (WT), whereas AHR mRNA expression in homozygous patients (Hom) is about 40% of the level in WT. These differences most likely result from partial mRNA decay of mutant transcripts.
Thus, despite the stop codon mutation, the mutant AHR mRNA is stably expressed in the leukocytes of patients albeit at reduced levels. As would be expected, similar values were obtained when the samples were taken from leukocyte cultures that were treated with BA, as BA is not expected to induce the mRNA of AHR itself rather, expression of its target genes. The fact that we detected AHR mRNA expression ( Figure 1A) in homozygous patients, does not necessarily imply that the protein is stable expressed. Consequently, we used western blot analysis to estimate protein expression levels in lysates prepared from BA treated and untreated leukocyte cultures. We examined whether the AHR protein is expressed in lysates of cells taken from of healthy controls, heterozygotes and homozygous patients. To this end, we used two different antibodies for AHR immunodetection (Figures 1B,C). First, we carried out immunodetection with antibodies raised against the C-terminal domain of AHR. Since the mutant AHR has a stop codon at amino acid 621, it should be lacking its C-terminus. Consistent with this prediction, samples taken from homozygous mutant patients lacked any discernible band of AHR expression whereas heterozygotes expressed about half the amount of healthy controls ( Figure 1B). In order to examine whether the truncated protein is stable or undergoes rapid degradation, we used an antibody against an N-terminal epitope of AHR that should be able to detect both the full-length and the truncated forms of AHR. As seen in the Figure 2C AHR bands are detected in all tested samples, indicating that the truncated AHR protein exhibits significant stability in the lymphocytes. The level of AHR in heterozygous subjects was similar to that of healthy subjects, whereas homozygous patients had about half the amount found in healthy and heterozygous subjects. These results are in agreement with the mRNA expression pattern presented herein and AHR protein expression detected with C-terminus antibody. Interestingly, the mobility of the truncated protein of homozygous patients is similar to that of wild type AHR. It should be noted that anomalous migration of proteins in PAGE is a known phenomenon, explained by the ability of various physical characteristics such as acidity (Rath et al., 2009) or hydrophobicity (Tiwari et al., 2019), or helix-loophelix ("hairpin") sequences (Shirai et al., 2008) to influence the binding of SDS. mRNA and Protein Expression Levels of AHR Target Genes As a starting point for investigating the effects of the mutation, we carried out a quantitative expression analysis for some of the main AHR target genes CYP1A1, CYP1B1, and TiPARP [TCDD Inducible Poly(ADP-Ribose) Polymerase (Qiang et al., 2001), a less known AHR target that may contribute to AHR feedback regulation]. The tests were carried out on the same lysates extracted from lymphocyte cultures that were taken from patients and healthy controls, as described in the first section. We found that basal mRNA and protein expression levels of CYP1A1 and CYP1B1 in homozygous mutant patients were significantly lower than in healthy volunteers, reaching ∼26 and 32% of the healthy control average, respectively (Figures 2B,D). Basal mRNA expression levels of these targets in heterozygotes were similar to levels in healthy controls (see Table 2 and Figures 2A,C). In addition, we examined the induction of AHR target genes in lymphocytes that had undergone induction by benzo[a]anthracene (BA) treatment ( Table 2 and Supplementary Figure S1). 
Lymphocytes from healthy controls showed eightfold and ∼2.5-fold increases in CYP1A1 and CYP1B1 transcript levels, respectively, whereas the homozygous mutant patients exhibited no increase but rather a slight decrease in CYP1A1 and CYP1B1 transcript levels suggesting a failure of AHR to induce its downstream targets. A lack of activation in homozygous The data are represented as Relative Quantity (RQ) calculated by the RT PCR instrument software. The left column lists the tested genes. Patient genotype is described in column headings: Het -heterozygous subjects n = 2); Hom -homozygous patients (n = 3). Gene expression levels are normalized to the level of healthy control (WT) with GAPDH as an internal control. The data represent averages of three technical replicates for each of three healthy controls, two heterozygous subjects (Het), and three homozygous patients (Hom) for untreated and BA treated (+BA) leukocyte cultures. The data are shown as the mean of three replicates for each patient ±SD. A two-tailed t-test analysis of statistical significance was done; p < 0.01* and p < 0.05** refer to BA induced vs. untreated samples. FIGURE 1 | AHR mRNA (A) and protein (B,C) expression. In all graphs, the columns correspond to lysates of non-induced leukocyte cultures (white columns) or lysates of BA-induced cultures (BA, black columns). (A) Relative AHR mRNA expression was examined by qRT PCR using GAPDH as an internal control. (B) AHR protein expression was detected by western blotting and detection with an antibody against a C-terminal epitope. (C) AHR protein expression was detected by western blotting and detection with an antibody against an N-terminal epitope. In all experiments, leukocyte culture lysates were run on 10% SDS PAGE, transferred to nitrocellulose and probed with specific AHR C-terminal (B) or N-terminal (C) antibodies. Band densities were normalized to β-Actin and quantified as a fraction of non-treated WT samples. Data were obtained separately from either patients or healthy subjects and were averaged. Data are shown as mean ± SD, n = 4 technical replicates for each of three healthy volunteers (Heal), two heterozygous patients (Het), and three homozygous patients (Hom). Insets show a representative AHR western blot obtained using a. LI-COR Odyssey IR scanner. Upper bands correspond to AHR and lower bands correspond to the β-actin loading control. A two-tailed t-test analysis of statistical significance was done; p < 0.01* and p < 0.05** refer to variances between homozygous patients vs. healthy control samples. patients is expected, considering the location of the mutation, which is found in the transcription activation region (Tkachenko et al., 2016). This result is also consistent with the lack of activation observed in AHR deficient mice (Gonzalez et al., 1995;Fernandez-Salguero et al., 1996;Mimura et al., 1997;Shimizu et al., 2000;Bunger et al., 2008). Surprisingly, the heterozygous patients also showed no induction of CYP1A1 expression in response to AHR activation, despite carrying one wild-type allele. Similarly, no induction was observed for AHR-induced CYP1B1 gene and protein expression for homozygous patients. However, heterozygous patients demonstrate clear inducibility of CYP1B1 mRNA expression, a 1.6-fold increase over uninduced samples. (C) Relative CYP1B1 mRNA expression was examined by qRT PCR with GAPDH as internal control. (D) CYP1B1 protein expression was determined by western blotting with CYP1B1 antibodies. 
(E) Relative TiPARP mRNA expression was examined by qRT PCR using GAPDH as internal control. (F) TiPARP protein expression was detected with PARP antibodies. The leukocyte culture lysates were run on 10% SDS PAGE, transferred to nitrocellulose and probed with specific antibodies as described in the "Materials and Methods" section. Band densities were normalized to the β-Actin loading control and quantified as a fraction of non-treated WT. Data were obtained separately from either patients or healthy subjects and were averaged. Data are shown as mean ± SD, n = 4 technical replicates for each of three healthy volunteers, two heterozygous patients (Het), and three homozygous patients (Hom). Insets show a representative western blot obtained using a LI-COR Odyssey IR scanner. Upper bands correspond to the tested protein and lower bands correspond to the β-Actin loading control. A two-tailed t-test analysis of statistical significance was done; p < 0.01* and p < 0.05** refer to variances between homozygous patients vs. healthy control samples.

This compares to a 2.3-fold increase for lysates taken from treated wild type cultures (from healthy controls) compared to uninduced wild type samples. In terms of CYP1B1 protein expression, a very strong increase (greater than four-fold) was observed for both healthy subjects and heterozygotes. The expression of TIPARP, a third AHR target gene, was also examined. We found that expression of this gene in homozygous patients was lower than in healthy controls (70% of the control), whereas heterozygous subjects demonstrated a 40% higher basal level of this gene vs. healthy controls, although this increase was within the error range (see Table 1 and Figures 2E,F). In contrast to healthy controls, neither hetero- nor homozygous patients exhibited BA-induction at either the mRNA or protein expression level. The fact that the CYP1A1 and TiPARP genes are not induced upon BA-activation of AHR in the heterozygotes, who do not exhibit the nystagmus phenotype, might suggest that their gene products are not involved in the ocular pathology observed in the homozygous patients. Thus, CYP1B1 is the one gene among the three tested genes that behaves differently in heterozygotes, who do not have the IIN phenotype, and homozygotes, who suffer from the eye disease.

FIGURE 3 | mRNA and Protein expression of AHR target genes ARNT (A,B) and AHR-R (C,D). In all graphs, the columns correspond to lysates of non-induced leukocyte cultures (white columns) or lysates of BA-induced cultures (BA, black columns). (A) Relative ARNT mRNA expression was examined by qRT PCR using GAPDH as an internal control. (B) ARNT protein expression was detected by western blotting and detection with ARNT antibodies. (C) Relative AHR-R mRNA expression was examined by qRT PCR using GAPDH as an internal control. (D) AHR-R protein expression was detected by western blotting and detection with AHR-R antibodies. The leukocyte culture lysates were run on 10% SDS PAGE, transferred to nitrocellulose and probed with specific antibodies as described in the "Materials and Methods" section. Bands were normalized to the β-Actin loading control and quantified as a fraction of non-treated WT. Data were obtained separately for either patient or healthy subject and were averaged. Data are shown as mean ± SD, n = 4 technical replicates for each of three healthy volunteers, two heterozygous subjects (Het), and three homozygous patients (Hom). Insets show a representative western blot obtained using a LI-COR Odyssey IR scanner. Upper bands correspond to the tested protein and lower bands correspond to the β-Actin loading control. A two-tailed t-test analysis of statistical significance was done; p < 0.01* and p < 0.05** refer to variances between homozygous patients vs. healthy control samples.

mRNA and Protein Expression Levels of AHR-Regulated Genes ARNT and AHR-R
We also examined the expression of two AHR-interacting genes, ARNT and AHR-R, which respectively activate or repress AHR itself. Mutation of AHR is not expected to affect induction of these genes. Indeed, for all tested groups, the BA treated samples show the same expression levels as the untreated samples for both proteins. Figure 3 shows mRNA (Figures 3A,C) and protein (Figures 3B,D) expression levels of ARNT and AHR-R. Neither of these mRNAs or proteins undergoes activation when AHR is stimulated with BA. Basal levels of ARNT and AHR-R in heterozygous patients are similar to the levels found in healthy subjects. In contrast, the levels of ARNT and AHR-R in homozygous patients are slightly but significantly higher compared with healthy and heterozygous subjects, perhaps representing some kind of compensation mechanism for impaired levels or function of AHR. These differences were observed for mRNA expression, but not for AHR-R protein expression.

Activity of Aryl Hydrocarbon Hydroxylase (AHH)
As an additional measure of AHR activation, we measured the AHH activity carried out by a number of the CYP P450 monooxygenase enzymes, including CYP1A1, 1A2, and 1B1 (Liu et al., 2013), in the same leukocyte lysates as described above. Activity was determined immediately after lymphocyte culture harvesting and repeated during the course of 1 month using the frozen lymphocytes. The average AHH activity (3-6 technical replicates for each lymphocyte sample) was calculated (see Table 3, WT). The basal AHH activity of patients was lower than that of healthy controls, with the heterozygous subjects exhibiting approximately 30% of the activity of healthy controls and homozygous patients exhibiting less than 20% of the wild type non-induced activity. Healthy controls displayed a 3.6-fold increase in AHH activity over the basal level, compared with only a 2.3-fold increase for the heterozygous patients and no significant increase over basal activity for homozygous patients upon induction with BA. However, since the fluorometric method described in Gurtoo et al. (1975) and Atlas et al. (1976) measures the common hydroxylase activity, the contribution of each enzyme (CYP1A1, CYP1A2, or CYP1B1) is not known.

DISCUSSION
Aryl hydrocarbon receptor is a transcription factor that is known for its canonical role in the induction of detoxification enzymes upon exposure to xenobiotics and drugs. In addition, AHR was reported to be involved in a multitude of normal physiological functions as well as pathological processes such as cell cycle regulation, immune response, apoptosis, oxidative stress, cancer, tumorigenesis and CNS metabolism (Nebert et al., 2000; Marlowe and Puga, 2005; Puga et al., 2009; Juricek and Coumoul, 2018). Recently, two mutations in the AHR gene were shown to be associated with visual defects (Zhou et al., 2018; Mayer et al., 2019). The main goal of this study was to examine potential changes in mRNA and protein expression levels of canonical AHR targets in subjects carrying the stop mutation c.1861C > T; p.Q621*, which is associated with foveal hypoplasia and infantile nystagmus (Mayer et al., 2019).
For this purpose, we prepared lymphocytes from healthy heterozygous carriers and affected homozygous mutant patients and examined the expression of AHR, its partner and target genes, in naïve and benzo(a)anthracene (BA) induced cells. Using this readily available system, we were able to successfully examine the expression of AHR and some of its known targets and partners, although this system might have certain limitations in terms of investigating tissue-specific and developmental gene expression. Since it was hypothesized that the AHR mRNA might be degraded due to non-sense mediated decay, we examined first whether the truncated form of AHR is expressed in lymphocytes prepared from the peripheral blood of patients vs. healthy controls. Our results, both at the level of mRNA and protein expression, exclude the possibility of strong downregulation or degradation of AHR, as the AHR was detected in both homo-and heterozygous cells harboring the stop mutation. Further experiments showed that the full-length AHR protein was expressed approximately 50% in heterozygous patients compare to healthy volunteers, corresponding to the presence of one mutant and one wild type allele in the heterozygous genotype. Using AHR antibodies raised against the N-terminus, we observed AHR bands in all subjects at approximately the same mobility. This fact may be explained by abnormal electrophoretic SDS mobility which can be affected by a number of physical characteristics such as acidity, hydrophobicity and helical hairpin structures, as was most recently discussed (Shirai et al., 2008;Rath et al., 2009;Tiwari et al., 2019). Indeed, it was shown that approximately 40% of the yeast proteome does not migrate as would be expected based on the calculated molecular weight of the proteins (Tiwari et al., 2019) and such discrepancies in calculated molecular weight can reach even 30% (Shirai et al., 2008;Tiwari et al., 2019). It is also possible that the truncated protein undergoes some post-translational modification such phosphorylation, glycosylation, or ubiquitination, etc. which can lead to anomalous electrophoretic mobility (Guan et al., 2015). We also examined the effect of the mutation on the expression ARNT and AHR-R, two proteins that belong to the bHLH/PAS family and that regulate AHR transfer from the cytoplasm to the nucleus (Mimura et al., 1999;Kikuchi et al., 2003;Evans et al., 2008). No change in protein expression was observed in any of the samples from patients or healthy volunteers. In fact, the mRNA level of these genes was increased in homozygous cells, possibly as compensation for loss of ARH transcription factor activity. Expression of three AHR target genes, CYP1A1, CYP1B1, and TiPARP was also investigated. As expected, none of the three genes was induced upon BA treatment of lymphocytes prepared from homozygous mutant patients having two mutant alleles. The heterozygous subjects have one mutant allele and exhibit no nystagmus, as expected for a recessive gene. Interestingly, despite the presence of one wild type allele, there is no BA activation of the CYP1A1 and TiPARP targets in heterozygotes. Thus, the mutant AHR function seems to behave in a dominant manner, suppressing the function of the wild type allele, without affecting phenotype. One possible explanation for this phenomenon could lie in the different ability of the mutant AHR to dimerize with itself and other factors as well as to interact with DNA-binding sites. 
For example, it could form irreversible, non-functional dimers with ARNT, effectively sequestering this transcription partner from the wild type AHR. Moreover, it was shown that regulatory regions in the DNA can influence the types of dimers formed for some helix-loop-helix (bHLH) proteins (Inamoto et al., 2017). The fact that two proteins, CYP1A1 and TiPARP, are not induced in the heterozygous patients who lack the Nystagmus phenotype, indicate that these genes are not involved disease development. Notably, the only target that was found to be differentially expressed in heterozygous compared to affected homozygous patients, CYP1B1, was reported to be associated with congenital glaucoma (Vasiliou and Gonzalez, 2008). Our patients did not exhibit any sign of glaucoma and no signs of glaucoma were reported in AHR-KO mice. Future investigation is required to elucidate whether CYP1B1 protein is involved in development of disease manifestation in the AHR patients. Finally, measured AHH activity was shown to be considerably lower in both hetero-and homozygous patients compared to healthy volunteers (Table 3). Upon BA activation AHR no induction was observed in homozygous patients. The heterozygous patients show intermediate induction of the AHH activity, but its inducibility was significantly lower than revealed healthy subjects. This is inconsistent with the total lack of induction seen at the mRNA and protein levels and might be explained by the fact that the method used to measure enzyme activity detects the combined activity of several cytochrome P450 monooxygenases. It was recently published that AHR-KO are afflicted by a nystagmus and found to have an impaired optic nerve myelin sheath along with modifications in lipid composition and in the expression of MAG (Juricek et al., 2017;Shackleford et al., 2018). Experiments to determine MAG levels in lymphocyte cultures obtained from the blood of the patients yielded very low cDNA copy numbers, so we were not able to confirm this finding in our patients. Recently, AHR was found to bind and activated the gene promoter of serine palmitoyltransferase small subunit A (SPTSSA), which encodes a subunit of the serine palmitoyltransferase that catalyzes the first and rate-limiting step in de novo sphingolipid biosynthesis (Majumder et al., 2020). Thus analysis of genes related to lipid metabolism would constitute a logical next step in our studies. Overall, our results show that CYP1B1, at both the gene and protein level, undergoes induction in heterozygotes but not in homozygotes, making it a candidate to be involved in disease development. A recent study showed that activation of AHR induces changes in the expression patterns of 158 different genes (Liamin et al., 2018). We are interested in identifying additional genes that are induced in heterozygotes, but not in homozygous patients, and believe that such an analysis can give us clues in identifying pathways involved in the development of the observed ocular pathologies. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Tel-Aviv University Ethics Committee, Permission #14221241_20190326. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS AA, BW, CW, and NB conceived and designed the project. 
MM and RS provided the clinical samples and analysis. NB and CW performed experiments and wrote the manuscript. MR assisted in design of mRNA expression experiments. AA and BW interpreted the data and revised the manuscript. All authors read, revised and approved the final manuscript. FUNDING This project was supported by an internal fund by the Triangle Research and Development Center (TRDC) and Tel-Aviv University 34860000000 (RS and NB). Tel-Aviv University Grant 30003072000 (NB). ACKNOWLEDGMENTS We are grateful to the patients and their family members for their cooperation in this research. We wish to express our gratitude to them.
2020-09-24T13:08:02.119Z
2020-09-24T00:00:00.000
{ "year": 2020, "sha1": "ddd5dabf9379bb11ce13f2acc6f3ecca0368c44f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2020.582796/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ddd5dabf9379bb11ce13f2acc6f3ecca0368c44f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
234042142
pes2o/s2orc
v3-fos-license
Modeling of Bending and Radial Hydroelastic Oscillations for a Sandwich Circular Plate Resting on an Inertial Elastic Foundation The paper deals with the development and analysis of a mathematical model for a circular sandwich plate resting an inertial elastic foundation and interacting with pulsating viscous liquid layer. The sandwich plate is the bottom wall of a channel containing a thin layer of viscous liquid. The pressure in the viscous liquid layer changes due to a predetermined pressure pulsation law at the channel contour and its squeeze between the upper channel wall and the vibrating circular sandwich plate. The coupled hydroelasticity problem consisting of the Navier-Stokes equations, the continuity equation, and the dynamics equations for the circular sandwich plate with corresponding boundary conditions was formulated and solved. We studied the viscous fluid motion inside the channel as a creeping one. The elastic foundation was considered in the framework of inertial Winkler foundation model. To write the sandwich plate dynamics equations, we used the kinematic hypothesis of the broken normal. The hydrodynamic parameters of the liquid layer, including its stresses acting on the sandwich plate, were found. The final mathematical model is the system of partial differential equations for studying bending and radial hydroelastic oscillations of the sandwich plate. Its investigation was carried out by the Fourier method. We studied plate dynamic behaviour in the main vibration mode. In particular, the frequency response of the circular sandwich plate were constructed and studied. Introduction Multilayer materials and structures made of them are increasingly used in various industries. Therefore, the statics and dynamics of multilayer structures, as well as the study of their interaction with continuous media, is an urgent problem both from a theoretical and practical point of view. The development review of kinematic theories for studying the stress-strain state of multilayer structural elements is given in [1]. Reference [2] was presented equilibrium and dynamics equations for threelayered beams and plates obtained in the framework of the kinematic hypothesis of the broken normal. On the other hand, the problems of hydroelastic vibrations for homogeneous plates in various formulations are widely studied. For example, we can cite the following sources here. Hydroelastic vibrations of a circular plate were studied in [3] using the approximate Rayleigh energy method. Reference [4] considered vibrations of a circular plate interacting with an ideal fluid based on the formulation and solution of the coupled hydroelasticity problem. The fluid viscosity under hydroelastic vibrations of a circular plate is taken into account in [5]. In reference [6], the frequency AMSD 2020 Journal of Physics: Conference Series 1791 (2021) 012020 IOP Publishing doi: 10.1088/1742-6596/1791/1/012020 2 response of two parallel elastic sheets with thin incompressible viscous liquid between them under pressure waves over its upper surface is considered. The stability of a plate interacting with a viscous incompressible fluid was studied in [7]. Reference [8] is devoted to the study of the vibration behavior of a Kirchhoff nano-plate interacting with the surrounding viscous fluid. The experimental study results of natural frequencies and the corresponding decrements of harmonic vibrations for rectangular plates in air or on the fluid free surface are presented in [9]. 
Vibrations of a homogeneous circular plate interacting with a viscous liquid layer at the one its side and resting on an elastic foundation at the opposite side were studied in [10]. Hydroelastic bending vibrations of a circular plate resting on Pasternak foundation and interacting with an inviscid sloshing liquid are studied in [11]. We note the references [12][13][14][15] dealing with the bending vibrations of three-layered beams and plates resting on an elastic foundation. In particular, the free vibrations of a sandwich beam with stiff and compressible core and resting on Winkler foundation were investigated in [12]. In reference [13], the behaviour of circular sandwich plate resting on an elastic foundation under thermal impact was considered. The axisymmetric bending vibrations of a composite circular plate resting on Winkler foundation under local surface loads of different forms are investigated in [14]. Reference [15] was devoted to free vibrations of a three-layered circular plate resting on an elastic foundation under the temperature field influence, taking into account the foundation inertial properties. However, there is much less research on the interaction of multilayer plates with liquid. We can point to references [16][17][18][19][20] which studied the interaction of composite plates and beams with a liquid. However, the above-mentioned papers did not study the radial and bending oscillations of a circular sandwich plate resting on an elastic foundation caused by the action of viscous fluid stresses in the radial and normal directions. Statement of the Problem Let us consider two parallel coaxial disks forming a narrow channel completely filled with a viscous incompressible liquid. The channel bottom rests on an elastic foundation. We assume the upper disk is absolutely rigid. The bottom disk is a sandwich structure consisting of two face sheets of different thicknesses and a core between them. In other words, the bottom disk is a circular sandwich plate resting on the elastic foundation. We consider the core of the sandwich plate to be lightweight and incompressible, and we also neglect its work in the tangential direction. In this case to according [2], the sandwich plate is deformed as a single package, and its stress-strain state (SSS) is fully described by the radial displacement and deflection of the coordinate surface (the middle surface of the plate core), as well as by the rotation angle of the deformed normal in the core. The elastic foundation will be considered within the framework of inertial Winkler foundation model [15]. Let us introduce a cylindrical coordinate system with a pole located in the core center and restrict ourselves to considering the axisymmetric problem. The channel scheme and its geometric dimensions are shown in Fig. 1. 1 is an absolutely rigid disk, 2 is a circular sandwich plate resting on an elastic foundation, 3 is a viscous liquid layer We have introduced in Fig. 1 the following notation: R is the radius of the upper and bottom disks, h 0 is the gap between the disk and the sandwich plate in an undisturbed state, 2c is the thickness of the sandwich plate core, h 1 is the upper face sheet thickness of the sandwich plate, h 2 is the bottom face sheet thickness of the sandwich plate. Further, when studying the problem, we will assume that h 0 << R and w m << h 0 , where w m is the sandwich plate bending amplitude. 
In addition, we will believe that the pressure pulsation law at the channel contour is given, and the sandwich plate is clamped along its contour. The liquid outflow from the channel will be taken as a free liquid outflow into the same liquid with a given pressure pulsation law, i.e. we assume there is the edge cavity along the channel contour (for simplicity, it is not shown in the figure). In other words, we suppose that when the liquid flows out at the edge, the pressure in the channel edge cross section coincides with the predetermined pressure pulsation law at the channel contour. The equilibrium equations for the considered circular sandwich plate were obtained in [2,21], within the framework of the kinematic hypothesis of the broken normal proposed by the authors. Following [2,21] and taking into account the plate inertial forces in the radial and normal directions, we have written down the dynamics equations for the circular sandwich plate resting on inertial Winkler foundation in the form: Here u is the radial displacement of the circular sandwich plate; w is the normal displacement of the circular sandwich plate, i.e. deflection; φ is the rotation angle of the deformed normal in the plate core; q zr and q zz are the shear and normal stresses of the fluid acting on the plate, respectively; κ is the stiffness coefficient of the inertial elastic foundation; m f is the inertial coefficient of the inertial elastic foundation; V r and V z are the fluid velocity projections on the axis of the introduced coordinate system; ρ k is the material density of the k-th layer of the circular sandwich plate. The notation for the coefficients a 1 , ..., a 6 is presented in [2]. According to [22], due to the narrowness of the channel, we can assume that the fluid movement between the channel wall is creeping one. In this case we have the following equations for viscous liquid dynamics: Here p is the liquid pressure, ν is the coefficient of the kinematic viscosity for the liquid, and ρ is the liquid density. According to the above, as the boundary conditions for Eqs. (1) we consider the clamp conditions at the sandwich plate contour and the conditions for limited sandwich plate deflection at the symmetry axis is the pressure pulsation law at the channel contour, p 0 is the pressure constant level in the liquid. Determining of Elastic Displacements for the Circular Sandwich Plate Taking into account the damping properties of the thin viscous liquid layer in the channel gap, further we consider steady-state harmonic oscillations, since transients decay quickly. Moreover, the following relations take place in the problem statement under consideration Taking into account the above relations (5) in Eqs. (2) we obtain dynamics equations for the thin layer of the viscous incompressible liquid After that, we solve Eqs. (1), we obtain a mathematical model describing the radial and bending hydroelastic oscillations for the circular sandwich plate resting on the inertial Winkler foundation. This model consists of Eqs. (1) with expressions for viscous fluid stresses (6) and boundary conditions (3). We apply the Fourier method to study and resolve the resulting mathematical model. 
In particular, according to the boundary conditions (3) the elastic displacements u, w, and rotation angle φ can be represented as a series of eigenfunctions for the Sturm-Liouville problem: where J 0 is the zero-order Bessel function, J 1 is the first-order Bessel function; I 0 is the modified zero-order Bessel function; I 1 is the modified first-order Bessel function; β k is the root of the transcendental equation We set the number of retained terms in series (5), and then substituted them into the mathematical model of hydroelastic oscillations for the circular sandwich plate resting on inertial Winkler foundation. As a result, we obtain the system of ordinary differential equations for determining the laws of radial displacement, plate deflection, and the rotation angle of the normal in the circular sandwich plate core. Next, we restrict ourselves to considering the main oscillations mode (i.e., we consider the case for k=1) and write the system contains three equations, one of which is homogeneous. This allows us to reduce the system of three equations to the system of two equations with respect to radial displacement and deflection, which we have solved. Here we give the final expressions u and w for the main oscillations mode for the circular sandwich plate Here we have introduced the following notation Results and Discussion The developed hydroelastic vibrations mathematical model for the circular sandwich plate resting on inertial Winkler foundation and its solution based on the Fourier method allows us to obtain analytical expressions for frequency-dependent distribution laws of the plate's radial displacements and deflections. In this paper, these laws are defined for the main mode of hydroelastic oscillations; they are represented by formulas (9). Furthermore, in Eqs. (9), we distinguished the expressions of A u (ω) and A w (ω), which can be considered as the hydroelastic frequency responses for the circular sandwich plate cross section. Indeed, by setting the value for radial coordinate r in Eqs. (9), we will obtain expressions for the static radial displacement and deflection due to the constant pressure level p 0 , as well as expressions for the frequency-dependent radial displacement and deflection due to liquid pressure pulsation at the sandwich plate cross section corresponding to the specified radial coordinate. Using A u (ω) and A w (ω), we can find the resonant frequencies of plate vibrations that correspond to the maximum elastic displacements of the circular sandwich plate. The above makes it possible to study the plate's SSS at resonances and determine the distribution laws for fluid pressure inside the channel. From the analysis of the mathematical model and the obtained laws for the sandwich plate elastic displacements, we concluded that there is a cross-influence of radial displacements and deflections of the sandwich plate, because hydroelastic vibrations are described by a system of two differential equations with respect to radial displacement and deflection. In particular, calculations show that the hydroelastic frequency responses A u (ω) and A w (ω) have two maxima for the main oscillations mode, i.e. there is a cross-action of stiffness and inertia forces in the radial and normal directions. When considering homogeneous plates [10], this effect was not detected. 
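As a numerical illustration of the eigenfunction expansion described above: the transcendental equation defining the eigenvalues β_k is not reproduced in the extracted text, so the sketch below assumes the standard characteristic equation of a clamped circular plate, J0(β)I1(β) + I0(β)J1(β) = 0, together with the corresponding clamped mode shape. It only shows how the roots and mode shapes could be evaluated; it is not the paper's full hydroelastic model.

```python
# Hedged sketch: the characteristic equation used here is an assumption (standard
# clamped circular plate form), since the equation itself is missing from the text.
import numpy as np
from scipy.special import j0, j1, i0, i1
from scipy.optimize import brentq

def clamped_plate_equation(beta):
    """Assumed characteristic equation J0(b)*I1(b) + I0(b)*J1(b) = 0."""
    return j0(beta) * i1(beta) + i0(beta) * j1(beta)

def find_roots(n_roots=3):
    """Scan for sign changes and refine each bracketed root with brentq."""
    roots, beta, step = [], 0.5, 0.01
    prev = clamped_plate_equation(beta)
    while len(roots) < n_roots:
        beta += step
        cur = clamped_plate_equation(beta)
        if prev * cur < 0:  # sign change -> a root lies in (beta - step, beta)
            roots.append(brentq(clamped_plate_equation, beta - step, beta))
        prev = cur
    return roots

betas = find_roots()
print([round(b, 4) for b in betas])  # first root ~3.1962 for a clamped plate

# Unnormalized mode shape of the main mode (k = 1) on 0 <= r <= R:
beta1, R = betas[0], 1.0
r = np.linspace(0.0, R, 5)
W = j0(beta1 * r / R) - (j0(beta1) / i0(beta1)) * i0(beta1 * r / R)
print(W)  # vanishes at r = R, consistent with the clamped edge
```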
The calculations also show that an increase in the elastic foundation stiffness raises the resonant frequencies, whereas an increase in the foundation inertia coefficient shifts the resonances toward the low-frequency region. Summary and Conclusion Thus, a mathematical model has been formulated for studying hydroelastic oscillations of a circular sandwich plate driven by liquid pressure pulsation at the channel contour. The frequency responses for the radial and bending oscillations of the plate in the main mode have been obtained. Our calculations demonstrate the mutual influence of plate stiffness and inertia in the radial and normal directions, as well as the importance of accounting for the shear stresses exerted by the liquid layer. This conclusion follows from the fact that resonant frequencies governed by stiffness and inertia in both the radial and normal directions appear in both frequency responses A_u(ω) and A_w(ω). Accounting for the inertial and stiffness properties of the elastic foundation significantly changes the values of the resonant frequencies. We therefore conclude that, in contrast to homogeneous plates [3-5,10], the study of hydroelastic oscillations of sandwich plates requires taking into account both radial and normal inertia forces, the shear and normal stresses of the viscous fluid layer, and the inertial and stiffness properties of the elastic foundation.
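The reported influence of the foundation parameters can likewise be illustrated with a lumped single-degree-of-freedom analogue, in which the foundation stiffness κ adds to the effective stiffness and the foundation inertia coefficient m_f adds to the effective mass; the numerical values below are invented for illustration.

```python
# Toy single-degree-of-freedom analogue (not the paper's model): the foundation
# stiffness kappa adds to the effective stiffness and the foundation inertia m_f
# adds to the effective mass, reproducing the qualitative trend described above.
import numpy as np

def resonance(k_plate, m_plate, kappa, m_f):
    """Undamped natural frequency of the lumped analogue, rad/s."""
    return np.sqrt((k_plate + kappa) / (m_plate + m_f))

base = resonance(k_plate=1.0e6, m_plate=10.0, kappa=0.0, m_f=0.0)
stiffer_foundation = resonance(1.0e6, 10.0, kappa=5.0e5, m_f=0.0)
heavier_foundation = resonance(1.0e6, 10.0, kappa=0.0, m_f=5.0)

print(base, stiffer_foundation, heavier_foundation)
# stiffer_foundation > base (resonance rises); heavier_foundation < base (resonance falls)
```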
Comparison of priming versus slow injection for reducing etomidate-induced myoclonus: a randomized controlled study Background Etomidate injection is often associated with myoclonus. Etomidate injection technique influences the incidence of myoclonus. This study was designed to clarify which of the two injection techniques—slow injection or priming with etomidate—is more effective in reducing myoclonus. Methods This prospective randomized controlled study was conducted on 189 surgical patients allocated to three study groups. Control group (Group C, n = 63) received 0.3 mg/kg etomidate (induction dose) over 20 s. Priming group (Group P, n = 63) received pretreatment with 0.03 mg/kg etomidate, followed after 1 min by an etomidate induction dose over 20 s. Slow injection group (Group S, n = 63) received etomidate (2 mg/ml) induction dose over 2 min. The patients were observed for occurrence and severity of myoclonus for 3 min from the start of injection of the induction dose. Results The incidence of myoclonus in Group P (38/63 [60.3%], 95% CI: 48.0–71.5) was significantly lower than in Group C (53/63 [84.1%], 95% CI: 72.9–91.3, P = 0.003) and Group S (49/63 [77.8%], 95% CI: 66.0–86.4, P = 0.034). Myoclonus of moderate or severe grade occurred in significantly more patients in Group C (68.3%) than in Group P (36.5%, P < 0.001) and Group S (50.8%, P = 0.046), but the difference between Groups P and S was not significant (P = 0.106). Conclusions Priming is more effective than slow injection in reducing the incidence of myoclonus, but their effects on the severity of myoclonus are comparable. A change in the etomidate injection technique can itself reduce the incidence of EIM, eliminating the need for an additional drug with its inherent cost and potential side effects [2,4]. Pre-treatment with a low dose of etomidate (priming dose) [2] and slow injection of the induction dose [4] have both been shown to reduce the incidence of EIM. To the best of our knowledge, however, there are no studies comparing these two techniques, and therefore it is not clear which is the better option. We, therefore, conducted this study to determine which of the techniques is more effective in reducing the incidence of myoclonus. Materials and Methods This prospective, randomized controlled study was conducted in the Department of Anesthesia and Intensive Care, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India, after obtaining approval from the hospital ethics committee. The study was registered with the Clinical Trials Registry-India (CTRI/2015/02/005592). The study was conducted from November 2013 to May 2015, on 189 patients ranging in age from 18 to 60 years, of both sexes, American Society of Anesthesiologists physical status classification I-II, scheduled for elective surgery under general anesthesia. The surgical procedures included general surgery, urosurgery, gynecological surgery, plastic surgery, and ear-nosethroat (ENT) surgery. Exclusion criteria were: pre-existing adre-nal disease or adrenocortical insufficiency; receiving or having a history of receiving steroids within the last three months; sepsis; hypersensitivity to the study drug; history of a seizure disorder; and neurological disease. A total of 250 patients were assessed for eligibility. Of these, 61 patients were excluded, as 49 patients did not meet the inclusion criteria and 12 patients declined to participate. Finally, 189 patients were randomized into the three study groups, with 63 patients in each group (Fig. 
1). Patients were advised to fast for 6 h prior to surgery and were not given any premedication. Written informed consent was obtained from all patients after explanation of the risks and benefits of the study medication and the anesthesia technique. Using a computer-generated random number table, patients were allocated randomly into three groups (63 patients in each group), the control group (Group C), priming group (Group P), and slow injection group (Group S). Concealment of allocation was done using sequentially numbered, opaque, sealed envelopes, which were numbered in advance and opened only after the participant's name and other details were written on the appropriate envelope. In the operating room, standard monitors were applied (electrocardiogram, non-invasive blood pressure, and pulse oximeter) and baseline hemodynamic parameters were noted. Intravenous access was secured with an 18-gauge intravenous catheter. In Group P, patients received a priming dose of 0.03 mg/kg etomidate, followed after 1 min by an induction dose of 0.3 mg/kg, injected manually over 20 s. In Group S, patients received an induction dose of 0.3 mg/kg etomidate, injected slowly over 2 min using a syringe infusion pump. In Group C, patients received an induction dose of 0.3 mg/kg etomidate, injected manually over 20 s. All patients were watched for the occurrence of myoclonus (primary endpoint) by an independent observer for 3 min from the start of injection of the induction dose of etomidate. The time of onset of myoclonus was noted and categorized as 'within 1 min, ' '1 to 2 min, ' and '2 to 3 min' from the start of induction. The myoclonic movement was graded as 0 for no myoclonus, 1 for mild myoclonus (short movement of a body segment, e.g., a finger or wrist), 2 for moderate myoclonus (mild movement of two different muscle groups, e.g., face and arm), and 3 for severe myoclonus (intense myoclonic movement in two or more muscle groups or fast adduction of a limb) [4]. In case of difficulty in mask ventilation due to myoclonus, we planned to administer a neuromuscular blocking agent immediately. The time until loss of consciousness (LOC), defined as the time from start of injection of the induction dose of etomidate until loss of response to verbal commands (e.g., noncompliance when asked to open the eyes), was noted. The dose of etomidate administered when LOC was observed (LOC dose) was also noted. Three min after the start of induction with etomidate, patients in all the groups were administered fentanyl (2 µg/kg) and vecuronium bromide (0.1 mg/kg) to facilitate tracheal intubation. Anesthesia was maintained with isoflurane in a mixture with oxygen and nitrous oxide. After completion of surgery, residual neuro-muscular blockade was antagonized with neostigmine and glycopyrrolate. Heart rate, systolic, diastolic, and mean arterial pressure, and peripheral oxygen saturation were recorded every minute from the start of induction until tracheal intubation and at 1, 2, and 5 min after tracheal intubation (secondary endpoints). Statistical analysis Previous studies have estimated the incidence of EIM to be around 50% [1][2][3]. The sample size calculated for a 5% level of significance and power of 80% was 57 in each group, assuming that the proportion of patients who developed myoclonus after slow injection or pre-treatment with etomidate would be 25%. Assuming a 10% dropout rate, the total sample size was set at 189 (a minimum of 63 in each group). 
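The reported sample size can be cross-checked with the usual two-proportion formula; the short sketch below reimplements that calculation for the stated assumptions (two-sided alpha 0.05, power 80%, expected incidences 50% vs. 25%). The exact formula and rounding used by the authors are not stated, so the result may differ by a patient or two.

```python
# Sample-size check for comparing two proportions, using the assumptions stated
# in the text. The authors' exact formula/rounding is not given, so this is only
# an approximate cross-check of the reported 57 per group.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.50, 0.25))  # ~58 per group, close to the 57 reported
```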
Statistical analysis was performed using SPSS for Windows, version 17.0 (SPSS Inc., USA). Continuous variables-age, weight, induction dose, LOC dose, LOC time, duration of surgery, heart rate and blood pressure-are presented as mean (SD), and categorical variables-American Society of Anesthesiologists (ASA) class, sex distribution, myoclonus incidence, severity grade, and time of onset noted as occurrence of myoclonus within a categorical time frame (1 min, 1 to 2 min, and 2 to 3 min)-are presented as absolute numbers or percentages with 95% CI. The continuous data were first checked for normality using the Shapiro-Wilk test. All the continuous variables were found to be normally distributed. Therefore, all these variables were compared across the three groups using ANOVA. The F-values were found statistically insignificant for all continuous variables except LOC dose and LOC time. Tukey's multiple comparisons test was used for intergroup comparisons for LOC dose, as the variances were homogenous. Tamhane's T2 test was used for intergroup comparisons for LOC time, as the variances were not homogenous. Categorical variables were analyzed using the chi-square test. For all statistical tests, P < 0.05 was taken to indicate a significant difference. Results There were no significant differences in demographic profile, the mean duration of surgery, or dose of etomidate required for induction among the groups ( Table 1). The mean LOC dose was significantly lower in Group S than in Group P and Group C. There was no difference in mean LOC dose between Group P and Group C. The mean time until LOC in Group S was significantly longer than that in Group C and Group P. There was no difference in mean time until LOC between Group P and Group C (Table 1). The incidence, severity, and time of onset of myoclonus in the three groups were as shown in Table 2. The incidence of myoclonus was significantly lower in Group P (60.3%, 95% CI: 48.0-71.5) than in Group C (84.1%, 95% CI: 72.9-91.3, P = 0.003) and Group S (77.8%, 95% CI: 66.0-86.4, P = 0.034). There was no significant difference in the incidence of myoclonus between Groups C and S (P = 0.364). A moderate or severe grade of EIM (Grade 2 or 3) was observed in a significantly greater number of patients in Group C than in Group P and Group S. However, no significant difference in the number of patients experiencing moderate to severe EIM was observed between Group P and Group S. The time of onset of myoclonus, defined by the occurrence of myoclonus within the previously specified time frames was as shown in Table 2. Myoclonus occurred within 2 min from the start of induction in 44/53 patients (83.0%), 36/49 patients (73.5%), and 27/38 patients (71.1%) in Groups C, S, and P respectively, with no significant differences between the groups. Myoclonic movement was observed beyond the 3-min time frame of observation in four patients in Group P and six patients in Group S ( Table 2). Hemodynamic parameters such as mean arterial lpressure (Fig. 2) and heart rate (Fig. 3) were comparable across the three groups. None of the patients showed any incidence of oxygen desaturation. Discussion In the present study, we found that a small priming dose (0.03 mg/kg) of etomidate given prior to the induction dose was more effective than slow injection of etomidate (over 2 min) in reducing the incidence of EIM. The involuntary myoclonic movements seen with etomidate are believed to be caused by subcortical disinhibition [1]. 
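As a cross-check, the primary incidence comparison reported above can be reproduced from the published counts with a standard chi-square test; the sketch below uses the Group P versus Group C counts without a continuity correction, which matches the reported P value most closely.

```python
# Reproducing the primary incidence comparison (Group P vs. Group C) from the
# published counts: 38/63 vs. 53/63 patients with myoclonus. Without a
# continuity correction this gives P ~ 0.003, matching the reported value.
from scipy.stats import chi2_contingency

table = [[38, 63 - 38],   # Group P: myoclonus, no myoclonus
         [53, 63 - 53]]   # Group C: myoclonus, no myoclonus
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 4))  # ~8.90, ~0.0029
```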
A large dose of etomidate depresses the cortical activity before depressing subcortical activity, thereby causing myoclonus [8,12,13]. Both pretreatment (priming) with etomidate and slow administration may prevent the unsynchronized onset of drug action T1 T2 T3 T4 T5 T6 T7 T8 T11 110 90 70 50 30 10 Fig. 3. Heart rate (HR) changes at various time points in the three study groups. T0 is the baseline reading; T1, T2, T3, T4, and T5 are readings at 1, 2, 3, 4, and 5 min respectively from the start of induction. T6 is at tracheal intubation. T7, T8, and T11 are readings at 1, 2, and 5 min respectively after intubation. at different sites within the central nervous system that may be responsible for EIM [1,4]. In our study, the incidence of EIM in the priming group was significantly lower than in the control and slow injection groups. There was no significant difference in the incidence of EIM between the control and slow injection groups. The incidence of 84.1% (53/63 patients) EIM in the control group in our study is comparable to the results of other studies [2,10]. Aissaoui et al. [2] observed a significant reduction in the incidence of EIM following pretreatment with a low dose of etomidate (priming dose). In their study, 26% (6/23 patients) of patients in the priming group experienced myoclonus as compared to 87% (20/23 patients) in the control group (P < 0.001). Doenicke et al. [1], in a crossover study involving eight patients, found that the incidence of EIM was 25% (2/8 patients) when etomidate induction followed pretreatment with 0.05 mg/kg etomidate, as compared to 75% (6/8 patients) in the control group. In another study, three different pretreatment doses of etomidate were compared. The incidence of EIM was found to be 20% (4/20 patients), 25% (5/20 patients), and 35% (7/20 patients) for pretreatment with 0.03, 0.05, and 0.075 mg/kg etomidate, respectively [1]. We found a higher incidence of EIM in the priming group than reported by the other studies [1,2]. This could perhaps be a result of the fact that we did not premedicate our patients and administered a muscle relaxant at 180 s after the start of induction, thus allowing a longer time period for observation of myoclonus. In contrast, Doenicke et al. [1] premedicated their patients with oral midazolam 1 h prior to induction and administered a muscle relaxant at 90 s after induction, allowing a shorter time period for observation than ours. Aissaoui et al. [2] also used a smaller time period for observation, as they administered a muscle relaxant at 60 s after induction with etomidate. Do et al. [4] observed that slow injection of etomidate (over 2 min) resulted in a significantly lower incidence of EIM (28% [7/25 patients]) than giving it as a fast injection over 10 s (84% [21/25 patients], P < 0.001). Even though our study methodology was similar to that of Do et al., we observed a higher incidence (77.8% [49/63 patients]) of EIM in the slow injection group. This difference could perhaps be attributed to the fact that our study population was younger (mean [SD] age, 34.6 [10.8] vs. 52 [14] years). The age of a patient affects the risk of myoclonus: the younger the patient, the higher the risk [5]. We found that the percentage of patients who experienced moderate or severe EIM was significantly lower in the priming and slow injection groups than in the control group. Aissaoui et al. [2] also found the severity of myoclonus to be significantly lower with priming than in a control group. Do et al. 
[4] found the severity of myoclonus to be significantly lower in a slow injection group than in a fast injection group. We found no significant difference in the severity of myoclonus between the prim-ing and slow injection groups. To the best of our knowledge, no study has compared the effects of priming vs. slow injection of etomidate on the incidence of EIM. We are hence unable to compare our results with those of other authors. Do et al. [4] noted that in their slow injection group, in contrast with the fast injection group, LOC occurred before the complete induction dose was administered. This finding implies the possibility of a smaller dose requirement in the slow injection group [4]. In our study, the mean time until LOC was significantly longer in Group S than in Group C and Group P. By the time LOC occurred in Group S, only 10.8 (3.4) mg of etomidate had been administered, while the complete induction dose was 16.7 (3.4) mg. On the other hand, in the priming and control groups, the complete induction dose was administered before LOC was observed. The occurrence of EIM is dose-related, larger doses being associated with a higher incidence [1]. Had etomidate been administered until the onset of LOC, instead of as at a fixed dosage, it is possible that the incidence of EIM would have been reduced in the slow injection group. In the majority of the patients in all our study groups, the onset of myoclonus occurred within 2 min after the start of induction. However, four patients in the priming group and six patients in the slow injection group had myoclonus after 3 min from the start of induction. The period of observation for myoclonus varied from 1 to 3 min in most previous studies, suggesting that the actual incidence of EIM may be higher than that reported. Delayed myoclonic movements could go undetected due to masking by a neuromuscular blockade. To determine the true incidence of EIM, further studies are needed to identify the optimal observation period. Our study has several limitations. First, it was not a blind study. Due to the variable speed of injection (2 min vs. 20 s) of etomidate in the study groups, it was not possible to blind either the physicians or the patients. Second, the time of onset of myoclonus was categorized into 1-min intervals from the start of induction. Thus, the exact time of onset and duration of myoclonus were not noted. To conclude, we found that priming with etomidate significantly reduces the incidence and severity of myoclonus. Priming is more effective than slow injection in reducing the incidence of myoclonus, but their effects on the severity of myoclonus are comparable. Funding Statement No funding received. All expenses were borne by Vardhman Mahavir Medical College and Safdarjung Hospital.
Further Development of Near-Infrared Mediated Quantum Dots and Paclitaxel Co-loaded Nanostructured Lipid Carrier System for Cancer Theragnostic. Of colloidal systems, ceteris paribus, nanostructured lipid carriers are second to none in offering a single-unit platform for multifunctional benefits. Quantum dots are known to possess unique properties that make them ideal for imaging purpose and that they may be used for cancer detection. For several decades, paclitaxel has been the most effective drug against a wide range of solid tumours. Theragnostic nanomedicine provides a platform to monitor, evaluate, and individualize treatment in real time. Evaluation of cancer treatment outcome at an early stage therapy is key to increase survival prospects of a patient. Previously, a novel co-loaded nanostructured lipid carriers’ theragnostic system for parenteral administration was developed. The aim of this study was to further investigate the co-loaded nanostructured lipid carriers in order to provide interpretation necessary for preclinical elucidation of the formulation, in part. The co-loaded nanostructured lipid carriers were prepared by oil/water emulsification-solvent evaporation technique. In this study, stability and co-loaded nanostructured lipid carriers’ internalization by MCF 7 and HepG2 cells were investigated. The co-loaded nanostructured lipid carriers was stable at 4°C for 1 month. The formulation was successfully internalized by MCF-7 and HepG2 cells. Nevertheless, the co-loaded nanostructured lipid carrier was more apt for MCF-7 cells. This finding affirms the formulation to be the most appropriate for breast cancer treatment. In addition, if taken correctly by a patient for a month, the formulation would give true reflection of the contents’ amounts, the factor paramount to appropriate changes in treatment protocol. It can therefore safely be concluded that the co-loaded nanostructured lipid carrier formulation may be potentially an effective theragnostic translational system. Introduction The development of nanocarriers for concomitant therapeutic and imaging applications has recently won considerable attention. This strategy potentially allows an approach that combines treatment and diagnosis in individual patient. [1][2][3] Nanotheragnostic has evolved to become a promising strategy for personalized medicine. 4 Theragnostic nanomedicine could provide unique features and new methodologies to monitor, evaluate, and individualize treatment in real time. Imaging agent may act as a tool to optimize individual patient dosage schedules and levels for the benefit of the patient and to evaluate treatment outcome at an early stage therapy by allowing individualized 1 The School of Pharmaceutical Sciences, Shandong University, Jinan, China appropriate changes in treatment protocols, which are likely to increase the survival prospects for the patients. 5 Developed at the turn of the millennium, 6 nanostructured lipid carriers (NLCs) have improved characteristics in terms of drug loading and stability compared to solid lipid nanoparticles. 7 Paclitaxel (PTX), a potent antineoplastic agent derived from the bark of pacific yew tree, Taxus brevifolia, 8,9 is one of the most broadly active compounds being used against a wide spectrum of malignancies including cancers of breast, ovary, head and neck, and AIDS-related Kaposi's sarcoma. [10][11][12][13] Quantum dots (QDs) have highly sensitive fluorescent imaging properties with molar extinction coefficient as high as 0.5-5 Â 10 6 M À1 Ácm À1 . 
14 This efficient photon absorption leads to nanomaterials that are 10 to 50 times brighter and several thousand times more photostable than conventional imaging dyes. 15 Quantum dots offer a great promise as versatile probes integrating imaging and diagnosis. In light of bioimaging application, the qualities of wide absorbance range and narrow, highly symmetric emission spectra enable excitation of various sized QDs with a single wavelength resulting in distinct emission spectra with little overlapping. 16 This reduces cross-talk between channels and allows improved multicolor detection, thereby enhancing detection efficiency and capability. Imaging technique is chiefly limited by penetration depth due to the strong scattering properties of soft tissues. 17 There is a strong scattering in the visible region of the spectrum (<700 nm). However, the near-infrared region ((NIR; 700-900 nm) often called "biological window" for optical imaging, is characterized by a low absorption and scattering in soft tissues. 18 Nanoparticles with NIR excitation (650-900 nm) are highly preferable for in vivo imaging because of their penetration depth and minimized tissue autofluorescence compared with UV light. 19,20 Thus, NIR QDs are suitable and more efficient for in vivo fluorescent imaging. In previous study, a novel co-loaded NLC based on QDs and PTX 21 to be used as a parenteral multifunctional delivery system was fabricated. This present study sought to further investigate the co-loaded NLC in order to provide interpretation necessary for preclinical elucidation of the formulation, in part. In doing so, short-term stability and internalization of coloaded NLC by MCF-7 and HepG2 cells were investigated. It was hypothesized that the aforementioned investigations would augment the co-loaded NLC as an effective translation potential for cancer theragnostic. Preparation of Co-loaded (PTX and QDs) NLC and Blank NLC The desired amounts of GMS (37.5 mg), OA (14.03 mL), SPC (10 mg), PTX (3.5 mg), and 1.0 mL of 5 mM QDs were accurately weighed, measured, and quantitatively transferred into 2 mL eppendorf tube where they were dissolved in 1 mL of acetonitrile. The eppendorf tube was then submerged in a water bath at 80 C. The resulting organic phase was slowly (8 mL/h) injected by microsyringe pump (KD Scientific, Holliston, Massachusetts) into 10 mL of 0.5% wt/vol F68 aqueous solution, under mechanical agitation (RCT basic, Guangzhou, China) of 1000 rpm in a water bath at 80 C for 10 minutes to form a coarse emulsion. The warm primary coarse emulsion was further treated with a sonicator (>20 kHz) for 20 minutes to form a homogenous nanoemulsion. The resulting nanoemulsion was cooled down in an ice (0 C) bath to produce nanoparticle dispersion (PTX and QDs co-loaded NLC) and later stored at 4 C until use. The b-NLC was the one in which PTX was not added. 21 Physical Short-Term Stability Study of Co-loaded NLC In determination of physical short-term stability of the coloaded NLC formulation, the samples were stored in sealed amber colored glass vials at 4 C. After every week for 1 month of storage, the samples were evaluated for changes in particle size and encapsulation efficacy (EE). For EE measurement, the desired amount of co-loaded NLC was dispersed in 2.9 mL of 0.5 wt% Tween 80-phosphate buffered saline (pH 7.4) and agitated (XW-80A vortex, Instruments factory of Shanghai Medical University, China) for 3 minutes to dissolve the free drug. 
The resulting dispersion was centrifuged at 2500 rpm (3-30 K Sigma, Henderson Biomedical Ltd, London, United Kingdom) for 10 minutes at 4 C. Upon centrifugation, the amount of the soluble free drug in the supernatant was harvested and measured by HPLC. The HPLC assay (Agilent 1100 series, USA) was performed on a reverse phase C18 analytical column (4.6 Â 250 mm, pore size 5 mm, InertSustain, Tokyo, Japan). The mobile phase was a mixture of acetonitrile: water (65:35, vol/vol) delivered at a flow rate of 1.0 mL/min. Paclitaxel was detected at 227 nm with a variable wavelength detector. The calibration curve for quantification of PTX was linear (R 2 ¼ 0.9988) over a range of standard concentrations between 1.0 and 50 mg/mL. The EE was calculated according to the following formula: For particle size evaluation, the mean particle size of the coloaded NLC was established by photon correlation spectroscopy using Malvern Zetasizer (3000 HS, Malvern Instruments Ltd, Worcestershire, United Kingdom) at 25 C and 90 scattering angle. The samples were prepared in triplicates, measured, and averaged. Cell Maintenance and Cellular Uptake Study The cancer cell lines of MCF-7 and HepG2 were obtained from Shandong Institute of Immunopharmacology and Immunotherapy of Shandong University (Jinan, China). The MCF-7 cells and HepG2 cells were cultured in Roswell park memorial institute (RPMI)-1640 medium and Dulbecco's modified Eagle's medium (DMEM), respectively. All the media were maintained with 10% (vol/vol) fetal bovine serum, penicillin (100 UÁmL À1 ), and streptomycin (100 mgÁmL À1 ). The cells were grown at 37 C in a humidified atmosphere containing 5% carbon dioxide. For investigation of cellular internalization of the nanoparticle (co-loaded NLC), HepG2 and MCF-7 cell lines were utilized. In profile, HepG2 and MCF-7 cells were seeded (2 Â 10 5 cells per well) on coverslips in the 6-well plates each which included DMEM and RPMI-1640 medium, respectively. After 80% confluency, the medium was replaced with 1 mL of coloaded NLC for 3 hours. After 3 hours of incubation, the cells were washed with phosphate-buffered saline (PBS; pH 7.4) and immediately fixed in 75% methanol and 25% glacial acetic acid for 15 minutes at 37 C. Thereafter, washed twice with PBS (pH 7.4), air dried, and stained with Hoechst 33342 (a DNA specific fluorescent dye) for 15 minutes. The stained coverslips (treated cells) were then observed under a laser scanning confocal microscope (Carl Zeiss LSM 700, Zeiss, Illinois). Statistical Analysis All studies were independently repeated at least in triplicates, and data were presented as the mean + standard deviation. Statistical analysis of differences among various treatments was carried out using the 2-tailed unpaired Student t test. Statistical significance was considered at P < 0.05. b-NLC and Co-loaded NLC Preparations Representative photograph of Figure 1A (b-NLC) showed clearer nanoparticle dispersion. Representative photograph of Figure 1B (co-loaded NLC) indicated almost the same nanoparticle dispersion as shown in Figure 1A except with white colouring from PTX. These results were not surprising because the method of preparation was the same for both preparations except that b-NLC was without PTX compared with the other. 
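The encapsulation-efficiency formula itself is not reproduced in this excerpt. The indirect method described in the Methods (measuring the free drug in the supernatant by HPLC) is conventionally computed as EE% = (total drug − free drug) / total drug × 100, which the sketch below assumes; the free-drug value used is hypothetical.

```python
# Indirect encapsulation-efficiency (EE) calculation assumed from the described
# method: free (unencapsulated) PTX measured in the supernatant is subtracted
# from the total PTX added. The authors' exact formula is not shown in this
# excerpt, so this is the conventional form only.
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """Return EE as a percentage."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Example with the formulation's nominal PTX load (3.5 mg) and a hypothetical
# free-drug reading of 0.47 mg, which would give an EE close to the ~86.6%
# reported at week 0.
print(round(encapsulation_efficiency(3.5, 0.47), 1))  # ~86.6
```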
Stability Study of Co-loaded NLC As depicted in Figure 2 (stability study of co-loaded NLC as measured by particle size and EE every week for 1 month), the particle sizes were found to be 113.60 + 1.50 nm, 112.40 + 3.02 nm, 113.90 + 6.05 nm, 112.60 + 4.17 nm, and 114.90 + 2.26 nm, while EEs were found to be 86.59% + 1.41%, 85.49% + 1.25%, 87.81% + 2.63%, 85.17% + 0.56%, and 85.25% + 1.07%, respectively. These results showed almost the same measurement for each of the measured parameters. By statistically comparing the 2 parameters, there were insignificant changes of the parameters. Stability study of co-loaded NLC as measured by particle size and encapsulation efficacy every week for 1 month (n ¼ 3). The co-loaded NLC was stored at 4 C prior to analysis. NLC indicates nanostructured lipid carrier. In Vitro Cellular Uptake of Co-loaded NLC Figure 3 shows internalization of co-loaded NLC by MCF-7 and HepG2 cells following exposure at 37 C for 3 hours. The nuclei of both MCF-7 cells and HepG2 cells were stained blue. Co-loaded NLC was stained red. The combination of blue and red colours makes purple (merged). With a high degree of confidence, it can be said that the co-loaded NLC reached the nucleus. b-NLC and Co-loaded NLC Preparations By visual inspection, Figure 1, photographs of b-NLC and coloaded NLC, indicates that the nanoparticles were very nearly transparent in appearance. This is consistent with hydrodynamic diameters ranging from 50 to 200 nm. 22,23 Stability Study of Co-loaded NLC The purpose of stability testing is to provide proof on how the quality of active substance varies with time under the influence of a variety of environmental factors such as temperature and humidity. A short-term stability of co-loaded NLC dispersion was studied at 4 C every week for 1 month. Effects on particle size and EE were evaluated. The stability study of co-loaded NLC ( Figure 2) revealed insignificant (P > 0.05) changes in the stability parameters. This invariability in both parameters (particle size and EE) suggests that the co-loaded NLC could remain stable for a month at 4 C. This indicates that in at least a period of month, the co-loaded NLC would present nearly the exact original amounts of contents. Consequently, a patient taking the right dose would have true reflection of the quantities of the dosage unit and hence appropriate treatment protocols be taken for the patient. Generally, water in formulation triggers instability overtime due to increased mobility of molecules. To avert this, the product is usually lyophilized using mannitol as a cryoprotectant. 24,25 Although co-loaded NLC short-term stability was achieved for the period of 1 month, it is suggested that a lyophilized form of the formulation would be appropriate to cast-iron guarantee long-term stability. In Vitro Cellular Uptake of Co-loaded NLC Cellular internalization of co-loaded NLC was investigated by the use of MCF-7 and HepG2 cell lines. It can be seen that the co-loaded NLC was successfully internalized by both MCF-7 and HepG2 cells (Figure 3). The results not only indicate that co-loaded NLC can enter the cell but also importantly that it reaches the nucleus (Hoechst 33342-a DNA specific fluorescent dye 26 ). Considering 3-hour incubation period, it can be said that the internalization was fairly satisfactory since negatively charged nanoparticles prevent rapid cellular uptake due to repulsion emanating from negatively charged cellular membranes. 
In addition by visual inspection, internalization by MCF-7 cells looks clearer than that of HepG2 cells, probably due to the higher sensitivity of PTX to MCF-7 relative to HepG2 cells; the factor that may have led to PTX being approved by Food and Drug Administration as the first line of treatment for breast cancer. 27 This finding affirms co-loaded NLC as the formulation appropriate for breast cancer treatment. Conclusion The co-loaded NLC has demonstrated substantial short-term stability. The formulation was successfully internalized by MCF-7 and HepG2 cells. However, the co-loaded NLC was best suited for MCF-7 cells. This finding affirms the formulation to be the most appropriate for breast cancer treatment. It can therefore safely be concluded that the co-loaded NLC formulation may hold promise as an effective theragnostic translational system. Author's Note For this work, the thought, experimental work, data analysis, and manuscript writing were done by Livesey D. Olerile. There were no animals, human subjects, or medical records used in the study and as such there was no ethical issue.
Enhancing AMR-to-Text Generation with Dual Graph Representations Generating text from graph-based data, such as Abstract Meaning Representation (AMR), is a challenging task due to the inherent difficulty in how to properly encode the structure of a graph with labeled edges. To address this difficulty, we propose a novel graph-to-sequence model that encodes different but complementary perspectives of the structural information contained in the AMR graph. The model learns parallel top-down and bottom-up representations of nodes capturing contrasting views of the graph. We also investigate the use of different node message passing strategies, employing different state-of-the-art graph encoders to compute node representations based on incoming and outgoing perspectives. In our experiments, we demonstrate that the dual graph representation leads to improvements in AMR-to-text generation, achieving state-of-the-art results on two AMR datasets Introduction Abstract Meaning Representation (AMR; Banarescu et al. (2013)) is a linguistically-grounded semantic formalism that represents the meaning of a sentence as a rooted directed graph, where nodes are concepts and edges are semantic relations.As AMR abstracts away from surface word strings and syntactic structure producing a language neutral representation of meaning, its usage is beneficial in many semantic related NLP tasks, including text summarization (Liao et al., 2018) and machine translation (Song et al., 2019). The purpose of AMR-to-text generation is to produce a text which verbalises the meaning encoded by an input AMR graph.This is a challenging task as capturing the complex structural information stored in graph-based data is not trivial, as these are non-Euclidean structures, which implies that properties such as global parametrization, vector space structure, or shift-invariance do not hold (Bronstein et al., 2017).Recently, Graph Neural Networks (GNNs) have emerged as a powerful class of methods for learning effective graph latent representations (Xu et al., 2019) and graph-to-sequence models have been applied to the task of AMR-to-text generation (Song et al., 2018;Beck et al., 2018;Damonte and Cohen, 2019;Guo et al., 2019). In this paper, we propose a novel graph-tosequence approach to AMR-to-text generation, which is inspired by pre-neural generation algorithms.These approaches explored alternative (top-down, bottom-up and mixed) traversals of the input graph and showed that a hybrid traversal combining both top-down (TD) and bottom-up (BU) information was best as this permits integrating both global constraints top-down from the input and local constraints bottom-up from the semantic heads (Shieber et al., 1990;Narayan and Gardent, 2012). Similarly, we present an approach where the input graph is represented by two separate structures, each representing a different view of the graph.The nodes of these two structures are encoded using separate graph encoders so that each concept and relation in the input graph is assigned both a TD and a BU representation. 
Our approach markedly differs from existing graph-to-sequence models for MR-to-Text generation (Marcheggiani and Perez Beltrachini, 2018;Beck et al., 2018;Damonte and Cohen, 2019) in that these approaches aggregate all the immediate neighborhood information of a node in a single representation.By exploiting parallel and complementary vector representations of the AMR graph, our approach eases the burden on the neural model in encoding nodes (concepts) and edges (relations) in a single vector representation.It also elimi-nates the need for additional positional information (Beck et al., 2018) which is required when the same graph is used to encode both TD and BU information, thereby making the edges undirected. Our main contributions are the following: • We present a novel architecture for AMR-to-text generation which explicitly encodes two separate TD and BU views of the input graph. • We show that our approach outperforms recent AMR-to-text generation models on two datasets, including a model that leverages additional syntactic information (Cao and Clark, 2019). • We compare the performance of three graph encoders, which have not been studied so far for AMR-to-text generation. Related Work Early works on AMR-to-text generation employ statistical methods (Flanigan et al., 2016b;Pourdamghani et al., 2016;Castro Ferreira et al., 2017) and apply linearization of the graph by means of a depth-first traversal. Recent neural approaches have exhibited success by linearising the input graph and using a sequence-to-sequence architecture.Konstas et al. (2017) achieve promising results on this task.However, they strongly rely on named entities anonymisation.Anonymisation requires an ad hoc procedure for each new corpus.The matching procedure needs to match a rare input item correctly (e.g., "United States of America") with the corresponding part in the output text (e.g., "USA") which may be challenging and may result in incorrect or incomplete delexicalisations.In contrast, our approach omits anonymisation.Instead, we use a copy mechanism (See et al., 2017), a generic technique which is easy to integrate in the encoder-decoder framework and can be used independently of the particular domain and application.Our approach further differs from Konstas et al. (2017) in that we build a dual TD/BU graph representation and use graph encoders to represent nodes.Cao and Clark (2019) factor the generation process leveraging syntactic information to improve the performance.However, they linearize both AMR and constituency graphs, which implies that important parts of the graphs cannot well be represented (e.g., coreference). Several graph-to-sequence models have been proposed.Marcheggiani and Perez Beltrachini (2018) show that explicitly encoding the structure of the graph is beneficial with respect to sequential encoding.They evaluate their model on two tasks, WebNLG (Gardent et al., 2017) and SR11Deep (Belz et al., 2011), but do not apply it to AMR benchmarks.Song et al. (2018) and Beck et al. (2018) apply recurrent neural networks to directly encode AMR graphs.Song et al. (2018) 2018) use Graph Convolutional Networks (GCNs) to tackle the tasks of link prediction and entity classification on knowledge graphs.Damonte and Cohen (2019) show that off-theshelf GCNs cannot achieve good performance for AMR-to-text generation.To tackle this issue, Guo et al. 
(2019) introduce dense connectivity to GNNs in order to integrate both local and global features, achieving good results on the task.Our work is related to Damonte and Cohen (2019), that use stacking of GCN and LSTM layers to improve the model capacity and employ anonymization.However, our model is substantially different: (i) we learn dual representations capturing top-down and bottom-up adjuvant views of the graph, (ii) we employ more effective graph encoders (with different neighborhood aggregations) than GCNs and (iii) we employ copy and coverage mechanisms and do not resort to entity anonymization. Graph-to-Sequence Model In this section, we describe (i) the representations of the graph adopted as inputs, (ii) the model architecture, including the Dual Graph Encoder and (iii) the GNNs employed as graph encoders. Graph Preparation Let G = (V, E, R) denote a rooted and directed AMR graph with nodes v i ∈ V and labeled edges (v i , r, v j ) ∈ E, where r ∈ R is a relation type.We convert each AMR graph into an unlabeled and connected bipartite graph G t = (V t , E t ), transforming each labeled edge (v i , r, v j ) ∈ E into two unlabeled edges (v i , r), (r, v j ) ∈ E t , with |V t | = n + m and |E t | = 2m.This process, called Levi Transformation (Beck et al., 2018), turns original edges into nodes creating an unlabeled graph.For instance, the edge between semester and that with label :mod in Figure 1(b) is replaced by two edges and one node in 1(c): an edge between semester, and the new node :mod and another one between :mod and that.The new graph allows us to directly represent the relationships between nodes using embeddings.This enables us to encode label edge information using distinct message passing schemes employing different GNNs. G t captures a TD view of the graph.We also create a BU view of the graph G b = (V t , E b ), where each directed edge e k = (v i , v j ) ∈ E t becomes e k = (v j , v i ) ∈ E b , that is, we reverse the direction of original edges.An example of a sentence, its AMR graph and the two new graphs G t and G b is shown in Figure 1. Dual Graph Encoder We represent each node v i ∈ V t with a node embedding e i ∈ R d , generated from the node label.In order to explicitly encode structural information, our encoder starts with two graph encoders, denoted by GE t and GE b , that compute representations for nodes in G t and G b , respectively. Each GE learns node representations based on the specific view of its particular graph, G t or G b .Since G t and G b capture distinct perspectives of the graph structure, the information flow is prop-agated throughout TD and BU directions, respectively.In particular, for each node v i , the GE receives the node embeddings of v i and its neighbors, and computes its node representation: where Each node v i is represented by two different hidden states, h t i and h b i .Note that we learn two representations per relation and node of the original AMR graph.The hidden states h t i and h b i , and embedding e i contain different information regarding v i .We concatenate them building a final node representation: This approach is similar to bidirectional RNNs (Schuster and Paliwal, 1997).Bidirectional RNNs benefit from left-to-right and right-to-left propagation.They learn the hidden representations separately and concatenate them at the end.We perform a similar encoding: first we learn TD and BU representations independently, and lastly, we concatenate them. 
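As a concrete illustration of the graph preparation and dual representation described above, the sketch below applies the Levi transformation to a small set of labeled AMR edges and then reverses the edges to obtain the bottom-up view; apart from the semester/:mod/that fragment mentioned above, the concepts are invented for the example.

```python
# Minimal sketch of the graph preparation described above: each labeled edge
# (v_i, r, v_j) becomes a relation node r plus two unlabeled edges, giving the
# top-down graph G_t; reversing every edge of G_t gives the bottom-up graph G_b.
# The toy AMR fragment below is invented for illustration.
labeled_edges = [
    ("possible-01", ":ARG1", "go-02"),
    ("go-02", ":ARG0", "person"),
    ("go-02", ":time", "semester"),
    ("semester", ":mod", "that"),
]

def levi_transform(edges):
    g_t = []
    for i, (src, rel, dst) in enumerate(edges):
        rel_node = f"{rel}#{i}"          # one relation node per original edge
        g_t.append((src, rel_node))
        g_t.append((rel_node, dst))
    return g_t

g_t = levi_transform(labeled_edges)          # top-down view
g_b = [(dst, src) for (src, dst) in g_t]     # bottom-up view: reversed edges

print(len(g_t))   # 2m = 8 unlabeled edges
print(g_t[:2])    # [('possible-01', ':ARG1#0'), (':ARG1#0', 'go-02')]
```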
The final representation r_i is employed as the sequence input of a bidirectional LSTM. For each AMR graph, we generate a node sequence by depth-first traversal order. In particular, given a representation sequence from r_1 to r_n, the hidden forward and backward states of r_i are computed by a forward LSTM (LSTM_f) and a backward LSTM (LSTM_b), respectively. Note that, for the backward LSTM, we feed the reversed input, i.e. the order from r_n to r_1. Lastly, we obtain the final hidden state by concatenating the two. The resulting hidden state h_i encodes the information of both preceding and following nodes. Stacking layers was demonstrated to be effective in graph-to-sequence approaches (Marcheggiani and Perez Beltrachini, 2018; Koncel-Kedziorski et al., 2019; Damonte and Cohen, 2019) and allows us to test for their contributions to the system performance more easily. We employ different GNNs for both graph encoders (Section 3.3). Figure 2 shows the proposed encoder architecture. Graph Neural Networks The GEs incorporate, in each node representation, structural information based on both views of the graph. We explore distinct strategies for neighborhood aggregation, adopting three GNNs: Gated Graph Neural Networks (GGNN, Li et al. (2016)), Graph Attention Networks (GAT, Veličković et al. (2018)) and Graph Isomorphic Networks (GIN, Xu et al. (2019)). Each GNN employs a specific message passing scheme, which allows capturing different nuances of structural information. Gated Graph Neural Networks GGNNs employ gated recurrent units to encode node representations, reducing the recurrence to a fixed number of steps. In particular, in the l-th layer of a GGNN each node aggregates the representations of its immediate neighborhood N(i) through a parameter matrix W_1 and updates its state with a gated recurrent unit (GRU, Cho et al., 2014). Different from other GNNs, GGNNs use back-propagation through time (BPTT) to learn the parameters. GGNNs also do not require constraining the parameters to ensure convergence. Graph Attention Networks GATs apply attentive mechanisms to improve the exploitation of non-trivial graph structure. They encode node representations by attending over their neighbors, following a self-attention strategy in which the attention coefficients α_ij are computed from the concatenated node representations; here σ is the activation function, ‖ denotes concatenation, and W_2 and a are model parameters. The virtue of the attention mechanism is its ability to focus on the most important parts of the node neighborhood. In order to learn attention weights from different perspectives, GATs can employ multi-head attention. Graph Isomorphic Networks GIN is a GNN as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler and Lehman, 1968) in representing isomorphic and non-isomorphic graphs with discrete attributes. In its l-th layer, a node representation is summed with those of its aggregated neighbors and the result is transformed by h_W, a multi-layer perceptron (MLP). In contrast to other GNNs, which combine the node feature with its aggregated neighborhood feature, GINs do not apply the combination step and simply aggregate the node along with its neighbors. Each of these GNNs applies a different approach to learning structural features from graph data and has achieved impressive results on many graph-based tasks (Li et al., 2016; Veličković et al., 2018; Xu et al., 2019). An attention-based unidirectional LSTM decoder is used to generate sentences, attending to the hidden representations of edges and nodes. In each step t, the decoder receives the word embedding of the previous word (during training, this is the previous word of the reference sentence; at test time it is the previously generated word) and has the decoder state s_t. The attention distribution a_t is calculated as in See et al. (2017), where s_c is the coverage vector and v, W_h, W_s, w_c and b are learnable parameters. The coverage vector is the accumulation of all attention distributions so far.
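To make the aggregation schemes above concrete, here is a minimal PyTorch sketch of a GIN-style layer over an edge list, followed by the concatenation r_i = [e_i; h^t_i; h^b_i] used by the dual encoder. It is a simplified illustration under invented dimensions and toy edges, not the authors' implementation.

```python
# Simplified, self-contained PyTorch sketch of one GIN-style layer and the dual
# top-down / bottom-up concatenation described above. Illustrative only; it is
# not the authors' implementation (no stacking, attention, GRU updates, etc.).
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, edge_index):
        # edge_index: LongTensor of shape (2, num_edges) with rows (src, dst).
        src, dst = edge_index
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])  # sum over in-neighbours
        return self.mlp((1 + self.eps) * h + agg)

num_nodes, dim = 5, 16
emb = torch.randn(num_nodes, dim)                       # node embeddings e_i
edges_td = torch.tensor([[0, 1, 1, 3], [1, 2, 3, 4]])   # toy top-down edge list
edges_bu = edges_td.flip(0)                             # bottom-up = reversed edges

gin_td, gin_bu = GINLayer(dim), GINLayer(dim)
h_t = gin_td(emb, edges_td)
h_b = gin_bu(emb, edges_bu)
r = torch.cat([emb, h_t, h_b], dim=-1)                  # r_i = [e_i; h^t_i; h^b_i]
print(r.shape)                                          # (5, 48)
```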
Copy and Coverage Mechanisms Previous works (Damonte and Cohen, 2019; Cao and Clark, 2019) use anonymization to handle names and rare words, alleviating data sparsity. In contrast, we employ copy and coverage mechanisms to address out-of-vocabulary issues for rare target words and to avoid repetition (See et al., 2017). The model is trained to optimize the negative log-likelihood, where Y = y_1, ..., y_|Y| is the sentence, X is the AMR graph and θ represents the model parameters. Data We use two AMR corpora, LDC2015E86 and LDC2017T10. In these datasets, each instance contains an AMR graph and a sentence. Table 1 shows the statistics for both datasets. Figure 3 shows the distribution of the AMR graph diameters and node degrees for both datasets. The AMR graph structures are similar for most examples. Note that 90% of the AMR graphs in both datasets have a diameter of at most 11 and 90% of the nodes have a degree of 4 or less. Very structurally similar graphs pose a difficulty for the graph encoder by making it harder to learn the differences between their similar structures. Therefore, the word embeddings used as additional input play an important role in helping the model to deal with language information. That is one of the reasons why we concatenate this information in the node representation r_i. Experiments and Discussion Implementation Details We extract vocabularies (size of 20,000) from the training sets and initialize the node embeddings from GloVe word embeddings (Pennington et al., 2014) on Common Crawl. Hyperparameters are tuned on the development set of the LDC2015E86 dataset. For the GIN, GAT, and GGNN graph encoders, we set the number of layers to 2, 5 and 5, respectively. To regularize the model, during training we apply dropout (Srivastava et al., 2014) to the graph layers with a rate of 0.3. The graph encoder hidden vector sizes are set to 300 and the hidden vector sizes for the LSTMs are set to 900. The models are trained for 30 epochs with early stopping based on the development BLEU score. For our models and the baseline, we used a two-layer LSTM decoder. We use Adam optimization (Kingma and Ba, 2015) as the optimizer with an initial learning rate of 0.001 and 20 as the batch size. Beam search with a beam size of 5 is used for decoding. Results We call the models G2S-GIN (isomorphic encoder), G2S-GAT (graph-attention encoder), and G2S-GGNN (gated-graph encoder), according to the graph encoder utilized. As a baseline (S2S), we train an attention-based encoder-decoder model with copy and coverage mechanisms, and use a linearized version of the graph generated by depth-first traversal order as input. We compare our models against several state-of-the-art results reported on the two datasets (Konstas et al., 2017; Song et al., 2018; Beck et al., 2018; Damonte and Cohen, 2019; Cao and Clark, 2019; Guo et al., 2019). We use both BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) as evaluation metrics. In order to mitigate the effects of random seeds, we report the averages for 4 training runs of each model along with their standard deviation. Table 2 shows the comparison between the proposed models, the baseline and other neural models on the test sets of the two datasets. For both datasets, our approach substantially outperforms the baselines. In LDC2015E86, G2S-GGNN achieves a BLEU score of 24.32, 4.46% higher than Song et al.
(2018), who also use the copy mechanism.This indicates that our architecture can learn to generate better signals for text generation.On the same dataset, we have competitive results to Damonte and Cohen (2019).However, we do not rely on preprocessing anonymisation not to lose semantic signals.In LDC2017T10, G2S-GGNN achieves a BLEU score of 27.87, which is 3.33 points higher than Damonte and Cohen (2019), a state-of-the-art model that does not employ external information.We also have competitive results to Guo et al. (2019), a very recent state-of-the-art model. We also outperform Cao and Clark (2019) improving BLEU scores by 3.48% and 4.00%, in LDC2015E86 and LDC2017T10, respectively.In contrast to their work, we do not rely on (i) leveraging supplementary syntactic information and (ii) we do not require an anonymization preprocessing step.G2S-GIN and G2S-GAT have comparable performance on both datasets.Interestingly, G2S-GGNN has better performance among our models.This suggests that graph encoders based on gating mechanisms are very effective in text generation models.We hypothesize that the gating mechanism can better capture longdistance dependencies between nodes far apart in the graph.Additional Training Data Following previous works (Konstas et al., 2017;Song et al., 2018;Guo et al., 2019), we also evaluate our models employing additional data from English Gigaword corpus (Napoles et al., 2012).We sample 200K Gigaword sentences and use JAMR4 (Flanigan et al., 2016a) to parse them.We follow the method of Konstas et al. (2017), which is fine-tuning the model on the LDC2015E86 training set after every epoch of pretraining on the Gigaword data.G2S-GGNN outperforms others with the same amount of Gigaword sentences (200K), achieving a 32.23 BLEU score, as shown in Table 3.The results demonstrate that pretraining on automatically generated AMR graphs enhances the performance of our model. Ablation Study In Table 4, we report the results of an ablation study on the impact of each component of our model on the development set of LDC2017T10 dataset by removing the graph encoders.We also report the number of parameters (including embeddings) used in each model.The first thing we notice is the huge increase in metric scores (17% in BLEU) when applying the graph encoder layer, as the neural model receives signals regarding the graph structure of the input.The dual representation helps the model with a different view of the graph, increasing BLEU and METEOR scores by 1.04 and 0.68 points, respectively.The complete model has slightly more parameters than the model without graph encoders (57.6M vs 61.7M). Impact of Graph Size, Arity and Sentence Length The good overall performance on the datasets shows the superiority of using graph encoders and dual representations over the sequential encoder.However, we are also interested in estimating the performance of the models concerning different data properties.In order to evaluate how the models handle graph and sentence features, we perform an inspection based on different sizes of graph diameter, sentence length, and max node out-degree.Table 5 shows METEOR5 scores for the LDC2017T10 dataset. 
The performances of all models decrease as the diameters of the graphs increase.G2S-GGNN has a 17.9% higher METEOR score in graphs with a diameter of at most 7 compared to graphs with diameters higher than 13.This is expected as encoding a bigger graph (containing more information) is harder than encoding smaller graphs.Moreover, 71% of the graphs in the training set have a diameter less than or equal to 7 and only 2% have a diameter bigger than 13 (see Figure 3).Since the models have fewer examples of bigger graphs to learn from, this also leads to worse performance when handling graphs with higher diameters.We also investigate the performance with respect to the sentence length.The models have better results when handling sentences with 20 or fewer tokens.Longer sentences pose additional challenges to the models. G2S-GIN has a better performance in handling graphs with node out-degrees higher than 9.This indicates that GINs can be employed in tasks where the distribution of node degrees has a long tail.Surprisingly, S2S has a better performance than G2S-GGNN and G2S-GAT when handling graphs that contain high degree nodes. Semantic Equivalence We perform an entailment experiment using BERT (Devlin et al., 2019) fine-tuned on the MultiNLI dataset (Williams et al., 2018) as a NLI model.We are interested in exploring whether a generated sentence (hypothesis) is semantically entailed by the reference sentence (premise).In a related text generation task, Falke et al. (2019) employ NLI models to rerank alternative predicted abstractive summaries.Nevertheless, uniquely verifying whether the reference (REF) entails the generated sentence (GEN) or vice-versa (GEN entails REF) is not sufficient.For example, suppose that "Today Jon walks" is the REF and "Jon walks" is the GEN.Even though REF entails GEN, GEN does not entail REF, that is, GEN is too general (missing information).Furthermore, suppose that "Jon walks" is the REF and "Today Jon walks" is the GEN, GEN entails REF but REF does not entail GEN, that is, GEN is too specific (added information).Therefore, in addition to verify whether the reference entails the generated sentence, we also verify whether the generated sentence entails the reference. Table 6 shows the average probabilities for entailment, contradiction and neutral classes on the LDC2017T10 test set.All G2S models have higher entailment compared to S2S.G2S-GGNN has 33.5% and 5.2% better entailment performances than S2S, when REF entails GEN and GEN entails REF, respectively.G2S models also generate sentences that contradict the reference sentences less.This suggests that our models are capable of capturing better semantic information from the graph generating outputs semantically related to the reference sentences. Human Evaluation To further assess the quality of the generated sentences, we conduct a human evaluation.We employ the Direct Assessment (DA) method (Graham et al., 2017) via Amazon Mechanical Turk.Using the DA method inspired by Mille et al. 
(2018), we assess two quality criteria: (i) meaning similarity: how close in meaning the generated text is to the gold sentence; and (ii) readability: how well the generated sentence reads (is it good fluent English?). We randomly select 100 sentences generated by S2S and G2S-GGNN and randomly assign them to HITs (following Mechanical Turk terminology). Human workers rate the sentences according to meaning similarity and readability on a 0-100 rating scale. The tasks are executed separately and workers were first given brief instructions. For each sentence, we collect scores from 5 workers and average them. Models are ranked according to the mean of sentence-level scores. We apply a quality control step, filtering out workers who do not score some faked and known sentences properly. GOLD: China and Kyrgyzstan agreed in a joint communique that terrorism, separatism and extremism still pose major threats to regional security and stability. S2S: In the joint communique, China and Kyrgyzstan still agreed to threaten terrorism, separatism, extremism and regional stability. Song et al. (2018): In a joint communique, China and Kyrgyzstan have agreed to still be a major threat to regional security, and regional stability. G2S-GGNN: At a joint communique, China and Kyrgyzstan agreed that terrorism, separatism and extremism are still a major threat to region security and stability. As shown in Figure 4, G2S-GGNN improves the quality of the generated sentences with respect to S2S. The Pearson correlations between the meaning similarity and readability scores and the METEOR scores (used here as METEOR is a sentence-level metric) are 0.50 and 0.22, respectively. Semantic Adequacy We also evaluate the semantic adequacy of our model (how well does the generated output match the input?) by comparing the number of added and missing tokens that occur in the generated versus reference sentences (GOLD). An added token is one that appears in the generated sentence but not in the input graph. Conversely, a missing token is one that occurs in the input but not in the output. In GOLD, added tokens are mostly function words while missing tokens are typically input concepts that differ from the output lemma. For instance, in Figure 1, "there" and "of" are added tokens while "person" is a missing token. As shown in Table 8, G2S approaches outperform the S2S baseline. G2S-GIN is closest to GOLD with respect to both metrics, suggesting that this model is better able to generate novel words to construct the sentence and captures a larger range of concepts from the input AMR graph, covering more information. Manual Inspection Table 7 shows sentences generated by S2S, Song et al. (2018), G2S-GAT, and the reference sentence. The example shows that our approach correctly verbalises the subject of the embedded clause "China and ... agreed that terrorism, separatism and extremism SUBJ ... pose major threats to ...", while S2S and Song et al. (2018) are fooled by the fact that agree frequently takes an infinitival argument which shares its subject ("China ... SUBJ agreed to threaten / have agreed to be a major threat"). While this is a single example, it suggests that dual encoding enhances the model's ability to take into account the dependencies and the graph structure information, rather than the frequency of n-grams.
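For the Semantic Equivalence analysis described above, a hedged sketch of the two-direction entailment check is shown below, using a publicly available MNLI-tuned model from Hugging Face. The checkpoint name is an assumption for illustration only; the paper used BERT fine-tuned on MultiNLI, and the exact model is not specified in this excerpt.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumed checkpoint; labels: contradiction / neutral / entailment
tok = AutoTokenizer.from_pretrained(MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_probs(premise, hypothesis):
    """Class probabilities for 'premise entails hypothesis'."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    return {nli.config.id2label[i].lower(): probs[i].item() for i in range(probs.numel())}

ref, gen = "Today Jon walks.", "Jon walks."
print("REF -> GEN:", entailment_probs(ref, gen))  # REF entails GEN, so GEN may be too general
print("GEN -> REF:", entailment_probs(gen, ref))  # lower entailment: information is missing
```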
Conclusion We have studied the problem of generating text from AMR graphs. We introduced a novel architecture that explicitly encodes two parallel and complementary representations of the graph (top-down and bottom-up). We showed that our approach outperforms state-of-the-art results in AMR-to-text generation. We provided an extensive evaluation of our models and demonstrated that they are able to achieve the best performance. In the future, we will consider integrating deep generative graph models to express probabilistic dependencies among AMR nodes and edges.
Figure 1: (a) an example sentence, (b) its original AMR graph (G) and different graph perspectives: (c) top-down (G_t) and (d) bottom-up (G_b).
Figure 2: Dual Graph Encoder. The encoder receives the two graph views and generates structural node representations that are used by the decoder. Representations in blue, yellow and orange are e_i, h_i^t and h_i^b, respectively.
Figure 3: Distribution of the AMR graph diameter (left) and node degree (right) in the training set for the LDC2015E86 (red) and LDC2017T10 (blue) datasets.
Figure 4: Human evaluation of the sentences generated by S2S and G2S-GGNN models. Results are statistically significant with p < 0.05, using the Wilcoxon rank-sum test.
Table 1: Overview of the LDC2015E86 and LDC2017T10 datasets. The values are calculated for all splits (train, development and test sets). DAG stands for directed acyclic graph. An attention-based unidirectional LSTM decoder is used to generate sentences, attending to the hidden representations of edges and nodes. In each step t, the decoder receives the word embedding of the previous word (during training, this is the previous word of the reference sentence; at test time it is the previously generated word), and has the decoder state s_t. The attention distribution a_t is calculated as in See et al. (2017).
Table 2: BLEU and METEOR scores on the test set of the LDC2015E86 and LDC2017T10 datasets. Hyperparameters are tuned on the development set of the LDC2015E86 dataset. For the GIN, GAT, and GGNN graph encoders, we set the number of layers to 2, 5 and 5, respectively.
Table 3: Results on the LDC2015E86 test set when models are trained with additional Gigaword data.
Table 4: Results of the ablation study on the LDC2017T10 development set.
Table 7: An example of an AMR graph and generated sentences. GOLD refers to the reference sentence.
Table 8: Fraction of elements in the output that are not present in the input (ADDED) and fraction of elements in the input graph that are missing in the generated sentence (MISS), for the test set of LDC2017T10. The token lemmas are used in the comparison. GOLD refers to the reference sentences.
2019-09-01T08:22:38.000Z
2019-08-14T00:00:00.000
{ "year": 2019, "sha1": "ce07df583f7a0e975175c4efc8b93a436376fef8", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1909.00352", "oa_status": "GREEN", "pdf_src": "ArXiv", "pdf_hash": "ce07df583f7a0e975175c4efc8b93a436376fef8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
21034554
pes2o/s2orc
v3-fos-license
Genetic Outlier Detection for a Robust Support Vector Machine Support vector machine (SVM) has a strong theoretical foundation and also achieved excellent empirical success. It has been widely used in a variety of pattern recognition applications. Unfortunately, SVM also has the drawback that it is sensitive to outliers and its performance is degraded by their presence. In this paper, a new outlier detection method based on genetic algorithm (GA) is proposed for a robust SVM. The proposed method parallels the GA-based feature selection method and removes the outliers that would be considered as support vectors by the previous soft margin SVM. The proposed algorithm is applied to various data sets in the UCI repository to demonstrate its performance. Introduction Support vector machine (SVM) was proposed by Vapnik et al. [1,2]; it implements structural risk minimization [3]. Beginning with its early success with optical character recognition [1], SVM has been widely applied to a range of areas [4][5][6]. SVM possesses a strong theoretical foundation and enjoys excellent empirical success in pattern recognition problems and industrial applications [7]. However, SVM also has the drawback of sensitivity to outliers and its performance can be degraded by their presence. Even though slack variables are introduced to suppress outliers [8,9], outliers continue to influence the determination of the decision hyperplane because they have a relatively high margin loss compared to those of the other data points [10]. Further, when quadratic margin loss is employed, the influence of outliers increases [11]. Previous research has considered this problem [8][9][10][12][13][14]. In [12], an adaptive margin was proposed to reduce the margin losses (hence the influence) of data far from their class centroids. The margin loss was scaled based on the distance between each data point and the center of the class. In [13,14], the robust loss function was employed to limit the maximal margin loss of the outlier. Further, a robust SVM based on a smooth ramp loss was proposed in [8]. It suppresses the influence of outliers by employing the Huber loss function. Most works have aimed at reducing the effect of outliers by changing the margin loss function; only a small number have aimed at identifying the outliers and removing them in the training set. For example, Xu et al. proposed an outlier detection method using convex optimization in [10]. However, their method is complex and relaxation is employed to approximate the optimization. In this paper, a new robust SVM based on a genetic algorithm (GA) [15] is proposed. The proposed method locates the outliers among the samples and removes them from the training set. The basic idea of this SVM parallels that of genetic feature selection, wherein GAs locate the irrelevant or redundant features and remove them by mimicking natural evolution. In the proposed method, GA detects and removes outliers that would be considered as support vectors by the previous soft margin SVM. The remainder of this paper is organized as follows. In Section 2, we offer preliminary information on GAs. In Section 3, we describe the proposed method. Section 4 details the experimental results that demonstrate the performance and our conclusions are presented in Section 5. Genetic Algorithms Genetic Algorithms (GAs) are engineering models obtained from the natural mechanisms of genetics and evolution and are applicable to a wide range of problems. 
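Before turning to the genetic algorithm, a small illustration of the outlier sensitivity discussed above may help. The sketch below (an assumed setup, not from the paper) flips a fraction of training labels and shows how a soft-margin SVM responds with more support vectors, which is exactly the effect the proposed method tries to counter.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

clean = SVC(kernel="linear", C=1.0).fit(X, y)

# Reverse ~10% of the labels to simulate outliers in the training set.
y_noisy = y.copy()
flip = rng.choice(len(y), size=int(0.10 * len(y)), replace=False)
y_noisy[flip] = 1 - y_noisy[flip]
noisy = SVC(kernel="linear", C=1.0).fit(X, y_noisy)

print("support vectors (clean):", clean.n_support_.sum())
print("support vectors (noisy):", noisy.n_support_.sum())  # typically much larger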
GAs typically maintain and manipulate a population of individuals that represents a set of candidate solutions for a given problem. The viability of each candidate solution is evaluated based on its fitness and the population evolves better solutions via selection, crossover, and mutation. In the selection process, some individuals are copied to produce a tentative offspring population. The number of copies of an individual in the next generation is proportional to the individual's relative fitness value. Promising individuals are therefore more likely to be present in the next generation. The selected individuals are modified to search for a global optimal solution using crossover and mutation. GAs provide a simple yet robust optimization methodology [16]. Genetic Outlier Selection For Support Vector Machines In this section, a new outlier detection method based on a genetic algorithm is proposed. First, dual quadratic optimization is formulated in a soft margin SVM and support vector candidates are selected from the training set based on the Lagrange multiplier. Then, the candidates are divided into either support vectors or outliers using GA. Figure 1 presents the overall procedure for the proposed method. Suppose that M data points {x 1 , x 2 , ..., x M } (x i ∈ R n ) are given, each of which is labeled with a binary class y i ∈ {−1, 1}. The goal of the SVM is to design a decision hyperplane that maximally separates two classes and where W and w 0 are the weight and bias of the decision function, respectively. The SVM is trained by solving where T is a slack variable and implies a margin loss at each data point. C is a constant and denotes the penalty for a misclassification. The above formulation can be recast into subject to where Λ = [λ 1 , λ 2 , ..., λ M ] T is a Lagrange multiplier vector and the nonnegative number λ i is a Lagrange multiplier associ- ated with x i . In a standard SVM, the data points with positive λ i are support vectors and contribute to the decision hyperplane according to The interesting point is that if outliers are included in the training set, the outliers are likely to have positive margin loss and contribute to the hyperplanes. Further, the outliers tend to have relatively large margin loss and significantly influence the determination of the hyperplane, thereby making the SVM sensitive to the presence of outliers. In this paper, a robust SVM design scheme is proposed based on GA. First, a set of support vector candidates is prepared by collecting the data points with positive Lagrange multipliers. As stated, not only the support vectors but also some outliers may be included in S. The goal of support vector selection is to determine a subset S v ⊆ S that includes only support vectors such that the classification accuracy of the SVM is maximized while the number of data points in the subset card(S v ) is minimized, where card(·) denotes the cardinality. This is a bi-criteria combinatorial optimization problem and is usually intractable because it has an NP-hard search space. The implementation of the support vector selection parallels the feature selection method. The use of GA is a promising solution to this bi-criteria optimization problem because the feature selection methods based on GA outperform the non-GA feature selection methods [16][17][18]. To retain the support vectors and discard the outliers in subset S v , the GA chromosome is represented by a binary string consisting of ones and zeros, as illustrated in Figure 2. 
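A minimal sketch (assumed implementation detail, not the authors' code) of the two ingredients just described: the candidate set S, built from the data points that end up with positive Lagrange multipliers in an initial soft-margin SVM, and the binary keep/discard chromosome defined over those candidates.

```python
import numpy as np
from sklearn.svm import SVC

def candidate_support_vectors(X, y, C=1.0):
    """Indices of training points with positive Lagrange multipliers (candidate set S)."""
    svm = SVC(kernel="linear", C=C).fit(X, y)
    return svm.support_

def random_chromosome(n_candidates, rng):
    """A binary string: '1' keeps the candidate as a support vector, '0' discards it."""
    return rng.integers(0, 2, size=n_candidates)

def decode(chromosome, candidate_idx):
    """Map a chromosome back to the retained training-sample indices (the subset S_v)."""
    return candidate_idx[np.asarray(chromosome, dtype=bool)]
```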
In this figure, "1" and "0" indicate whether the associated data points should be retained or discarded in the set of support vectors, respectively. Genetic operators are applied to generate new chromosomes in the new generation. There are two types of genetic operators: crossover and mutation. The purpose of the crossover is to exchange information among different potential solutions. The mutation introduces genetic material that may have been missing from the initial population or lost during crossover operations [19]. In this paper, one-point crossover and bit-flip mutation [20] are employed as genetic operators. When a validation set V is denoted as V = {v 1 , v 2 , ...v m }, the fitness function of a chromosome is computed using where In this equation, m is the number of validation data points and α is a design coefficient. The fitness function actually implies the bi-criteria that the classification accuracy of the SVM should be maximized while the number of data points in the subset card(S v ) should be minimized. The first term is aimed at improving the classification performance and the second term is aimed at the compactness of the SVM. The coefficient α plays an essential role in striking a balance between the classification performance and the classification cost. The parameters of the GA and SVM are given in Table 1. In Table 1, α is set to 0.1 to emphasize the classification accuracy over the classification cost. Experimental Results In this section, the validity of the proposed scheme is demonstrated by applying it to five databases of the UCI repository [21]. The UCI repository has been widely used within the pattern recognition community as a benchmark problem for machine learning algorithms. The five databases are the Wine, Haberman, Transfusion, Garman, and Pima sets. All the sets except the Wine set are binary; first and second classes are used in the Wine set. The databases used in the experiments are summarized in Table 2. In this experiment, the databases are randomly divided into four equal-sized subsets. Two subsets are used for training and the remaining two subsets are used for validation and testing. The training and validation sets are used to design a robust SVM and the test sets are used to evaluate the performance of the algorithms. To demonstrate the robustness of the proposed method against outliers, approximately 5% and 10% of the training samples were randomly selected from each class and their labels were reversed. Five independent runs were performed for statistical verification; the linear kernel was used for SVM. The performances of the proposed method and the general soft margin SVM were compared in terms of average testing accuracy and the number of support vectors. The results are summarized in Tables 3 and 4. In the tables, the proposed robust SVM is denoted as GASVM. It is observed that the standard SVM exhibits a marginally better performance than that of the proposed method for only the Australian database in the non-outlier case. In the majority of the cases, the proposed method achieves superior classification accuracy using a smaller number of support vectors than that of the standard SVM. That is, the proposed method is less sensitive to outliers and requires a reduced number of support vectors compared to the standard SVM. 
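The following sketch illustrates the bi-criteria fitness and the two genetic operators named above (one-point crossover and bit-flip mutation). The exact fitness formula is not reproduced in this excerpt, so combining validation accuracy with a cardinality penalty weighted by alpha is an assumption made here for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def fitness(chromosome, X, y, candidate_idx, X_val, y_val, alpha=0.1):
    # Assumed form: validation accuracy minus alpha times the fraction of kept candidates.
    keep = candidate_idx[np.asarray(chromosome, dtype=bool)]
    if keep.size == 0 or np.unique(y[keep]).size < 2:
        return 0.0
    svm = SVC(kernel="linear", C=1.0).fit(X[keep], y[keep])
    accuracy = svm.score(X_val, y_val)
    compactness = keep.size / candidate_idx.size
    return accuracy - alpha * compactness

def one_point_crossover(a, b, rng):
    cut = rng.integers(1, len(a))           # single crossover point
    return np.concatenate([a[:cut], b[cut:]]), np.concatenate([b[:cut], a[cut:]])

def bit_flip_mutation(chromosome, rng, rate=0.01):
    mask = rng.random(len(chromosome)) < rate
    out = chromosome.copy()
    out[mask] = 1 - out[mask]
    return out
```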
Further, by comparing the cases with 5% outliers and 10% outliers as indicated in Figure 3 and Figure 4, it can be observed that in the standard SVM, the greater the number of outliers that are included in the training set, the greater the number of support vectors generated and hence, the more the performance is degraded. In the proposed method, however, less sensitivity is exhibited toward the outliers and the increase in support vectors is limited. The reason for the improved performance of the proposed method is that only useful and discriminatory support vectors are selected and the brunt of the outlier influence on the SVM training is removed. To highlight the robustness of the proposed method, the test accuracy of the GASVM was normalized with respect to that of the standard SVM and the relative performances of the two SVMs are presented in Figure 5. In this figure, the length of the bar l denotes where C GASV M and C SV M are the correct classification rates of GASVM and standard SVM, respectively. From this figure, it is clear that the greater the number of outliers included, the higher the relative excellence of the proposed method over the standard method. Conclusions In this paper, we presented a new method for detecting outliers to improve the robustness of SVM. The proposed method detected outliers within the support vectors assigned by soft margin SVM using GA, and demonstrated recognition performance and a total number of support vectors superior to those of previous methods. Using the proposed method, the robustness of SVM was improved and the SVM was simplified by outlier deletion. The validity of the suggested method was demonstrated through experiments with five databases from the UCI repository.
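A short sketch (assumptions noted) of the outlier-injection protocol used in these experiments: a fixed fraction of each class is selected at random and its labels are reversed. The relative-performance measure at the end is an assumed normalisation, since the exact formula for the bar length l is not reproduced in this excerpt.

```python
import numpy as np

def flip_labels_per_class(y, fraction, rng):
    """Reverse the labels of `fraction` of the samples in each class (binary labels {0, 1} assumed)."""
    y_noisy = y.copy()
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        chosen = rng.choice(idx, size=int(round(fraction * idx.size)), replace=False)
        y_noisy[chosen] = 1 - y_noisy[chosen]
    return y_noisy

def relative_performance(acc_gasvm, acc_svm):
    # Assumed definition: improvement of GASVM accuracy relative to the standard SVM.
    return (acc_gasvm - acc_svm) / acc_svm
```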
2018-01-23T22:39:43.048Z
2015-06-30T00:00:00.000
{ "year": 2015, "sha1": "81f16e2857584ecb8a160ccbcbefc1ff1458fc2f", "oa_license": "CCBYNC", "oa_url": "http://www.ijfis.org/journal/download_pdf.php?doi=10.5391/IJFIS.2015.15.2.96", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e4b50e69cfd418649171061e9db18f4fa7928c30", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
265103716
pes2o/s2orc
v3-fos-license
Effects of heat and personal protective equipment on thermal strain in healthcare workers: part B—application of wearable sensors to observe heat strain among healthcare workers under controlled conditions Purpose As climate change accelerates, healthcare workers (HCW) are expected to be more frequently exposed to heat at work. Heat stress can be exacerbated by physical activity and unfavorable working requirements, such as wearing personal protective equipment (PPE). Thus, understanding its potential negative effects on HCW´s health and working performance is becoming crucial. Using wearable sensors, this study investigated the physiological effects of heat stress due to HCW-related activities. Methods Eighteen participants performed four experimental sessions in a controlled climatic environment following a standardized protocol. The conditions were (a) 22 °C, (b) 22 °C and PPE, (c) 27 °C and (d) 27 °C and PPE. An ear sensor (body temperature, heart rate) and a skin sensor (skin temperature) were used to record the participants´ physiological parameters. Results Heat and PPE had a significant effect on the measured physiological parameters. When wearing PPE, the median participants’ body temperature was 0.1 °C higher compared to not wearing PPE. At 27 °C, the median body temperature was 0.5 °C higher than at 22 °C. For median skin temperature, wearing PPE resulted in a 0.4 °C increase and higher temperatures in a 1.0 °C increase. An increase in median heart rate was also observed for PPE (+ 2/min) and heat (+ 3/min). Conclusion Long-term health and productivity risks can be further aggravated by the predicted temperature rise due to climate change. Further physiological studies with a well-designed intervention are needed to strengthen the evidence for developing comprehensive policies to protect workers in the healthcare sector. Supplementary Information The online version contains supplementary material available at 10.1007/s00420-023-02022-2. Introduction Climate change is one of the major challenges of our time (Trenberth 2018;UNEP 2021).In Germany, the average temperature has increased by more than 1.5 °C since 1881 and extreme weather events such as hot summer days (> 30 °C) and heat waves (multiple consecutive days with > 30 °C) will occur more frequently (Papalexiou et al. 2018;IPCC 2019;van Rüth et al. 2019).Climate change affects many occupations in terms of heat stress, e.g.road and agricultural workers.However, indoor temperatures may also reach subjectively uncomfortable but also healthrelevant levels as air-conditioned workplaces are not common in Germany (Lenzer et al. 2020).In Germany, the German Society for Occupational and Environmental Medicine (Deutsche Gesellschaft für Arbeitsmedizin und Umweltmedizin e. V. (DGAUM)) defines "working under heat stress" as climatic stress in the workplace caused by an extreme increase in indoor/outdoor temperature due to heat (DGAUM 2012).Increased indoor temperatures due to hot weather are known to cause heat strain (Simister and Cooper 2005).Amongst others, healthcare workers (HCW) such as nurses have an increased risk of heat strain due to comparatively high job-related physical activity (Schoierer et al. 2019).This may be further aggravated by the use of personal protective equipment (PPE) (Dorman and Havenith 2009). 
Heat strain is the overall physiological response resulting from heat stress.If the core body temperature exceeds normal levels (36.8-37.5 °C) and the thermoregulatory system fails to compensate the heat stress, the risk of heat strain increases (Mazlomi et al. 2017;Ebi et al. 2018).A core body temperature above 38 °C can lead to fatigue, headache, dizziness, loss of appetite and rapid breathing (Gostimirovic et al. 2020).Heat stress triggers the production of stress hormones such as adrenaline, noradrenaline, and cortisol (McMorris et al. 2006).This may explain in part the physiological responses of heat stress such as an increase of core and peripheral temperature, heart rate, and sweating (Mazlomi et al. 2017).Multiple studies have shown that long-term exposure to heat reduces the work capacity and increase mortality rates within the general population (Rowell 1974;Arbury et al. 2014;Steul et al. 2018;an der Heiden et al. 2019;Bisolli et al. 2019;Vicedo-Cabrera et al. 2021). Although body temperature and other physiological parameters can help to detect symptoms of heat strain, their monitoring in occupational settings is still challenging.As wearable devices are becoming more advanced, the online monitoring of selected health parameters may help to better understand and monitor changes due to increased temperature (Notley et al. 2018).In general, wearable devices consist of three main components: (1) the hardware measuring physiological and/or activity data, (2) the communication hardware and software to relay data to a (remote) processing unit and (3) the data analysis techniques to extract clinically relevant information from the obtained data (Patel et al. 2012).Their current capabilities include physiological, biochemical and motion sensing (Teng et al. 2008;Bonato 2010).However, wearable devices to assess physiological parameters related to heat strain due to occupational heat stress among HCW, have not been used thus far. Therefore, the goal of this study was the feasibility assessment of wearable devices for the monitoring of physiological parameters of HCW during controlled heat stress conditions.Furthermore, the individual effects of high temperatures and PPE on these parameters were investigated. Study design and experimental setup This study was designed as a crossover trial, which took place between October 2021 and March 2022.The experiments took place in climate chamber (5 × 3 × 2.2 m (L/W/H)) with temperature and humidity control.The interior included a table with a chair, a treadmill, a 170 patient bed and patient dummy (CLA1®, 21 kg, Coburger Lehrmittelanstalt, Coburg, Germany).A photo of the climate chamber setting can be found in the supplemental information (Fig. S1). 
Four independent experiments were performed by each the participants: (1) at 22.0 °C without PPE (NN), at 22 °C with PPE (NP), (3) at 27 °C without PPE (WN) and (4) at 27 °C with PPE (WP).The selection of the higher temperature was based on the guidelines of the German Federal Ministry of Labour and Social Affairs, recommending that the ambient temperature in workspaces should not exceed 26 °C (BMAS 2022).However, we chose 27 °C in order to achieve a sufficient temperature difference to the reference temperature of 22 °C.The relative humidity was set to 40% for all four scenarios to limit the number of experimental variables.Participant wore a standard hospital gown and additional PPE (disposable plastic gown, FFP2 face mask, face shield and gloves) if applicable.To alleviate a habituation or training effect, the setting´s sequence was randomized.During the experiment participants performed HCW-related activities within a given time.These activities include, amongst others, walking up to 5.8 km/h on a treadmill, mobilization and washing of a dummy and the simulation of administrative tasks.The full protocol can be found in the supplementary information.Except for temperature and PPE, all other conditions were identical.One experiment lasted 3.5 h.Each participant conducted all experiments at the same time of the day (either mornings or afternoons) and the interval between two individual experiments was at least seven but maximum ten days.These prerequisites aimed to minimize interfering effects of circadian rhythms and differences in the participants´ physiological condition (Goel et al. 2013). Participants Inclusion of eligible participants was based on the medical history, by considering the following criteria: (1) age 18-60 years; (2) medical background or experience as HCW, since they had to perform several HCW-related activities; (3) no sensitivity against heat (e.g., dizziness, redness on the skin); (4) no obesity (BMI < 30); and (5) no severe chronic diseases.Prior to each experiment, participants were asked about their general state of health.Furthermore, heart rate, blood pressure and body temperature (forehead) were measured to exclude illnesses that may interfere with the study. Monitoring of the environmental conditions A QUESTemp 34 Heat Stress Monitor ® (Quest Technologies, Wisconsin, USA) was utilized for assessing the heat stress by the environmental conditions in the climate chamber.The instrument was placed on a table at a height of approximately one meter.It was also ensured that this device was placed away from any barriers that might block radiant heat or flow.The participants were also requested not to move too close to this instrument in order to minimize variations in temperature and radiant heat.The sampling interval was one minute. Monitoring of physiological parameters A cosinuss° Two ® (Cosinuss GmbH, Munich, Germany) in-ear sensor was utilized to monitor participants´ in-ear temperature (IET) and heart rate (HR) during the experiments (Fig. 1).For each participant, the appropriate sensor size (small or medium) was selected during the anamnesis.The sampling interval was one second.After 14 min of recording, the data were sent to the cosinuss° Health cloud server via a Gateway within one minute.This cycle was repeated during the whole experiment. Moreover, skin temperatures were monitored using Thermochron iButton ® temperature loggers (CK electronic GmbH, Cologne, Germany) (Fig. 
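To make the randomised crossover sequence described above concrete, here is a minimal, illustrative sketch (not the study's actual assignment procedure) that shuffles the order of the four settings independently for each participant; the seed and participant identifiers are assumptions.

```python
import random

SETTINGS = ["NN (22 °C)", "NP (22 °C + PPE)", "WN (27 °C)", "WP (27 °C + PPE)"]

def assign_sessions(participant_ids, seed=42):
    """Every participant performs all four settings, in an individually shuffled order."""
    rng = random.Random(seed)
    plan = {}
    for pid in participant_ids:
        order = SETTINGS[:]
        rng.shuffle(order)
        plan[pid] = order
    return plan

print(assign_sessions(["P01", "P02"]))
```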
1).The sensors were placed at five different central and peripheral locations (left/ right infraclavicular, belly and left/right midthigh).The sampling interval was one minute.Finally, to evaluate the par-ticipants´ clinical state, their weight, forehead temperature, blood pressure, and heart rate were measured before and after each experiment using a digital body scale, an infrared thermometer and a medical blood pressure cuff, respectively. Data handling and statistical analysis Prior to the analysis, all data were pre-processed.In detail, sections outside the trials, including those with apparent sensor malfunctions, were removed and the remaining measured data were used for the analysis.Furthermore, only physiological data measured under the dry bulb indoor temperature between 20 and 24 °C for normal conditions and between 25 and 29 °C for warm conditions were included in the analysis.For in-ear temperature and heart rate, one-minute medians were calculated to match the interval of the skin sensors.Additionally, only heart rate results with corresponding signal quality index above 50 were used.The signal quality index is an algorithm quantitatively assesses the functional near-infrared spectroscopy signal quality on a numerical scale from 0 (very low quality) to 100 (very high quality).Moreover, longitudinal data from the five skin temperature measurements were corrected using an external calibration.For the calculation of the mean skin temperature (MST) for one experiment, the results of all five sensors were averaged. Each physiological parameter was tested for normality using the Kolmogorov-Smirnov test.The analysis of variance (ANOVA) test was utilized to compare the measured data.All parameters were found to be non-normally distributed.Therefore, the Wilcoxon rank-sum test was considered.Since this method is not robust against systematic interindividual variability, a linear mixed-effects model analysis was used to properly consider the interindividual differences.The mixed model approach was broadly used in previous accelerometer studies (Van Dongen et al. 2004;Haapalainen et al. 2008;Pfeiffer et al. 2009;Bolton et al. 2021).In particular, we were interested in making conclusions about how the trial settings over time (fixed effects) impact the measured physiological parameters by controlling the individual differences (random effects).Alpha (α) level at 0.05 was set for all statistical tests.All p-values were two-tailed.The data cleaning process and statistical analysis were performed using R statistical software (version 4.1.2.® ). Participants´ clinical characteristics Eighteen participants (n = 11 females, n = 7 males) completed all four experimental sessions (with a mean interval of 8 days).The majority of them are actively working as HCW (nurses).Their mean age was 35.2 ± 10.4 years old (22-57).Pre-post-increases were observed in participants´ weight and forehead temperature.Table 1 presents all participants' clinical characteristics from the medical history. Measured data from the used instruments For trials under normal temperature (NN and NP), the average dry bulb temperature was 23.0 °C (heat index of 21.2 °C) and the value was 27.3 °C (heat index of 27.2 °C) for the trials under warm temperatures (WN and WP).The measured mean relative humidity for all settings was 34 ± 5.2%.The results of in IET, HR and MST during all experimental sessions are shown in Fig. 
2: The data are presented as one-minute medians calculated from the results of all participants (n = 18). Additionally, a non-time-resolved representation of the data as box-whisker plots can be found in the supplemental information (Fig. S2). Concerning participants' IET, the median values during experimental sessions under warm temperatures were higher than those under normal temperatures (37.3 °C, 37.4 °C, 37.8 °C, and 37.9 °C for NN, NP, WN and WP, respectively). Using the identical order, the median values for participants' HR were 76, 78, 79, and 83 beats/min. The median values for participants' MST were 32.0 °C, 32.4 °C, 33.0 °C, and 33.4 °C for NN, NP, WN and WP, respectively. Using the ANOVA (i.e., the Wilcoxon rank-sum test), the differences in each physiological parameter between all four trial settings were found to be significant (p < 0.001). However, as this method is not robust against systematic interindividual variability, the data were re-analyzed and predicted using the linear mixed-model regression. Discussion In our study, we focused on HCW since there is a substantial research gap on heat strain for this occupation. We postulated that the heat strain among HCW generated by their physical activities is exacerbated by the heat stress due to increased indoor temperatures. This situation can be further aggravated by the wearing of PPE, as it has been mandatory when caring for patients with SARS-CoV-2 or other infectious diseases. Based on these hypotheses, our cross-over study revealed that the combination of internal and external heat stress while conducting HCW-related activities in the climate chamber induces a significant increase in all observed physiological parameters. This increase was particularly high when performing the experimental scenario wearing PPE at 27 °C. Such findings support previous studies concerning the adverse effects of occupational heat strain, particularly those attributable to the wearing of PPE (Dorman and Havenith 2009; Eggenberger et al. 2018; Foster et al. 2020b). The measurement of the IET as a proxy of the body temperature and of HR using wearables was based on existing studies, as it has been shown that such physiological parameters can provide indicators of health status and have considerable diagnostic value (Eggenberger et al. 2018). Additionally, we realized that measuring participants' MST is also important, as it is considered an indicator of thermal sensation. Although the thermoregulatory mechanisms of healthy humans allow only minor changes in core temperature, peripheral skin temperatures respond clearly to changes in ambient temperature or metabolism (Krishnamurthy et al. 2017). Besides the increase in the observed physiological parameters (IET, HR and MST), participants frequently reported drinking more water, feeling tired, and sweating excessively during the experiments with PPE (Quartucci et al., Effects of heat and personal protective equipment on thermal strain in health care workers-Part A: Application of a standardized protocol and assessment of subjective well-being, submitted for publication). It is well known that heat stress reduces the human capability to perform activities at full capacity due to physiological dysfunction (Russo et al. 2019; Foster et al. 2020b). In the study by Gostimirovic et al.
concerning heat stress on human cardiovascular functions, a vigorous, long-term impairment of physiological parameters did lead to several health issues such as heart attacks, malignant cardiac arrhythmias, thromboembolic diseases and heat-induced sepsis like shock.(Gostimirovic et al. 2020). As climate change progresses, the frequency of hot days with uncomfortable indoor temperatures is expected to increase.This situation can lead to heat strain events when the core body temperature exceeds its normal level, resulting from a total heat load exceeding the capacity for heat dissipation.This may cause long-term health and productivity risks with devastating economic consequences (Russo et al. 2019). Finally, the feasibility of using Cosinuss° Two in-ear sensor ® to observe the physiological response which is attributable as heat stress was investigated.While some studies described the advantages of this device (Burgos et al. 2020;Ellebrecht et al. 2022), some evaluated the inaccuracy of using this wearable, particularly in measuring core body temperature (Roossien et al. 2020(Roossien et al. , 2021)).Consequently, we used the terminology of IET instead of core body temperature.Nevertheless, the IET is a good proxy of the core body temperature.Since the goal of our study is observing the physiological responses of HCW under different experimental conditions, this device is suitable as a monitoring tool for effects caused by thermal stress.Furthermore, the potential measurement error due to wind factor was likely minimized due to controlled experimental environment in the climate chamber. Strength and limitations To our knowledge, this is the first experimental study concerning the heat stress measurement by conducting simulations of the HCW-related activities in a controlled climatic environment.Besides supporting existing knowledge, this study provides valuable information about planning heat stress experiments using realistic but controlled conditions.Furthermore, the feasibility of using wearables to assess heat stress was demonstrated.A limiting factor is the temperature measurement in the ear, which is a proxy of the core body temperature.However, we considered this more feasible and non-invasive compared to other methods for the determination of the core body temperature.Furthermore, data quality of the ear sensors was impaired in cases when the sensor was not correctly placed in the ear or slipped out.This happened more frequently when the participants had to take off FFP2 masks.In this case, the experiment was stopped for a brief moment and a member of the study team reinserted the sensor.At last, the temperature and humidity in the climate chamber were not as stable as intended.However, the temperature difference between the settings was significantly higher than its variance.For relative humidity, the variance likely did not have an influence on the results. 
Conclusions In summary, our results suggest that the combination of internal heat stress triggered by high physical activity and external heat stress induced by increased environmental heat appears to be related to health and productivity losses of workers engaged in the healthcare sector. This situation can be worsened by wearing additional PPE, which was required under particular working conditions. This was supported by both participants' perceptions and physiological measurements. All nations, particularly during a worldwide pandemic such as COVID-19, are dependent on HCW. Consequently, their health and welfare are of paramount importance for sustained health stability. Unfortunately, HCW are likely at high risk of the health burden due to excessive occupational heat exposure. A lack of ventilation or non-air-conditioned buildings, as is common in Germany, can aggravate this situation. As a consequence, adequate cooling provisions or other mitigation strategies should be implemented in order to reduce potential heat strain in HCW (Foster et al. 2020a; Lou et al. 2021; Bongers et al. 2022). Further research concerning the current and future risks of occupational heat exposure is crucial to develop comprehensive evidence-based policies for protecting HCW from the adversities of heat stress. We hope that the results of this study will help policymakers establish appropriate interventions based on HCW's work-related health hazards due to heat stress.
Fig. 1: Wearable sensors used during the trials. Cosinuss° Two in-ear sensor® (A) to monitor body temperature and heart rate and Thermochron iButton® (B) to record peripheral temperatures.
Fig. 2: One-minute median in-ear temperature (IET, A), heart rate (HR, B) and mean skin temperature (MST, C) calculated from the results of all participants. The applied settings were (a) 22 °C (NN), (b) 22 °C and PPE (NP), (c) 27 °C (WN) and (d) 27 °C and PPE (WP).
Table 1: Participants' characteristics at the time of the preliminary examination (n = 18).
2023-11-11T06:18:33.491Z
2023-11-10T00:00:00.000
{ "year": 2023, "sha1": "08fd773fb94a117a6940cb9977993717d7215ae9", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00420-023-02022-2.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c923fd3e7bc965162f183c165395f28c0b7c838d", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
213474026
pes2o/s2orc
v3-fos-license
Aircraft Engines Remaining Useful Life Prediction with an Improved Online Sequential Extreme Learning Machine: The efficient data investigation for fast and accurate remaining useful life prediction of aircraft engines can be considered a very important task for maintenance operations. In this context, the key issue is how an appropriate investigation can be conducted to extract important information from data-driven sequences in high dimensional space in order to guarantee a reliable conclusion. In this paper, a new data-driven learning scheme based on an online sequential extreme learning machine algorithm is proposed for remaining useful life prediction. Firstly, a new feature mapping technique based on stacked autoencoders is proposed to enhance feature representations through an accurate reconstruction. In addition, to address dynamic programming based on environmental feedback, a new dynamic forgetting function based on the temporal difference of recursive learning is introduced to enhance the dynamic tracking ability for newly coming data. Moreover, a new update selection strategy is developed in order to discard unwanted data sequences and to ensure the convergence of the training model parameters to their appropriate values. The proposed approach is validated on the C-MAPSS dataset, where experimental results confirm that it yields satisfactory accuracy and efficiency of the prediction model compared to other existing methods. Introduction Health state prediction in complex equipment or subsystems based on conventional methods such as physical modeling approaches has become a very difficult task. Due to the need for deep knowledge of system components and their inner interactions, the modeling process complexity has increased and become almost impossible to manage. Even if the final model can be prepared under limited conditions, results might mislead the prediction or not be accurate enough due to poor generalization. Nowadays, machine learning applications for Remaining Useful Life (RUL) estimation have received growing importance due to the availability of heterogeneous data, which in turn has motivated researchers to boost the conventional RUL prediction paradigm with a variety of prediction approaches. Continuous improvements of machine learning models make their use in Prognostic Health Management (PHM) more relevant [1]. Machine learning allows modeling system behavior by extracting important patterns from the retrieved data, even without prior knowledge of the internal characteristics. Unlike conventional paradigms, machine learning techniques aim to reduce modeling complexity with less human intervention and lower computational costs. For instance, Xiong Li et al. [2] used a Hidden Semi Markov Model (HSMM) to train an SVM for RUL prediction. García et al. [3] designed a hybrid SVM-based swarm intelligence technique searching for the best training coefficients during RUL prediction of aircraft engines. Saidi et al. [4] combined Spectral Kurtosis (SK) with SVM to construct an RUL predictor with a more meaningful feature mapping for wind turbine high-speed shaft bearing health prognosis. Zheng et al. [5], based on a hybrid model, constructed a training framework for the ELM model. They adopted time window feature scaling as a preprocessing and appropriate feature selection step to guarantee an accurate prediction of RUL. Ordóñez et al.
[6] proposed a hybrid autoregressive model combined with an improved SVM using genetic algorithm to build several estimation algorithms for early RUL prediction. Chen et al. [7] used a SVM-based similarity approach for RUL prediction with the same C-MAPPS dataset. It should be mentioned that all cited works could be classified as a hybrid model that aims to accurately predict the RUL by an unsupervised training or preprocessing of training data before a "single-batch" supervised training. However, time-varying data and parameter update is not considered in these cases, which is incompatible with data-driven prediction models able to address dynamic health deterioration of equipment according to internal or environmental conditions. Neural networks with mini batch training are able to interact with flowing sequences of data and update the prediction model to satisfy changes in newly arrived ones. In [8], a new data-driven approach is introduced by Ben Ali et al. by training a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network with Weibull Distribution (WD) to avoid time domain fluctuation during RUL prediction. Al-Dulaimi et al. [9] developed a mini batch hybrid deep NN that takes two paths for RUL estimation; the multidimensional feature extraction based on Long Short Term Memory (LSTM) and convolutional NN and the prediction path via a fusion algorithm. Wen et al. [10] constructed feature mapping and training schemes based on ensemble residual CNN and validated their training model with the K-fold cross-validation method after constructing several learners to predict RUL accurately. Xiang et al. [11] simplified the RUL prediction process of aircraft engines by the direct use of raw sensors measurements without a prior expertise in signal processing using a novel deep convolutional NN combined with a time window feature extractor. The prediction approaches used in these works are based on mini batch training interacting with data-driven sequences. Many training models were developed with different paradigms such as hybrid, ensemble, and deep learning attempting to access more meaningful data representations through an accurate prediction. However, the effect of dynamic changes at the level of the training mini batches of data on the training models is not discussed in this case, which may allow the weights and biases to diverge in some mini batches to under-fit gradually the NN. In other cases, the use of old mini batch training algorithms for NN such as the Backpropagation (BP) algorithm can increase computational costs and time consumption. Through this analysis, the main challenges for RUL predictions task are: • Update the training model based on data variation during time (adaptive learning). • Decide whether the newly arrived data is important (new) for the training model or not. • Make the training model in line with the actual health state of the equipment by giving more attention to newly acquired data and discarding gradually the old ones. In the recent decade, ELM has been widely used for prediction purposes due to the fast training and fewer parameters tuning based on the best fit of a linear function approximation [12]. Zhou et al. [13] proposed a Stacked ELM (S-ELM) where a stack of small ELMs is specially designed for solving large and complex data problems. Ben Ali et al. [14] presents a new unsupervised learning classification tool of extracted data based on the Adaptive Resonance Theory 2 (ART2) for high-speed shaft bearing prognosis. Li et al. 
[15] proposed an improved OS-ELM which is one of the ELM variants with an adaptive forgetting factor and an update selection strategy to predict gas utilization ratio using time-varying data. Yang et al. [16] developed a new approach of regularization of the recursive learning of the OS-ELM to reduce over-fitting and both empirical and structural risks of the prediction model. Lu et al. [17] enhanced the recursive learning of the OS-ELM by introducing ensemble Kalman filter propagation for output weights tuning of NN under RUL prediction of aircraft engines. Yin et al. [18] proposed a (TD-RLS(λ)) with a forgetting factor to enhance the RLS adaptation to linear function approximation by updating its parameters based on environmental feedback. OS-ELM can address online learning in real time such as our RUL prediction problem without iterative tuning, which strongly contribute to computational costs reduction. Training rules of OS-ELM allows recursive learning with any driven chunks of data easily even with different or fixed sizes. In general, one of the limitations of OS-ELM or ELM algorithms is the interpolation caused by the pseudo inverse of the matrix [19], which makes ELM variants suffering from structural risks. In this paper, an improved data-driven approach based on OS-ELM is proposed for RUL estimation to enhance the novelty of the prediction algorithm and its adaptability to time-varying data. In fact, the main contributions of this work attempting to solve adaptive learning and ill-posed problems are: • SAEs based OS-ELM is modified for best features extraction and selection via unsupervised learning. • An OS-ELM is introduced. Tikhonov regularization is involved to reduce structural risk and over-fitting by minimizing the norm of the training weights. Dynamic forgetting of old data, USS and TD error objective functions are all integrated into both SAEs and OS-ELM to achieve better accuracy and adaptability to newly arrived data. The proposed approach is applied to the public dataset of the turbofan engine [20], where obtained results compared to other recent used approaches in the literature show higher performance. The remainder of this work is organized as follows: In Section 2, a general description of the turbofan engine dataset is presented. Section 3 elaborates on the proposed approach used in this paper. Section 4 illustrates and discusses the experimental results where the performances of the proposed learning scheme are showcased and compared to other data-driven methods. Finally, Section 5 concludes with a summary. Dataset Description The C-MAPSS dataset consists of four different subsets, where each dataset consists of multiple multivariate time series and describes different operating conditions and faults modes for different engines. Each dataset is divided into training and testing sets. In our work, we have used the subset FD001 and only a single operating condition and one fault mode are considered. At the beginning of each time series, the engine is considered as normally functioning according to initial conditions and at a certain level, it starts deteriorating gradually by losing its performances until the failure. Each dataset contains 26 columns which are considered as the input features of the prediction model, each column contains information about engine number, time cycles, sensors measurements, and operational conditions of the operating engine [21]. 
The dataset describes the life cycles of 100 different engines with a large amount of training samples whose characteristics change over time. The reason behind studying this type of data is to address dynamic programming and adaptive learning to solve higher dimensional space problems. Figure 1 illustrates an example of sensor measurement variation and its corresponding RUL degradation function for the first engine from the chosen dataset, where the different colors represent different sensors. Figure 2 indicates the methodology adopted for RUL prediction in this work. Essentially there are two distinctive parts: dataset preparation and prediction model training. Dataset Preparing The preparation of the dataset is first initialized with an appropriate feature selection where only the most significant and sensitive features are chosen. Each one must describe a certain aspect of the degradation level or gradual time-varying behavior in each time cycle. Measurements that do not represent any variation during operating cycles will not contribute to the learning process. In fact, their non-correlation with the desired RUL can disturb the training model and make it more sensitive to over-fitting and other ill-posed problems. In our experiments, we chose only 10 parameters for RUL prediction, whose indices are: {7, 8, 9, 12, 14, 16, 17, 20, 25, 26}. These features are then scaled to the interval [0-1] using a min-max normalization procedure before training or testing. Finally, each time series is labeled according to the piecewise linear RUL function as proposed in [1,22] and shown previously in Figure 1.
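A sketch (not the authors' exact pipeline) of the preparation steps just listed: keep the selected columns, min-max scale them to [0, 1], and build the piecewise linear RUL target. The early-life RUL cap (here 125 cycles) and the interpretation of the indices as 1-based file columns are assumptions, since neither is stated in this excerpt.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

SELECTED = [7, 8, 9, 12, 14, 16, 17, 20, 25, 26]  # column indices quoted in the text (assumed 1-based)

def load_fd001(path, rul_cap=125):
    df = pd.read_csv(path, sep=r"\s+", header=None)
    df = df.rename(columns={0: "engine", 1: "cycle"})
    features = df.iloc[:, [i - 1 for i in SELECTED]].copy()
    features[:] = MinMaxScaler().fit_transform(features)            # scale to [0, 1]
    max_cycle = df.groupby("engine")["cycle"].transform("max")
    rul = (max_cycle - df["cycle"]).clip(upper=rul_cap)             # piecewise linear RUL target
    return features.to_numpy(), rul.to_numpy(), df["engine"].to_numpy()
```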
Prediction Model Training The training of the RUL prediction model passes through two different steps: training of the SAEs and of the OS-ELM. In this paper, both unsupervised and supervised learning are performed based on the proposed algorithm. Therefore, before going any further, we first explain the fundamentals of the basic OS-ELM. Then, the proposed OS-ELM is explained in detail for both unsupervised and supervised learning strategies. Basic OS-ELM Since ELM learns a large amount of data at once, the output weights β of the hidden layer are also tuned only once and cannot be updated. Therefore, a new problem of weight updating appears when data change with time. Consequently, OS-ELM was introduced as an ELM variant that relies on online learning with chunks of varied or fixed size, and it can also learn data that arrive consecutively, one instance at a time [23]. • Calculate the initial hidden layer output H_k (k = 0) following basic ELM theory as presented in Equation (1), where G can be generated independently from the training data according to any continuous bounded piecewise activation function. • Calculate the first output weight matrix β_k (k = 0) as shown in Equation (2). • Recursively update β for newly coming mini batches. Proposed OS-ELM In the proposed algorithm we have added a regularization parameter C to reduce empirical and structural risks [16] during the initialization phase. Therefore, Equation (3) is changed into Equation (8) given below [25], where C ∈ [0; +∞]. After the training model is initialized, the USS-based TD objective function is proposed to adapt the RLS weights to the new data variations [18]. Furthermore, based on [8,18,26], a new DFF formula is proposed as illustrated in Equation (10), so Equations (4) and (5) are changed, respectively, into (9) and (10). Training of the Proposed Network SAEs are serially trained autoencoders. The encoded features of an autoencoder are considered as the inputs of the next one, as shown in Figure 3. Both the unsupervised and the supervised training in our proposed approach are carried out with fixed sizes of mini batches to verify the objective function of the TD error in Equation (10) and to avoid mismatched dimension errors for programming reasons.
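A condensed sketch of a regularised OS-ELM regressor of the kind described above: a single random-feature hidden layer, a ridge-regularised initial solution, and a recursive least-squares update per mini batch with an optional scalar forgetting factor. The paper's specific dynamic forgetting function and update-selection rule are not reproduced here; this is only the generic recursion under stated assumptions.

```python
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden, C=1e3, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.standard_normal((n_inputs, n_hidden))   # random input weights
        self.b = rng.standard_normal(n_hidden)                # random hidden biases
        self.C = C
        self.beta = None                                       # output weights
        self.P = None                                          # inverse covariance for RLS

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activation

    def fit_initial(self, X, T):
        H = self._hidden(X)
        A = H.T @ H + np.eye(H.shape[1]) / self.C              # Tikhonov-regularised normal equations
        self.P = np.linalg.inv(A)
        self.beta = self.P @ H.T @ T
        return self

    def partial_fit(self, X, T, forget=1.0):
        """Recursive update for one mini batch; forget < 1 down-weights older data."""
        H = self._hidden(X)
        P = self.P / forget
        K = P @ H.T @ np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
        self.P = P - K @ H @ P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```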
Training of the Proposed Network

SAEs are serially trained autoencoders; the encoded features of one autoencoder are the inputs of the next one, as shown in Figure 3. Both the unsupervised and the supervised training in our proposed approach are carried out with fixed-size mini batches, in order to satisfy the TD-error objective function in Equation (10) and to avoid dimension-mismatch errors in the implementation. The results of the SAE mapping are used directly for fine-tuning the RUL prediction network, and the updates of the supervised and unsupervised training models are achieved simultaneously. During the training of the SAEs, the targets in the equations proposed above have to be set equal to the inputs [13]. Therefore, Equations (1), (2), and (10) must be changed to fit the training process of the SAEs. Our main contribution in modifying the autoencoder is the following: the basic ELM theory for training the autoencoder states that, after training the autoencoder, features can be encoded using the transpose of the output weight matrix [13] (Equation (14)). However, it is proven mathematically that the best encoding is obtained by using the inverse of the transpose of the output weights, as shown in Equation (15). The proposed formula was tested experimentally and its accuracy was confirmed [27]. An important point to take into consideration during the training of the SAEs is that, after the initialization phase, the randomly generated input weights are no longer needed, and the formula in Equation (16) is used instead of the one in Equation (1), where X_{k+1} is the (k+1)-th mini batch of inputs of the training set.
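A short sketch of the ELM-autoencoder training and of the two encoding rules contrasted above is given below. The layer size, sigmoid activation and regularization constant are illustrative assumptions, and because Equation (15) itself is not reproduced in the text, the "inverse" variant shown here is only one dimensionally consistent reading of the inverse-of-the-transpose encoding; it should not be taken as the paper's exact formula.

```python
# Sketch of an ELM autoencoder contrasting the two encoding rules discussed
# above; sizes, activation and regularization are assumptions.
import numpy as np


def train_elm_autoencoder(X, n_hidden, C=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights beta reconstruct the input from H (targets = inputs).
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return beta  # shape (n_hidden, n_features)


def encode_with_transpose(X, beta):
    # Conventional ELM-AE encoding via the transpose of beta
    # (in the spirit of Equation (14)).
    return X @ beta.T


def encode_with_pseudoinverse(X, beta):
    # Alternative encoding via the (pseudo-)inverse of beta, i.e., the
    # least-squares solution of X ≈ H @ beta; one dimensionally consistent
    # reading of the inverse-of-the-transpose rule of Equation (15).
    return X @ np.linalg.pinv(beta)
```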
Experiments, Results, and Discussion

The training parameters of the proposed algorithm are tuned by repeating the experiment in order to attain the best score value in Equation (17) and the minimum RMSE in Equation (18) on the test set. According to the PHM 2008 challenge [22], the scoring function penalizes late predictions (d_i ≥ 0) more than early ones (d_i < 0), for maintenance reasons: a prediction that is too late might delay maintenance operations, whereas a prediction that is too early is not harmful but consumes more maintenance resources. Figure 4 shows that the RMSE and score values are densely distributed near zero and become sparse towards higher values, which supports the credibility of the results.

The experiments lead to the parameters shown in Table 1. Owing to the USS strategy, a set of training mini batches was ignored during the training of the RUL prediction models; this reduces the training time and helps to avoid divergence of the training weights. Figure 5 illustrates the reaction of the USS strategy to the uninformative mini batches by showing the indices of the ignored and selected mini batches. The USS condition given previously in Equation (9) is confirmed in Figure 5: the RMSE of each selected chunk of data is larger than that of the one before it.

During training, the DFF, through the RLS-based TD-error minimization function, contributes to the adaptation of the prediction algorithm to variations in the newly arrived data. By adapting the weights gradually to enhance the approximation function, the prediction accuracy remains stable and converges to the desired value. Figure 6 shows the variation of the DFF during training. If the TD changes too much, the newly arriving training data have characteristics different from those of the earlier data, which forces the training weights β to adapt to these new changes. The adaptation of β results from updating the forgetting factor λ, and the amount of change is controlled by the user hyper-parameters listed in Table 1.

The training model was tested using new data (unknown to the model) from the FD001 test set. In addition, the example of curve fitting of the RUL target function in Figure 7 shows that the proposed approach achieves an acceptable performance; the response of the designed network in Figure 7 is plotted for a set of life cycles of different engines from the test set. The performance of the proposed approach is compared with that of a set of other recent approaches from the literature.
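Since this comparison relies on the score of Equation (17) and the RMSE of Equation (18), both metrics are sketched below. The asymmetric exponential form with denominators 13 (early) and 10 (late) is the commonly used PHM 2008 challenge score and is assumed here to be what Equation (17) implements; d_i denotes the predicted minus the true RUL.

```python
# Evaluation metrics assumed to correspond to Equations (17) and (18):
# the asymmetric PHM 2008 scoring function and the RMSE.
import numpy as np


def phm08_score(rul_pred, rul_true):
    d = np.asarray(rul_pred, dtype=float) - np.asarray(rul_true, dtype=float)
    # Late predictions (d >= 0) are penalized more heavily than early ones.
    return float(np.sum(np.where(d < 0, np.exp(-d / 13.0) - 1.0, np.exp(d / 10.0) - 1.0)))


def rmse(rul_pred, rul_true):
    d = np.asarray(rul_pred, dtype=float) - np.asarray(rul_true, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```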
The results in Table 2 indicate that the proposed algorithm achieves a low score value while relying on fewer training samples and less training time.

Table 2. Comparison with recent approaches (RMSE, score, number of training samples, training time).

Method       RMSE      Score     Training samples   Training time
[6]          39.6843   -         20631              -
DCNN [1]     18.4480   1286.7    20631              -
LSTM [5]     16.17     338       20631              714.53
DCNN [11]    12.61     273.7     17731              -
WELM [5]     13.78     267.31    20631              5.04
HDNN [9]     13.017    245       20631              -

Conclusions

A new online sequential training scheme is presented in this work for the RUL prediction of aircraft engines. The proposed training model shows that if every incoming mini batch of time-varying data is accepted at every update, without taking into consideration the divergence of the weights beyond a certain level, the training model no longer behaves well enough to attain the required approximation and generalization for newly arrived data. The proposed USS strategy therefore contributes to over-fitting reduction and accurate feature selection. The DFF based on TD-error estimation plays a key role in adapting the weights between the current and the next training mini batches, and the accurate feature mapping based on robust unsupervised learning can address higher dimensional data problems. The experimental results show that the new learning scheme for higher dimensional dynamic data is fast and accurate, making it a competitive candidate for time-varying data prediction problems. In the literature, the C-MAPSS dataset has been studied with different machine learning models; one important issue that has not yet been addressed is that the sensor measurements are contaminated with noise of unknown sources and behavior. Therefore, future investigations will focus on enhancing the accuracy of the proposed approach in the presence of noisy measurements.

Conflicts of Interest: The authors declare no conflict of interest.
Immunoglobulin idiotopes expressed by T cells. I. Expression of distinct idiotopes detected by monoclonal antibodies on antigen-specific suppressor T cells.

The idiotopic repertoire expressed by antigen-specific suppressor T cells (Ts) generated by Streptococcus pneumoniae strain R36a (Pn) in BALB/c strain mice was investigated using a panel of five monoclonal anti-idiotopic antibodies against TEPC-15/HOPC-8 myeloma proteins. Previous studies suggested that the anti-idiotopic antibodies recognize distinct idiotopic determinants within the T15 idiotype, and that Pn-reactive B cells express all of those idiotopes, as shown by a specific inhibitory effect of the anti-idiotopic antibodies on the induction of the anti-Pn response in vitro as well as on the mature antibody plaque-forming cells. In this study we asked whether anti-idiotopic antibodies (anti-Id) can block the inductive and/or effector phases of the generation of Ts which act on the Pn-reactive B cells. The presence of anti-Id during the activation of T cells with Pn did not prevent the generation of Ts. However, suppression mediated by Ts on responder lymphocytes (cultures of spleen cells or B cells) was inhibited (reversed) by four out of five anti-Id. Some of the antibodies recognize hapten (phosphorylcholine)-inhibitable Id in the paratope of Ig, whereas others are directed against nonparatopic Id. These data indicate that the antigen receptor on Ts includes VH sequences both within and outside the immunoglobulin paratope, and that the Id repertoire of Ts overlaps with that of B cells.

not allow a definitive conclusion on the similarity between T receptors and the Ig V region; a comparative analysis of several individual Id would be more informative. We have recently used a panel of monoclonal antibodies against the T15 idiotypic family to analyze the expression of individual Id on Pn-reactive B cells (20). The antibodies inhibit the induction of the primary, T-independent response and the specific plaque formation by lymphocytes immunized with Pn in vitro. The inhibitory effects of monoclonal antibodies directed against Id within and without the paratopic region are comparable. The Id appear to be expressed independently, in various combination patterns ("idiograms"), which characterize B cells of a given inbred strain (8,20). Based on these results it has become possible to investigate the idiotypic pattern of Pn-specific Ts in the BALB/c strain, which have previously been shown to express a cross-reactive T15 marker detected by conventional antisera (5). Using a functional assay of inhibition of Ts activity by monoclonal antibodies in vitro, we show herein that the idiogram of Pn-educated Ts overlaps with that of Pn-reactive B cells, that is, the Ts express most, but not all, known Id of the T15 family.

Materials and Methods

Mice. BALB/c strain mice were obtained from a colony maintained at Ciba-Geigy AG, Basel, Switzerland. For suppression of T15, mice were injected with 50 µg of monoclonal anti-T15 antibody Maid5-4 intraperitoneally within 24 h after birth. They were used for the experiments within 2-4 mo of age, together with sex- and age-matched control mice. The serum titer of T15 Id detectable by a reverse hemagglutination assay using Maid5-4-coated RBC was ~10^-4 (corresponding to 20 pg/ml) in normal, adult BALB/c mice, whereas the level of Id in T15-suppressed mice was undetectable (<0.1 µg/ml). The injection of Maid5-4 induces a chronic suppression of T15 (21).
Additional normal BALB/c and C57BL/6 strain mice were purchased from The Jackson Laboratory, Bar Harbor, ME.

Antigens. Sheep and burro erythrocytes (RBC) were purchased from the Colorado Serum Co., Denver, CO. The S. pneumoniae strain R36a (Pn) was grown in Todd-Hewitt Broth (BBL Microbiology Systems, Becton, Dickinson & Co., Cockeysville, MD), and formaldehyde-treated Pn antigen for stimulation of lymphocyte cultures was prepared according to DuClos and Kim (22). The optimal immunogenic concentration of each batch was determined empirically. The TNP-Brucella abortus (TNP-BA) conjugate was prepared and provided by Dr. James J. Mond from the Uniformed Services University of Health Sciences, Bethesda, MD. Extraction of cell wall polysaccharide (PnC) from Pn was previously described (23).

Monoclonal Anti-Idiotopic Antibodies. Antibodies were products of cloned hybridomas generated by fusion of lymphocytes taken from mice (BALB/c, A/J, or SJL) immunized against HOPC-8 or TEPC-15 proteins with myeloma cell lines according to Kohler et al. (24). The details of the production and maintenance of the hybridoma clones and their specificity for idiotopes of the HOPC-8/TEPC-15 family have been described elsewhere (8, 11-13). Antibodies were obtained by repeated salt precipitation of either peritoneal ascites from hybridoma-bearing mice or supernatants from cultures of hybridoma cell lines. All anti-idiotopic antibodies react specifically with both HOPC-8 and TEPC-15 myeloma proteins but not with other mouse myeloma proteins (including the PC-binding proteins, M603 and M511) nor with different classes of serum Ig. The exception is antibody B36-75, which reacts with HOPC-8 but not with TEPC-15 (8). AB1-2 antibody (11,12) was provided by Dr. John F. Kearney from the University of Alabama (Birmingham, AL). The Ig class of the monoclonal anti-Id antibodies and their binding properties with respect to PC inhibition of Id-(anti-Id) reactivity are summarized in Table I. The ability of the monoclonal anti-Id antibodies to inhibit PC-reactive B cells from BALB/c mice in vitro has been studied in detail (20). The antibodies inhibit the response to Pn when added into the cultures together with the antigen on day 0, and they also inhibit the plaque formation by differentiated, antigen-stimulated cells in agarose (see the description of the two assays below).

§ Inhibitory effect of antibodies added into the cultures of responder cells with Pn on day 0. The effect was calculated as percent inhibition of specific antibody-forming cells detectable on day 4. The values represent means from numerous experiments (20). The control response (in the absence of anti-Id) ranged from 200 to 800 PFC/well.

The inhibition is specific, in that the response of BALB/c lymphocytes to other antigens such as TNP-BA, TNP-Dextran, or sheep RBC is not affected (12,20). The relative inhibitory activity of different anti-Id hybridoma proteins (percent suppression) for normal BALB/c shown in Table I is a mean value from numerous assays on lymphocytes pooled from several mice as well as on cells from individual donors (20). The proteins had no effect on PC-reactive cells from T15-suppressed BALB/c mice (suppression < 15% in either test) (see also Results).

Separation of B and T Cells. B cells were prepared by cytotoxic depletion of T cells from a spleen cell suspension using repeated treatment (twice) with monoclonal anti-Thy-1.2 antibody (25; a generous gift of Dr.
Ann Marshak-Rothstein, Massachusetts Institute of Technology, Boston, MA) and a low-toxicity rabbit serum as a source of complement (C). The details of this procedure were described elsewhere (26). The response of the remaining B cell-rich population to concanavalin A was depressed by 90% compared with control cells treated with C alone, whereas the mitogenic response to a bacterial lipopolysaccharide remained undiminished. T cells were prepared by panning of the spleen cell suspension on plastic petri dishes coated with a goat antiserum against mouse Ig (27). The nonadherent fraction usually contained <2% of cells with surface Ig detectable by immunofluorescence.

Induction and Testing of Specific Suppressor Cells. Spleen cells or purified T cells were resuspended in a culture medium consisting of enriched Eagle's medium (28) and 10% fetal calf serum (Grand Island Biological Co., Grand Island, NY). Cells were incubated in Marbrook culture chambers (Bellco Glass, Inc., Vineland, NJ) (28) together with a concentration of Pn supraoptimal for the antibody response, for 3 d (5). Parallel cultures were also set up without Pn. At the end of the incubation, cells were washed, resuspended in the appropriate culture medium, and added into the fresh responder cultures (1-3 × 10 Pn-educated cells in 50 µl of medium per well). Sheep RBC-specific Ts cells were induced by a 3-d incubation with the antigen in vitro and tested as previously described (29). The response to Pn or TNP-BA was initiated in flat-bottomed wells of 96-well plates (3042, Falcon Microtest II; Falcon Labware, Oxnard, CA) in quadruplicate. Each well contained 10^6 splenocytes or B cells and an optimal dose of antigen (10'-10^6 Pn or 10^5 TNP-BA) in a total volume of 150 µl of a culture medium containing 2 × 10^-5 M 2-mercaptoethanol and 10% fetal calf serum (HY-Clone, Sterile Systems, Logan, UT), as described elsewhere (20). The culture wells were fed daily with 10 µl of a cocktail (20). The induction of the primary response of splenocytes to sheep RBC in the multiwell culture system was previously described (29). Antibody PFC generated in the responder cultures after 4 d of incubation were enumerated by plaque assays with sheep RBC, TNP-coupled burro RBC (30), or PnC-coupled burro RBC (23), using either a microscopic slide assay with agarose (31) or a liquid-phase assay in chambers constructed from microscopic slides (modified Cunningham's system) (32).

Inhibition of Lymphocytes by Monoclonal Anti-Id Antibodies. The minimal concentrations of hybridoma proteins required for inhibition of both Pn-reactive B cells and specific PFC were determined in an earlier study (20). The amounts of proteins used in the present experiments were 5- to 10-fold higher than those given in Table I. The controls for the assays described below included the addition of an equal volume (20-50 µl) of diluent (culture medium or a balanced salt solution). Attempts to inhibit the generation of Ts were carried out by adding anti-Id proteins into the Marbrook cultures (50 µl/vessel) at the beginning of a 3-d incubation of purified T cells with Pn. To test the effect of anti-Id(s) on effector Ts, the hybridoma proteins were added into the assay cultures of responder cells with educated Ts. The effect of anti-Id on the induction of the antibody response of spleen cells or B cells (20) was assessed by adding the hybridoma proteins into the responder culture wells on day 0 and enumerating the PFC response on day 4.
The ability of anti-Id to inhibit the mature antibody-forming cells was determined by addition of the proteins into the plaque assay reaction mixture (20). In either assay the percent inhibition was calculated as: 100 − [(PFC with anti-Id)/(PFC with diluent)] × 100.

Results

Induction of Pn-specific Suppressor Activity in Cultures of Purified T Cells. The suppressor activity of Pn-educated splenocytes resides in the Thy-1.2-positive cell population (5). For the sake of this study, however, it was important to determine whether suppressor cells can be induced in cultures of purified, naïve T cells and to confirm that the effector Ts inhibit the Pn-reactive B cells directly. The results in Table II show that purified splenic T cells incubated with Pn for 3 d and then added into fresh assay cultures were about as effective in suppressing the anti-Pn response as unseparated, antigen-educated splenocytes, and that both spleen and B responder cells were suppressed.

‖ Spleen cells treated twice with anti-Thy-1.2 + C.

Normal, media-incubated T cells did not suppress; in fact, there was an enhancement of the anti-Pn response in B cell cultures that received the control, media-incubated T cells. The enhancement, which has been consistently seen in our experiments, seems to indicate that the response to Pn is partially T dependent (J. Gerny and G. Heusser, unpublished observations). The suppression generated in T cell cultures with Pn is antigen-specific, in that the Ts do not inhibit the anti-TNP response in cultures stimulated with TNP-BA (Table III).

Monoclonal Anti-Id Antibodies Fail to Inhibit the Generation of Ts In Vitro. Our first approach to the assessment of Id expression on T cells was an attempt to inhibit the generation of Ts by addition of monoclonal anti-Id antibodies into the cultures of naïve T cells with Pn at time zero. The anti-Id used for the experiment were previously shown to inhibit the activation of B cells by Pn in vitro (20). The protein concentration used herein is ~10-fold higher than the effective end-point concentration shown in Table I. T cells were incubated for 3 d, washed, and tested for their ability to suppress the response to Pn. We did not, however, see any diminution of the suppressive activity of Ts educated with Pn in the presence of either of the anti-Id, as compared with the activity of control Ts (Table IV). In other trials not shown here, we used a mixture of the anti-Id antibodies and re-added the proteins to the T cell cultures on the second day of incubation with Pn, but still no diminution of Ts activity was observed. The anti-Id antibodies also failed to inhibit the generation of Ts in unfractionated spleen cell suspension.

Inhibition of Educated, Effector Ts by Monoclonal Anti-Id Proteins. Because it appeared that anti-Id proteins were unable to interfere with the generation of Ts, we set out to determine whether those antibodies might block the effect of mature Ts on responder B cells. For this, it was necessary to use T15-negative responder cells that would not themselves be inhibited by addition of the anti-Id into the assay cultures. Such cells were obtained from the spleen of BALB/c mice injected neonatally with monoclonal anti-T15 antibody Maid5-4. Preliminary experiments were carried out to ascertain that the splenic lymphocytes from T15-suppressed mice did respond to Pn in vitro and that the response was suppressed by Pn-educated T cells from normal BALB/c but not by any of the anti-Id antibodies. Fig.
1 shows an experiment in which a mixture of two anti-Id antibodies, AB1-2 plus Maid5-4, was added to cultures containing Pn-educated Ts and spleen cells from T15-suppressed mice. The splenocytes from T15-suppressed mice responded well to Pn (group I), and the response was not inhibited by the anti-T15 Id antibodies (group II). The normal T cells incubated in culture medium did not inhibit the response (group III). There was a significant suppressive effect of the Pn-educated T cells (Ts); however, the suppression was overcome by addition of the anti-Id antibody mixture to the assay culture. Thus, the response in the cultures containing Ts without anti-Id (group IV, 270 PFC/well) was suppressed by 64% and 60%, respectively, compared with control groups I and III. The response in cultures containing both Ts and anti-Id (684 PFC/well, group V) was significantly higher (P < 0.01), and it was suppressed by only 9%, -6%, and 26%, respectively, compared with control groups I, III, and II. The comparison with group II is the most rigorous, as it takes into account an eventual effect of the anti-Id antibody on the responder cells themselves. In subsequent experiments, therefore, the abolition of Ts activity by anti-Id was always measured against two controls: (a) responder cells plus media-incubated T cells, and (b) responder cells plus anti-Id. The effect of several individually tested anti-Id antibodies on Pn-specific Ts is shown in Table V. None of the antibodies had a significant effect on the responder cells from T15-suppressed mice, and all but one of them either diminished or abolished the suppression mediated by Ts. The exception was antibody B36-82, which consistently failed to affect the suppressor activity. The number of PFC in cultures with Ts plus B36-82 was not significantly different from that in cultures with Ts only (P > 0.1 in both experiments 1 and 2, Table V), and the percent of suppression did not change relative to either control group.

Lack of Inhibition of Sheep RBC-specific Ts by Anti-T15 Id Antibodies. The specificity of the apparent abolition of the Pn-specific suppressor T cell activity by monoclonal anti-T15 Id antibodies was tested using Ts specific for another antigen, sheep RBC. The purpose of the control study was to exclude the possibility that the monoclonal anti-T15 may detect an Ig determinant(s) cross-reactive with a T cell-specific surface antigen. A monoclonal antibody with such a fortuitous cross-reactivity has been recently mentioned by Pillemer and Weissman (33). The experimental design of the control experiment with sheep RBC-generated Ts was the same as that with Pn. The study was carried out with syngeneic cells (Ts and responders) from both the BALB/c and C57BL/6 strains. However, none of the anti-Id antibodies had any detectable effect on the activity of sheep RBC-specific Ts in either strain (Table VI).

Discussion

Id associated with the T15/H8 idiotypic family were defined operationally by monoclonal anti-idiotope antibodies (anti-Id). Previous studies (8,20) strongly suggested that the five anti-Id selected for our experiments react with distinct idiotopic determinants on a T15+ immunoglobulin molecule.
Screening of anti-PC serum antibody from different mouse strains by radioimmunoassay with anti-Id has shown that the Id are independently expressed, that is, all anti-Id reacted with BALB/c anti-PC antibody, but only some of them bound to antibodies from other inbred strains, in a random pattern (8). Further analysis of PC-reactive B cells and the mature, PC-specific PFC from individual mice of the BALB/c and especially C57BL/6 strains confirmed the independence of Id expression. Even though some Id were more frequently detected than others, there was no discernible linkage pattern. The study also suggested that the T15+ response of BALB/c mice to PC (presented on S. pneumoniae) is idiotopically heterogeneous and that the idiotopic repertoire expressed by the B cells may change in the course of antigen-driven differentiation, presumably by a process of somatic mutation (20). The hapten-inhibition analysis of the interaction between monoclonal anti-Id and PC-reactive myelomas indicates that the Id we detect occupy different sites of the Ig with respect to the paratope (8, and Table I). The expression of five distinct Id on T cells was monitored by a functional assay (i.e., inhibition). First, we find that none of the anti-Id inhibited the generation of suppressor T cells by Pn in vitro; not even a mixture of several anti-Id(s) was effective (data not shown). Because all of the anti-Id did inhibit the activation of Pn-reactive B cells (i.e., precursors of antibody-forming cells) (20), the apparent failure to inhibit the Ts activation does not seem to reflect an innate deficiency of the monoclonal proteins. One may still argue that the mode of antigen receptor expression on resting precursor T cells is such that it is not properly accessible to the anti-Id. However, in our earlier experiments, we observed an inhibition of Ts education by a conventional mouse (A strain) antiserum against TEPC-15 (5). Thus it is possible that the receptor on Ts precursors excludes the Id detectable by the selected monoclonal antibodies but includes other idiotypic determinants recognized by the polyvalent conventional antibody. A similar conclusion was made by Benca et al. (34), who found that PC-specific helper T cells (TH) were inhibitable by conventional (A strain) antiserum against HOPC-8 but not by a monoclonal antibody, AB1-2, generated against the same myeloma protein (34). On the other hand, one of the other anti-Id used in our study, Maid 5-4, was shown to inhibit PC-specific TH activity (35). However, we found that the monoclonal anti-Id did reverse the suppressive effect of activated Ts on the response to Pn. To show this, we used responder cells from syngeneic (BALB/c) mice injected neonatally with monoclonal antibody Maid 5-4 against TEPC-15. The treatment leads to chronic suppression (deletion) of T15/H8 idiotype-bearing clones (21), which are replaced by PC-reactive clones bearing a different, heretofore unidentified idiotype (36,37). Maid 5-4 appears to be directed against a very common Id of the T15 family that is present on most anti-PC antibodies (13) and on virtually all PC-reactive B cells from the BALB/c as well as the C57BL/6 strain (20). Thus it was expected that Pn-reactive cells from the T15-suppressed mice would not express any other Id of the T15 family, as was indeed shown by the failure of any of the anti-Id to inhibit the response (Table V).
The ostensibly T15-negative B cells were nonetheless readily suppressed by Pn-educated Ts from normal, T15-positive mice, which substantiates the earlier notion that the effectorial specificity of the Ts is towards the antigen rather than the idiotype of the B cells (5). The monoclonal anti-Id had no detectable effect on the responder cells (derived from the T15-suppressed mice) but did abolish the suppressive effect of Ts (obtained from normal donors). The reversal of suppression was specific, as there was no effect of the antibodies on sheep RBC-specific Ts. The possible role of the Fc region of the anti-Id antibodies in the inhibition of Ts has not yet been investigated. However, the interaction of anti-Id with Ts does not alter the effector cell specificity, in that Pn-educated Ts failed to inhibit the responses to unrelated antigens, sheep RBC and TNP-BA, whether or not the anti-Id(s) were added (data not shown). We interpret the data as evidence for expression of individual Ig idiotopes on mature, effectorial Ts. Four out of five Id were detectable in that manner in our assay. One of those determinants, B 36-75, is hapten (PC)-inhibitable and, therefore, located within the paratope. Id recognized by AB 1-2 and Maid 5-4 are inhibitable only with PC-keyhole limpet hemocyanin, suggesting that these may be somewhere near the antigen-binding site, whereas B 24-50 seems to be entirely nonparatopic (Table I). Thus, the results indicate that the Ts may express both paratopic and nonparatopic structures of the variable region of Ig. Interestingly, a determinant not detectable on Ts was that defined by B 36-82. This Id, which is uniformly expressed by virtually all PC-reactive B cells and PFC in the BALB/c strain (20), appears to reflect a structural difference between the Ts receptor and Ig. There is evidence for an alteration of idiotype during antigen-driven differentiation of B cells. The switch from IgM to IgG anti-PC antibody production is accompanied by changes in primary amino acid sequences in VH (38) and a loss of T15 expression as measured by a conventional anti-T15 antiserum (39). Recently, a transient change in Id-460 idiotype expression has been observed within the IgM-producing, TNP-reactive B cells (40). It is tempting to speculate that a similar change in the idiotopic repertoire may occur during T cell differentiation and that this may explain why the selected monoclonal anti-Id failed to inhibit the education of the precursor Ts population while they did inhibit the mature, effector Ts. Changes in Id expression on both classes of lymphocytes during their immune differentiation may play a role in the regulatory balance between idiotopic and anti-idiotopic clones. The existence of idiotopic overlap between T and B cells has been suggested previously. Binz and Wigzell (41) used lymphocyte absorption of anti-Id sera produced against alloantibody to show that alloreactive T cells expressed some but not all idiotypic members of B cells. On the other hand, Karwinkel et al. (42), working with the NPb idiotype detectable by a rabbit antiserum on NIP-specific antibody, observed a decreased NPb expression in hyperimmune mice, while the NIP-binding receptor from T cells retained the NPb idiotype. Our study indicates that discrepancies like that could be resolved by mapping of individual idiotopes with monoclonal reagents at various stages of antigen-driven lymphocyte differentiation.
Summary

The idiotopic repertoire expressed by antigen-specific suppressor T cells (Ts) generated by Streptococcus pneumoniae strain R36a (Pn) in BALB/c strain mice was investigated using a panel of five monoclonal anti-idiotopic antibodies against TEPC-15/HOPC-8 myeloma proteins. Previous studies suggested that the anti-idiotopic antibodies recognize distinct idiotopic determinants within the T15 idiotype, and that Pn-reactive B cells express all of those idiotopes, as shown by a specific inhibitory effect of the anti-idiotopic antibodies on the induction of the anti-Pn response in vitro as well as on the mature antibody plaque-forming cells. In this study we asked whether anti-idiotopic antibodies (anti-Id) can block the inductive and/or effector phases of the generation of Ts which act on the Pn-reactive B cells. The presence of anti-Id during the activation of T cells with Pn did not prevent the generation of Ts. However, suppression mediated by Ts on responder lymphocytes (cultures of spleen cells or B cells) was inhibited (reversed) by four out of five anti-Id. Some of the antibodies recognize hapten (phosphorylcholine)-inhibitable Id in the paratope of Ig, whereas others are directed against nonparatopic Id. These data indicate that the antigen receptor on Ts includes VH sequences both within and outside the immunoglobulin paratope, and that the Id repertoire of Ts overlaps with that of B cells.
Contemporary Inspection and Monitoring for High-Speed Rail System

Non-destructive testing (NDT) techniques have been explored and extensively utilised to help maintain safe operation and improve the ride comfort of the rail system. As an ascension of NDT techniques, structural health monitoring (SHM) brings a new era of real-time condition assessment of the rail system without interrupting train service, which is significantly meaningful to high-speed rail (HSR). This chapter first gives a review of NDT techniques for wheels and rails, followed by the recent applications of SHM on HSR enabled by a combination of advanced sensing technologies using optical fibre, piezoelectric and other smart sensors for on-board and online monitoring of the railway system from vehicles to rail infrastructure. An introduction to the research frontier and development direction of SHM on HSR is provided subsequently, concerning both sensing accuracy and efficiency, through cutting-edge data-driven analytic studies embracing, for example, wireless sensing and compressive sensing, which answer the big data call brought by the new age of this transport.

Introduction

The past decade has witnessed the most prosperous blooming of HSR, marking a splendid new age of this fast-developing transportation, which subtly alters people's travelling habits with great convenience and ride comfort. Hidden behind the high-quality ride service provided by HSR is the tremendous effort and huge budget spent on inspection and maintenance work, which is more challenging with increasing speed and capacity. With long-term, numerous cycles of loading and unloading, both rail tracks and train wheels suffer from vibrations and stresses caused by wheel/rail interactions, leading to fatigue, wear, plastic deformation, cracks and other deteriorations. The wheel/rail interactions are intense, with average contact stresses over 1000 MPa under normal operating conditions, and this number can go much higher in specific situations (wheel flange/rail edge contact while the train is turning, poorly conforming wheel and rail profiles, etc.) [1]. Moreover, to the authors' knowledge from recent research work on contact mechanics using NDT approaches, machine element contacts, including wheel/rail contacts, are essentially contacts between the asperities due to the surface roughness of the contact bodies, and the asperity contacts indicate hyper-stress concentration beyond 4000 MPa at the contacting peaks [2]. Under such high stresses, components of the rail system deteriorate rapidly in various forms, and the deteriorated structures create a worse operating environment, increasing the occurrence of failures. A typical failure is rolling contact fatigue (RCF), which causes a series of subsequent rail defects (squats, transverse cracks, spalling and gauge corner cracks). The rail also takes up intermittent impact loads from running trains due to wheel defects, rail irregularities or, at certain areas, rail turnouts, rail joints, etc. The intense vibrations caused by wheel/rail interactions and impacts are transmitted bidirectionally from the wheel/rail interface, up to the coach and down to the rail slab simultaneously. In terms of HSR, to meet the high standard requirements of smooth operation under high speed, the components utilised are different from those in conventional rail lines.
For example, the rail tracks are strengthened with high resistance to wear, and multi-layer concrete forms the rail slab, with a CA mortar layer serving as the damping layer instead of traditional ballast. These measures add ride comfort in HSR operation but make the system more 'brittle', with reduced capability of vibration absorption, and hence add to the risk of cracks in the rail system. A recent example is the giant crack (44 cm long) found in an operating Japan Shinkansen bullet train in December 2017, causing interruption of service and great social panic [3]. Similar cases can be highly possible on ballastless rail tracks, leading to more catastrophic consequences and calling for more reliable and thorough inspection actions. NDT techniques have long been used for inspection in the rail system, since the 1920s. With integrated ultrasonic probes or eddy current sensors, NDT systems are able to check surface and internal defects along the rail in either a contact or non-contact manner. The NDT inspection is conducted through manual inspection devices or inspection vehicles. Conventional inspection vehicles are normally attached to a traction locomotive to carry out inspection. In the age of HSR, many countries have developed high-speed comprehensive inspection vehicles (CIVs) for the more complicated inspection tasks, such as the 'East-i' CIV in Japan, the 'IRIS320' CIV in France, and the 'No. 0' CIV in China. The inspection content of the high-speed CIVs covers from geometry data of the rail infrastructure to dynamic behaviours of trains. Despite the wide range of data types, the NDT techniques require interruption of train service to conduct the inspection. To provide early alarming in prevention of further consequences, in terms of accidents similar to the Japan Shinkansen case, continuous real-time information on the in-service rail system is highly desired, which puts forward the introduction of online monitoring to this area. Since wheel/rail interaction is the core part of the rail system, this chapter mainly focuses on the inspection and monitoring methods for wheel and rail defects.

Typical defects of wheels and rails

2.1 Wheel out-of-roundness (OOR)

Various types of wheel OOR/defects occur on HSR in service, which influence operational safety and give rise to high maintenance cost. These defects take on many patterns, such as flats, eccentricities, polygons, corrugations on block-braked wheel treads, missing pieces of tread material owing to contact fatigue cracking, and other random irregularities [1,4]. Generally, they can be categorised into two major types: local defects and periodic OOR all around the wheel. The former can cause severe repeated wheel-rail impacts, while the latter leads to abnormal vibrations of the vehicle-track system at certain frequencies [5].

Wheel local defects

There are two major causes behind the initiation and development of wheel tread local defects: thermal cracking and rolling contact fatigue (RCF) [6]. Several factors, such as speed, axle load, wheel-rail adhesion, wheel material and braking conditions, also have some effects on the deterioration rate of the wheel tread [7]. In HSR operation, the wheel wear rate can increase quickly due to the high operation speed, high stiffness track, wide wheel-rail impact frequency range, intense vibrations and high speed flow [5,7,8].
Wheel defects can cause abnormal vibrations and have the potential to impose damage on both track and vehicle components such as sleepers, rails, wheelsets and bearings, increase the likelihood of derailment, and deteriorate operational safety and comfort owing to high vibration amplitudes [1,9]. Previous research found that the load history of the axle bearing and bogie frame may fluctuate due to the influence of wheel roughness and lead to fatigue cracks [10]. Wheel defects also result in an increase in the noise both inside and outside the train [11,12], which can be annoying for both passengers on the train and residents along the rail line [5]. For high-speed trains, the high-magnitude impact loads generated by a defective wheel can excite various vibration modes of the wheelsets and thereby contribute to abnormal increases in the stress states of the wheel axle under high-speed conditions [13].

Wheel polygonisation

The studies of wheel polygonisation started some three decades ago, when polygonal wheels were detected on high-speed trains (ICE, Germany). Wheel polygonisation with one, three and four harmonics around the circumference has been found on disc-braked wheels in ICE, in which the third harmonic dominated for solid steel wheels, while the second harmonic was common for rubber-sprung wheels [5]. Research on high-order polygonisation (15-25 orders) had not been carried out until recent years, when new problems and challenges in HSR operation were raised. For HSR, there is an increasing demand for relevant studies on this problem, because it is reported that high-order polygonisation with very small radial deviation (<0.05 mm, or <20 dB re 1 μm) can cause abnormal vibration and even failures of the bogie components. The influences of polygonal wheels on the track structure and vehicle components are studied in [13,14]. It is revealed that: (1) the wheel-rail impact normal force increases with the deepening of the wheel polygonal wear; (2) the amplitude of the normal force fluctuation depends mainly on the wavelength and depth of the wheel polygonal wear on the wheel running surface; and (3) the stress load cycles induced by wheel polygonisation can considerably increase the propagation of the initial crack in the wheel axle.

Common defects caused by inappropriate manufacturing and use

With the increasing demand for HSR, rail defects have become a critical challenge in operation, because an incident could cause greater losses when trains run at higher speed. Many researchers have proposed classification methods for typical types of rail cracks derived from the different propagation orientations of rail defects [15,16]. The most common rail defects are caused by inappropriate manufacturing and inappropriate use of rails, and they mainly include transverse defects (TD), detail fractures (DF) and split heads. TD (Figure 1a) is one of the most critical types of cracks that appear in railheads, propagating along the lateral direction. DF (Figure 1b) has an origination point and grows radially from it. These types of defects are caused by inappropriate use, such as excessive stress concentrations. The vertical split heads (VSH) (Figure 2a), which usually originate from manufacturing anomalies, cause the second largest number of train derailments (after TD).

Rolling contact fatigue (RCF)

RCF has become a significant economic and safety challenge for HSR and metro lines. Fatigue fracture occurs as a result of periodic loading applied to the material which exceeds its fatigue limit.
Normally, this fatigue limit lies between 35 and 60% of the tensile strength of the rail [17]. Once a fatigue crack has initiated, it grows with every loading cycle, very slowly at first and then more quickly, until a critical size is reached [18]. Typical RCF originating at the rail surface includes head checks, surface gauge corner head checks and squats. The cracks are generated as the rails experience huge impacts from the wheels [19,20], and the fatigue damage results from the normal and shearing stresses of the wheel-rail interaction [21]. A micro-crack may induce a surface spalling effect when it propagates from the railhead to inner parts of the rail. In addition, RCF can cause corrugation and bolt hole cracks on the rails, significantly influencing the track structure. Generally, there are six classes of corrugation: short-pitch corrugation, light rail corrugation, corrugation on sleepers, contact fatigue corrugation, rutting and roaring rails, and heavy haul corrugation [22].

Wheel roughness measurement and defect detection

The most effective and common strategy to control wheel defects is wheel re-profiling [5], which can eliminate local defects and polygonisation and reduce the resulting noise and vibration [4]. In modern HSR wheel maintenance, many depots are equipped with a wheel re-profiling facility known as a wheel lathe, and the wheelsets do not need to be disassembled during re-profiling. However, wheel re-profiling always follows a time- or mileage-based schedule set according to earlier experience or the supplier's specification. Consequently, it can decrease the wheel diameter and thereby shorten the service lives of healthy wheels which are scheduled to be re-profiled. Therefore, there is a large economic incentive for adopting a condition-based maintenance (CBM) scheme based on advanced NDT and SHM techniques, to reduce the maintenance costs of wheelsets and efficiently prevent the hazards imposed by wheel defects. There are two main types of CBM approaches: in-service (online) condition monitoring and in-depot (offline) inspection [23]. The former provides real-time condition information for maintenance planning, while the latter, normally done at a fixed interval, can offer accurate measurement for condition assessment of vehicle components.

Wheel roughness measurement/in-depot (offline) wheel inspection

Wheel tread roughness measurement (in-depot inspection) is a direct way of collecting wheel condition information for maintenance, monitoring profiles in conjunction with wear problems. With wheel roughness measurement data, the wheel re-profiling strategy can be optimised using a data-driven wear model [24]. The NDT technologies employed for roughness measurement include the linear variable differential transformer (LVDT), the mechanical displacement probe, the rotation sensor, the electromagnetic acoustic transducer (EMAT) [25], laser-ultrasonics [26], the laser-air hybrid ultrasonic technique (LAHUT) [27] and other novel NDT techniques [28-30]. Some even allow the trains to run at a low speed during inspection [30]. However, for measurement methods using the ultrasound pulse-echo technique, it is sometimes difficult to detect wheel flats, because they usually have smooth edges that do not generate echoes [31]. There are now many commercial devices that allow the measurement to be done in depot with high efficiency, such as the ØDS measurement instrument, Miniprof, MÜLLER-BBM, etc.
Online wheel condition monitoring and defect detection

Existing online wheel condition monitoring systems mainly include the trackside wheel impact load detector (WILD), force gauges installed on sleeper pads, distributed sensors based on Brillouin optical time domain analysis (BOTDA), accelerometer-based trackside detectors, acoustic detectors, laser- and video camera-based detectors, etc.

Trackside WILD and wheel-rail interaction detector

By deploying strain gauges and accelerometers on the rail, it is possible to measure the wheel-rail contact force or the rail acceleration response when a train passes over the instrumented rail section. These devices report the impact as either a force at the wheel-rail contact interface or a relative measure of the defect [10]. The most common WILD is composed of a series of strain gauge load circuits mounted on the neutral axis of the rail between two adjacent fasteners in several consecutive sleeper bays to quantify the wheel-rail interaction force. Johansson and Nielsen [1] made use of this set-up to build a detector on the rail web in nine consecutive sleeper bays. Nielsen and Oscarsson [32] used both rail web and rail foot strain gauges to measure the wheel impact load and rail bending moment. Stratman et al. [33] propose new criteria for the removal of wheels with a high likelihood of failure, based on two real-time SHM trends that were developed using data collected from in-service trains. Filograno et al. [34] developed an FBG-based sensing system comprising FBG strain gauges mounted at both the rail web and the rail foot, which enables train identification, axle counting, speed and acceleration detection, wheel imperfection monitoring and dynamic load calculation. They have expanded the application of this system to the Madrid-Barcelona HSR line [34]. An FBG-based wheel imperfection detection system that can provide in-service measurement of wheel condition was developed by The Hong Kong Polytechnic University. It offers a comprehensive health monitoring scheme for vehicle and track in the entire railway network of Hong Kong [35]. A monitoring system has been proposed with FBG sensors implemented on rail tracks to detect wheel local defects such as wheel flats and polygonisation [36]. The impacts of wheel/rail interactions caused by wheel local defects are reflected as subtle anomalies in the response signals collected by the FBG sensors, and the deployed system is shown in Figure 3a. The detection results match well with those from offline inspection (Figure 3b). In addition to the strain gauge-based detectors, there are other methods for online wheel load measurement to assess the condition of passing wheels, such as force gauges installed on sleeper pads and distributed sensors based on Brillouin optical time domain analysis (BOTDA). Besides, there are also some commercial WILDs, such as the WheelChex® system, the GOTCHA system (an optic fibre-based wheel-flat detection and axle load measurement system), and MULTIRAIL WheelScan.

Trackside rail acceleration and noise detector

Accelerometer-based systems can provide 100% coverage of the circumference of a wheel of any size in defect detection [10]. Skarlatos et al. [37] used two B&K accelerometers placed on the rail foot to pick up the rail vibration signals for diagnosis of wheel defects. Belotti et al. [38] used four consecutive accelerometers and an inductive axle-counter block, which helps to discriminate the response corresponding to each wheel. Seco et al.
[39] proposed a trackside detector which has eight accelerometers installed on both a bend zone and a straight zone. However, the acceleration data are difficult to convert to the wheel-rail impact load, which is widely used as a wheel local defect indicator [5]. This is mainly because the measured acceleration signal cannot be directly referred to the excitation of each wheel, and sometimes an additional axle counter is needed [23]. Furthermore, their performance might be limited by their repeatability and by the analysis applied to the accelerations acquired. The commonly used noise detectors are called trackside acoustic array detectors (TAADs), which make use of arrays of high-fidelity microphones to listen to the audible noises produced by the passing trains [40]. There are also some commercially available systems, such as the trackside acoustic detection system (TADS™) and the RailBAM™ [41,42]. However, these systems are specialised in wheel bearing fault diagnosis rather than wheel tread defect detection. For the slight flat defect of a high-speed train wheel (flat depth < 0.5 mm), this method may not be applicable, because when the train runs at high speed (>200 km/h) the prediction accuracy can be limited [11].

Detection methods based on laser and video cameras

There are various types of wheel roughness monitoring systems based on lasers and video cameras. Some typical or well-known detectors include wheel profile detectors (WPDs), MBV-systems, the wheel profile measurement system (WPMS) and those based on light illumination devices, light-sensing devices, charge-coupled device (CCD) cameras, and laser displacement sensors (LDSs). WPDs are based on a combination of lasers and video cameras to automatically measure the wheel profile while the train is in motion [40]. The data acquired from WPDs include wheel profile and wear, wheel diameter, height and thickness of the flange, back-to-back distance and wheel inclination. A prototype of a condition monitoring system called MBV-systems is presented by Lagnebäck [43] for measuring the profile of the wheels with a laser and a camera. An automatic WPMS based on a laser and a high-speed camera was installed on an iron ore line in Sweden in 2011 and can measure the wheel profile for speeds up to 140 km/h [44]. This system, which works in a CBM manner, has been attracting more and more interest from maintenance engineers in the Swedish railway sector. Zhang et al. [45] presented an online non-contact method for measuring a wheelset's geometric parameters based on the opto-electronic measuring technique. The system contains a charge-coupled device (CCD) camera with a selected optical lens and a frame grabber, which was used to capture the image of the light profile of the wheelset illuminated by a linear laser. Besides, there are some newly designed laser-based online detectors, which are immune to vibration and on-site noise, easy to calibrate, with high efficiency of data acquisition and high accuracy of positioning [46,47].

Rail defect detection and monitoring techniques

Manual inspection is still widely used in most routine track inspections today, since it can directly identify rail defects. However, it needs experienced workers and involves significant human input and judgement [48]. Therefore, NDT&E techniques, which enable rail inspection in an automated manner, are needed. NDT techniques were advocated for rail inspection as early as the 1920s [49]. Ultrasonic testing (UT) emerged in the 1960s and became dominant in rail inspection [10,50].
With the development of UT, European countries and Japan have developed a variety of forms of ultrasonic rail flaw detection equipment, such as portable types, hand-push types, road and rail dual-use vehicles and specialised rail-testing trains [28]. While extensively utilised, neither magnetic induction testing nor conventional UT is suitable for all defect scenarios; for example, they offer poor sensitivity to defects located in the rail web and rail foot [51]. A wide variety of inspection techniques are under research and development with the aim of enhancing detection capability. Advanced UT techniques To enhance accuracy, speed and detection rate in rail defect detection, many research efforts have been made to improve the detection methods and develop advanced UT techniques. The novel techniques include laser ultrasonic testing (LUT), phased array ultrasonic testing (PAUT), electromagnetic acoustic testing with electromagnetic acoustic transducers (EMAT), guided wave testing (GWT) and acoustic emission testing (AET). Laser ultrasonic testing (LUT) Compared with traditional piezoelectric UT, LUT has its own merits, such as being non-contact and requiring no coupling agent. The laser device can be located relatively far away from the rail, with optic fibre used as the transmission medium. This enables the establishment of a trackside monitoring system. Besides, with good immunity to interference, lasers can be used for measurement in adverse environments or at high temperatures. A pulsed laser acting on a solid surface produces longitudinal, transverse and surface waves simultaneously. As a result, it can be applied to detect not only surface defects but also internal defects. Yet certain problems exist in LUT, such as the low efficiency of light-to-sound energy conversion, weak ultrasonic signals and the high cost of detection equipment. Nielsen et al. [52] developed an automatic LUT-based system for rail inspection, named LURI, which was tested on a railroad line containing man-made structural defects. This system can detect defects on the running surface of the rail, as well as horizontal and vertical flaws in the railhead. Kenderian et al. [53] developed the first non-contact testing system based on a laser-air hybrid ultrasonic technique for rail defect inspection. The system can detect VSH defects and thermal fatigue cracks with a success rate of nearly 100%, and rail web defects with a rate of approximately 90%. Lanza et al. [54] developed a laser/air-coupled rail defect detection system, which can accurately locate rail transverse cracks by using laser emission and ultrasonic waves for detection. Phased array ultrasonic testing (PAUT) PAUT, developed from research on phased array radar, can conveniently detect cracks in different directions, depths and locations. Utrata and Clark [55] laid the groundwork for PAUT methods, providing useful information and evidence for the positioning of phased array (PA) probes in rail flaw detection. PAUT is now widely applied in rail defect detection, covering the railhead, rail web, rail base and weld areas. Institutes that carry out research on PAUT for rail defect detection include: Transportation Technology Center Inc. (TTCI), Iowa State University, the University of Warwick and the University of Birmingham in the UK, TWI, and Socomate in France, etc. Wooh and Wang [56] developed a hybrid array transducer, an assembly of a linear phased array and a static array, which can accurately assess real defects in rail specimens.
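As a rough illustration of the beam-steering principle behind PAUT, the following sketch computes the element firing delays from the standard textbook delay-law relation for a linear array; it is not taken from any of the cited systems, and the element count, pitch and wave speed used in the example are illustrative values only.

```python
import numpy as np

def steering_delays_us(n_elements, pitch_mm, steer_angle_deg, wave_speed_mm_per_us):
    """Relative firing delays (in microseconds) that steer a linear phased-array
    beam to steer_angle_deg from the array normal: tau_n = n * d * sin(theta) / c."""
    n = np.arange(n_elements)
    delays = n * pitch_mm * np.sin(np.radians(steer_angle_deg)) / wave_speed_mm_per_us
    return delays - delays.min()  # shift so the earliest-firing element starts at t = 0

# Example: 16 elements at 0.6 mm pitch, steering 30 degrees into steel
# (longitudinal wave speed roughly 5.9 mm/us).
print(steering_delays_us(16, 0.6, 30.0, 5.9))
```

Focusing laws used in practice add a depth-dependent term on top of this pure steering delay, but the sketch captures why a phased array can interrogate cracks in different directions without moving the probe.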
Speno International Company [57] developed ultrasonic rail testing equipment based on multi-element phased array technology; the equipment was installed on a trial inspection car and can achieve a speed of 80 km/h and a sampling rate of 6 kHz when detecting rail defects. TTCI [58] developed an Omni-scan PAUT system which was applied in on-site detection of TDs. A field test of the system was conducted on the Facility for Accelerated Service Testing (FAST) of TTCI. Electromagnetic acoustic testing (EMAT) EMAT, a technique for exciting and detecting propagating ultrasonic waves, can detect defects located in the subsurface area of the railhead; the electromagnetic-acoustic method realised by EMAT transducers therefore appears promising. Both transverse and longitudinal cracks in the railhead can be detected by using EMATs, as shown in Figure 4 [59]. The University of Warwick and the University of Birmingham [60] developed a railway surface-defect inspection technique based on EMAT, equipped with two EMAT transducers, one for emitting surface Rayleigh waves and the other for receiving the propagating surface Rayleigh waves. It was found that this technique can improve the inspection rate of horizontal and vertical defects on railheads, compared with piezoelectric transducers. The University of Warwick [61] designed a lab-based laser-EMAT system to investigate the generation, propagation and interaction of ultrasonic surface waves on the railhead, with a Michelson interferometer measuring the out-of-plane displacement. The Rayleigh-like wave generated by the EMAT can flood the whole railhead curve, which makes it capable of detecting gauge corner cracking. Guided wave testing (GWT) GWT techniques have been widely investigated over the past decades because of their potential for long-range interrogation and for detecting vertical-transverse defects under shelling as well as weld defects [62,63]. They are ideal in SHM applications that can benefit from built-in transduction, moderately large inspection ranges and high sensitivity to small flaws. In rail applications, since ultrasonic guided waves can propagate past discontinuous defects on the rail surface, the screening effect that surface detachment (shelling) produces on cracks distributed underneath can be minimised. Rose et al. [63] developed a GWT inspection system with non-contact air-coupled transducers and EMATs to transmit and receive guided waves for the detection of transverse defects under shelling. Wilcox et al. [64] developed a GWT system with a dry-coupled piezoelectric transducer array to detect smooth transverse-vertical defects and alumino-thermic welds, but this system requires interruption of the operation of trains. Lanza et al. [54] developed a GWT system using a pulsed laser to generate ultrasonic guided waves and air-coupled transducers to sense the guided waves for the detection of vertical cracks hidden below horizontal cracks. Park et al. [65] proposed a built-in active sensing system consisting of two piezoelectric patches in conjunction with both impedance and guided wave propagation methods for rail defect detection. The Marine Technology Association of South Africa and the Council of Scientific and Industrial Research [66] jointly developed a solar-powered GWT detection system (Figure 5). The coverage of a single system for rail defect detection is up to 2 km. Imperial College London and Guided Ultrasonics Ltd.
cooperatively developed a G-shaped scanning ultrasonic rail track detection device, which can inspect vertically distributed defects and alumino-thermic weld joints [67], as shown in Figure 6. It can effectively inspect 18-mm-deep defects under the rail crossing nose. Acoustic emission testing (AET) Different from common ultrasonic inspection, acoustic emission (AE) refers to the transient elastic waves generated by the rapid release of localised energy in solid materials under externally applied loads. AE events generate elastic waves in all directions, which can be captured by piezoelectric sensors. Multiple sensors can be utilised to record the arrival times of the signals and the variation of frequency during the crack initiation process; hence, the nature of the cracks can be determined. Through experimental study, AE has been proven a feasible solution for defect detection, especially in rotating machinery [68]. A simplified analytical model, which separates defect-induced AE activity from background noise, was proposed by Thakkar et al. [48]. They also investigated experimentally the physical relationship between AE and axial load, speed and traction. It was found that the AE signal can be used for analysing defects on the rail surface at normal operating speeds. The application of AET to rail defect detection is rarely reported. Previous research [69] shows that the benefit of AET may be limited by material imperfections, which can produce signal sources of a different nature. Besides, the installation of the sensor may also affect AE signal generation, as may wheel and track defects and any misalignment [48]. Another concern of AET is signal processing for those AE waves that have similar amplitude to that of the background noise produced by the rolling wheel [18]. Advanced data learning and updating methods have been investigated to deal with the great uncertainties arising from online monitoring data, enabling more accurate and efficient damage diagnosis. An AE method incorporating a Bayesian framework is utilised in an online rail turnout crack monitoring system developed by CNERC-Rail [70]. The method is able to detect defects without training data from damaged rail structures, and the monitoring system has been implemented on the Shanghai-Nanjing HSR line, as shown in the left panel of Figure 7. The rail turnout conditions are indicated in a probabilistic manner through a structural health index (SHI), as shown in the right panel of Figure 7. Magnetic particle testing Magnetic particle testing [71] can easily detect surface defects of a specimen, but the result is very sensitive to the specimen surface condition: if the surface is coated or wet, the reliability of the detection result decreases considerably. Therefore, removing coating materials and drying the surface are necessary before testing. Eddy current testing Eddy current testing is simple and can readily detect surface and shallow internal cracks [28]. Eddy current sensors have been mounted on the bogies of track inspection cars and equipped in roller-guided trolleys for mobile inspection of rails [28]. They are able to detect surface and near-surface defects in the railhead but fail to locate internal defects. Alternating current field measurement (ACFM) The alternating current field measurement (ACFM) technique is an electromagnetic inspection method that uses hand-held probes together with computerised control, data acquisition and computational models.
ACFM is more efficient than conventional inspection methods due to a reduced need for surface preparation and an ability to work through surface coatings. ACFM also has the added benefit that it is not only capable of detecting flaws but can also size defects in terms of length and depth [72]. In 2000, TSC, with the support of Bombardier Transportation, began the development of an advanced ACFM system for application in the rail industry. Following the experimental work on train axles, it became evident that an ACFM system could be deployed to detect RCF cracking on rails. This led to the development of a pedestrian-operated ACFM walking stick [73]. The inspection of the railhead is carried out by sequentially scanning across the group of sensors, enabling uninterrupted inspection of the rail. The system can detect and size gauge corner cracks and head checks smaller than 2 mm in depth. However, the ACFM sensors cannot quantify squats accurately and are unable to detect short-wave corrugation and wheel burns. FBG-based online monitoring of HSR In recent years, optic fibre sensors have been advocated for application to rail infrastructure monitoring. FBG sensors have the merits of being immune to electromagnetic interference (EMI) and of requiring no on-site power supply. A monitoring system based on FBG technology has been developed and installed on an operating rail line in Hong Kong for real-time and continuous detection of rail strain and temperature, rail breaks, axle counting, wheel imperfection assessment and dynamic loading identification [35]. Wang et al. [74] proposed a rail performance monitoring and safety warning system and implemented this system on a rail line by deploying FBG sensors in the rail web and at the expansion joints between supporting concrete slabs. Yoon et al. [75] proposed a distributed fibre sensory system based on Brillouin scattering and a correlation domain analysis technique for longitudinal strain monitoring of rails. Ni et al. developed an FBG-based deformation monitoring system for an in-service HSR tunnel [76]. An array of FBG bending gauges was deployed at the rail slab of a segment inside the tunnel. Upon occurrence of deformation, there is relative rotation between adjacent bending gauges. The phase shift of the FBG sensors caused by these relative rotations is recorded, from which the deformation can be derived, resulting in a profile of the deforming rail slab, from which the deformation of the tunnel can be inferred. FBG sensors for detecting acousto-ultrasonic signals have been studied since the mid-1990s [77]. The conventional interrogation technique for FBGs as sensing elements utilises their spectral encoding and decoding capabilities for the measurand; however, the spectral decoding capability cannot be used to detect high-frequency signals (e.g., acoustic and ultrasonic waves) due to the low wavelength scanning speed. Appropriate demodulation techniques capable of high-sensitivity detection of high-frequency waves are necessary to develop acousto-ultrasonic FBG sensors. There are two main approaches to detecting acoustic and ultrasonic waves with FBGs: the first uses a narrowband light source to illuminate the FBGs and demodulates the power intensity variation when the waves impinge on the FBGs, while the second uses a broadband light source and an optic filter. Minardo et al. [78] conducted a numerical investigation on the response of FBGs subjected to longitudinal ultrasonic waves. Ni et al.
[79] have developed a hybrid monitoring system using FBG sensors to interrogate ultrasound signals emitted by PZT sensors (Figure 8). The hybrid system has been verified in the lab and on a test line in mainland China. Outlooks of SHM on HSR With embedded hybrid monitoring systems of FBG and PZT sensors, SHM techniques have shown promising prospects in HSR, enabling real-time monitoring of the structural condition of in-service trains and rail infrastructure. To realise large-scale utilisation on the numerous HSR lines worldwide, practical solutions ought to be achieved concerning both economy and efficiency, meeting the need for early warning and quick decision-making upon emergencies in high-speed operation and guiding the potential development directions of SHM applications on HSR in the coming decades. Wireless sensing networks (WSNs) provide a cost-effective approach that eliminates wires and enables remote sensing, which largely enhances the practical applicability of SHM [80,81]. A wireless-based system was designed to monitor the performance of rail vehicles by Nejikovsky and Keller [82]. Communication in a WSN system can be made through satellite and Ethernet, while data are uploaded to the cloud for storage and transmission to a control room far from the site; the data transmission plan of such a system can be found in the aforementioned railway tunnel deformation project [76]. Particularly, in terms of near-field communication, radio frequency identification (RFID) has been proposed as a competitive candidate [83], which provides new thinking on embedding RFID modules in ordinary sensors. Passive RFID sensors embedded in HSR structures need no wired power supply and can be activated by passing trains, sending structural condition information. Continuous online monitoring over multiple HSR lines raises difficulties in storing and analysing the massive volumes of data collected. The authors' team has long been dedicated to damage diagnosis and prognosis of HSR based on monitoring data with updating and learning methods. Facing the data volume issue, compressive sensing, which is able to sample data at a sub-Nyquist sampling rate while maintaining almost all the original information, is being actively investigated to streamline the axle-box acceleration data from an operating high-speed train, and the feasibility of sub-Nyquist data acquisition in HSR online monitoring has been verified [84,85]. This is of great significance to wireless sensing and RFID, where the transmitted data volume is limited. Conclusions Various sensing technologies have long been benefiting rail industries with systematic and reliable inspection and monitoring. In turn, the vigorous development of HSR has been pushing forward research in sensing technologies, with flourishing state-of-the-art deliverables emerging. HSR is expanding worldwide, satisfying people's growing demand for travelling with ease and comfort, and bringing heavier inspection and maintenance tasks. In response to the expanding HSR network, conventional offline inspection will still be the primary approach, taking up most of the work, and online SHM will be a powerful supporting tool, playing a more important role and reflecting the real-time state of operating HSR systems. The use of sensors will be less solitary and separated and more combined, drawing on multi-disciplinary subjects from mechanical engineering, civil engineering and electrical engineering to computer science, mathematics, etc.
Moreover, the requirements for contemporary sensing go beyond the fundamental functions of accuracy and reliability to flexibility, portability and environmental friendliness. Taking advantage of the nature of railways, SHM applications on HSR can do more than be environmentally friendly. As proposed by the authors, a high-speed train with embedded sensing systems can be treated as an integrated moving sensor, capable of gathering information not restricted to structural conditions but also covering air conditions inside and outside the car body, concerning the surrounding environment and people's health. Having accomplished multiple SHM projects on HSR lines, we regard this work as just a start; in the near future, the encounter of sensing technologies and HSR will continue to foster reciprocal developments, paving a high-speed path to structural well-being, a sustainable environment and social health.
2019-04-16T13:29:12.491Z
2018-11-05T00:00:00.000
{ "year": 2019, "sha1": "bcdc9cfdf554db4c390d70f0f38ed31e8743e73a", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/64211", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "0c961b092263c26b25ccc94c6bb16bf2a830518a", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
54535247
pes2o/s2orc
v3-fos-license
The influence of an extended Atlantic hurricane season on inland flooding potential in the Southeast United States Abstract. Recent tropical cyclones, like Hurricane Katrina, have been some of the worst the United States has experienced. Tropical cyclones are expected to intensify, bringing about 20 % more precipitation, in the near future in response to global climate warming. Further, global climate warming may extend the hurricane season. This study focuses on four major river basins (Neches, Pearl, Mobile, and Roanoke) in the Southeast United States that are frequently impacted by tropical cyclones. An analysis of the timing of tropical cyclones that impact these river basins found that most occur during the low-discharge season, and thus rarely produce riverine flooding conditions. However, an extension of the current hurricane season of June-November could encroach upon the high-discharge seasons in these basins, increasing the susceptibility to riverine hurricane-induced flooding. Our results indicate that 37-258 % more days would be at risk of flooding from an average tropical cyclone with an extension of the hurricane season to May-December (just 2 months longer). Future research should aim to extend this analysis to all river basins in the United States that are impacted by tropical cyclones in order to provide a bigger picture of which areas are likely to experience the worst increases in flooding risk due to a probable extension of the hurricane season with expected global climate change in the near future. Introduction In the southeastern United States tropical cyclones are some of the most severe rain events (Schumacher and Johnson, 2006). While tropical cyclones occur less frequently than other rain-producing events, they cause the most damage because they cover a large geographic area and often cause widespread flooding (Greenough et al., 2001; Mousavi et al., 2011; Schumacher and Johnson, 2006). On average, tropical cyclones occurring in the southeast bring 240.4 mm of rain in a 24 h period (Schumacher and Johnson, 2006). The severity of flooding following tropical cyclone events is a function of tropical storm frequency; landfall location; precipitation intensity; and, for coastal areas, mean sea level (Irish and Resio, 2013). In addition to flooding, these storms cause further damage from their strong winds (Greenough et al., 2001; Mousavi et al., 2011), and they frequently can cause tornadoes and landslides (Greenough et al., 2001; National Science Board (NSB), 2007).
Coastal communities in the United States, especially along the east coast and the Gulf Coast, are most at risk of the flooding, strong winds, and heavy precipitation associated with tropical cyclones (Irish et al., 2014). Approximately half of the United States population lives within only ∼ 80 km of the coast (NSB, 2007), and, on average, areas that are prone to tropical cyclones are 5 times more heavily populated than the rest of the nation (Frey et al., 2010). About 70 million people live in hurricane-prone areas (Greenough et al., 2001). Recent increases in coastal populations and development in coastal areas are posing an increasing risk to coastal infrastructure and human life (Greenough et al., 2001; Irish et al., 2014). Based on 2010 estimates, 39 % of US homes are located in coastal counties, an 8 % increase since 2000 (NOAA, 2013). The monetary losses from hurricanes are increasing; in 2006 dollars, average annual losses were USD 1.3 billion from 1949 to 1989, USD 10.1 billion from 1990 to 1995, and USD 35.8 billion from 2002 to 2007 (NSB, 2007). Flooding from high storm surges during hurricanes has caused approximately 14 600 deaths over the last century; about 50-100 deaths occur per hurricane event (Greenough et al., 2001). In addition to deaths caused by flooding, hurricanes can cause a variety of health impacts, including illnesses that result from ecological changes (changes in the abundance and distribution of disease-carrying insects, rodents, mold, and fungi), damage to healthcare infrastructure and reduced access to healthcare services, damage to water and sewage systems, overcrowded conditions in shelters, and psychological effects from the trauma faced by victims (Greenough et al., 2001). Several studies have looked at the influence of tropical cyclones on river flooding in small catchments. Kostaschuk et al. (2001) investigated tropical-cyclone-induced flooding in the Rewa River system in Viti Levu, Fiji. They observed that rainstorms caused a higher number of floods but that floods caused by tropical cyclones were much larger (Kostaschuk et al., 2001). Waylen (1991) conducted a partial duration series flood analysis for the Santa Fe River in Florida and found similar results. Tropical-cyclone-induced floods were found to occur less often than floods from other rain-producing events. However, they tended to have larger magnitudes and longer durations (Waylen, 1991). Specifically, they found that tropical cyclone floods were ∼ 3 times larger and ∼ 2 times longer than other floods (Waylen, 1991).
Greenhouse gases in the atmosphere not only increase atmospheric temperature but also can lead to increased sea surface temperatures (Irish et al., 2014). The warmer the sea surface temperature, the greater the intensity of tropical cyclones. Thus, global warming may intensify tropical cyclones, such that storms may tend to have higher storm surge levels (Frey et al., 2010; Irish et al., 2014; Mousavi et al., 2011). The Intergovernmental Panel on Climate Change (IPCC) predicts that global sea surface temperatures will increase 1.1-6.4 °C over the next century (Irish and Resio, 2013; Mousavi et al., 2011). Sea surface temperatures need to be at or above ∼ 26.7 °C for tropical cyclones to form (Steenhof and Gough, 2008). The current hurricane season extends from June to November; however, longer seasons (i.e., storms occurring before June and/or after November) have been occurring in recent years (Dwyer et al., 2015). While research on this topic is not conclusive, there is some indication that global climate change may lead to a change in the Atlantic hurricane season (Dwyer et al., 2015). There is an 8 % increase in a tropical cyclone's central pressure for each 1 °C increase in tropical sea surface temperature (Irish and Resio, 2013; Irish et al., 2014; Mousavi et al., 2011). Further, there is a 3.7 % increase in a tropical cyclone's wind speed for each 1 °C increase in tropical sea surface temperature (Irish et al., 2014). Climate models also suggest that precipitation rates from tropical cyclones may increase by 20 % by 2100 (Geophysical Fluid Dynamics Laboratory (GFDL), 2016; Knutson et al., 2010). Numerous studies have indicated that global climate warming may intensify tropical cyclones and is very likely to result in sea level rise (Bronstert et al., 2002; Frey et al., 2010; Greenough et al., 2001; Irish and Resio, 2013; Irish et al., 2014; Kostaschuk et al., 2001; Mousavi et al., 2011; Ouellet et al., 2012). Major hurricanes, those that are category 3 or higher on the Saffir-Simpson scale, are the most likely to intensify (Frey et al., 2010; Mousavi et al., 2011). However, there is some debate about changes in tropical cyclone frequency. Some research predicts that tropical cyclone frequency will increase (e.g., Greenough et al., 2001; Ouellet et al., 2012), while other research suggests that tropical cyclones are likely to intensify with global climate warming but occur less frequently (e.g., Irish and Resio, 2013; Kostaschuk et al., 2001). Several studies about the effects of climate change on tropical cyclone intensity have been conducted for the Corpus Christi, TX area (Frey et al., 2010; Mousavi et al., 2011). Frey et al. (2010) determined how severe historical hurricanes would be if they were to occur in the current climate, and those predicted for the 2030s and 2080s. They found that, in all three climate scenarios, storm surge flood depth, area of flood inundation, population affected, and economic damages would all increase compared to the historical levels (Frey et al., 2010). In a follow-up study by Mousavi et al. (2011), sea level rise and tropical cyclone intensification, due to global warming, are likely to equally contribute to increased flood depths.
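As a rough worked example of how the wind-speed sensitivity cited above scales (this illustration is not taken from the cited studies, and it assumes the per-degree change is approximately linear over small temperature increases):

$$\frac{\Delta v}{v} \approx 0.037\,\Delta T_{\mathrm{SST}} \quad\Rightarrow\quad \Delta T_{\mathrm{SST}} = 2\ {}^{\circ}\mathrm{C} \;\Rightarrow\; \frac{\Delta v}{v} \approx 7.4\,\%,$$

and a compounded calculation, $(1.037)^{2} - 1 \approx 7.5\,\%$, gives essentially the same figure.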
While there has been much focus on the impact of tropical cyclones on coastal flooding, there has been little research on how these high-intensity precipitation events affect the hydrology of streams just inland of coastal areas. Further, few studies have focused on how inland flooding is likely to be altered with an extended hurricane season in the near future due to likely global climate change. This study investigates the potential increase in flooding risk with an extension of the hurricane season on four rivers in the southeastern United States. The goal is to help determine how flooding potential may change in the near future in order to elucidate the impact such changes may have on communities in the southeastern United States. Study areas This research is focused on the southeastern United States, where tropical cyclone events occur quite frequently and where severe flooding following these events can have profound impacts on the prosperity of communities. Specifically, four river basins (Neches, Pearl, Mobile, and Roanoke) were selected for analysis (Fig. 1; Table 1). These four basins were chosen to be in areas that experience tropical cyclones and a high number of severe hurricanes. Currently, tropical cyclones impacting these four basins rarely cause flooding. As is shown later in this paper, this is primarily due to the overlap of the current hurricane season with the low-discharge seasons on these four rivers. However, an extension of the hurricane season, such that it encroaches upon the high-discharge seasons on these rivers, could likely lead to increases in flooding following tropical cyclones that impact these basins. Gaging stations along these rivers were chosen to be inland of coastal areas so that tidal fluctuation and storm surge would not be factors when analyzing discharge, and far enough downstream to include as much of the basins as possible. These four basins were selected to represent a range of sizes and geographic locations that exist throughout the southeastern United States. United States Geological Survey (USGS) gages were used where data were available for the period extending from 1998 to 2014. In many cases (Pearl, Mobile, and Roanoke) USGS stream gages for these basins either did not have daily discharge data or did not have a long enough history of daily discharge data, or if sufficient daily discharge data were available, the location of the gaging station was either too close to the coast where there were tidal fluctuations or too far upstream in the catchment such that only a small fraction of the catchment was flowing to the gaging station. In these situations, Dartmouth Flood Observatory (DFO) satellite river gages were used (Brakenridge et al., 2016).
Frequency and timing of tropical cyclones NOAA's Atlantic hurricane database (HURDAT2) (Landsea et al., 2015) was used to determine when tropical cyclones passed over the four basins. For each tropical cyclone event on record, this dataset provides information on the year, month, day, time, latitude, longitude, maximum sustained wind speed (in knots), minimum pressure (in millibars), and several wind speed radii extents for points along a tropical cyclone's track (where points are spaced at 6 h intervals). The data provided in the HURDAT2 dataset are downloadable in a text file format. A Python script was developed to extract this information in order to create point shapefiles of tropical cyclone paths that could be analyzed in GIS. The paths of tropical cyclones between 1998 and 2014 were buffered to a width of 300 mi (∼ 500 km), the average precipitation extent of a tropical cyclone (Darby et al., 2013). Then, a selection-by-location procedure was used to determine which buffered tropical cyclones passed over each of the basins. The coordinates of the buffered points along tropical cyclone paths passing over the basins were then used to look up the corresponding dates each storm passed over each basin in the HURDAT2 dataset. Determining bankfull discharge Daily discharge data for the outlet of each of the basins over the period from 1998 to 2014 were obtained from either the USGS or the DFO's satellite river discharge measurements. The DFO sites provide daily measures of discharge beginning 1 January 1998 (Brakenridge et al., 2012). Discharge is estimated from NASA and the Japanese Space Agency TRMM microwave data (Brakenridge et al., 2012). This dataset is particularly useful because it allows the user to place gaging stations at any location along world rivers. Brakenridge et al. (2012) tested the accuracy of DFO satellite river discharge measurements and reported regression r² values > 0.6. They also provide a site-specific "quality assessment" which, for sites in the United States, is based on calculating the Nash-Sutcliffe (NS) statistics for the DFO site and nearby gaging station hydrographs (Brakenridge et al., 2015). For the Mobile River site, for example, the DFO quality assessment ranking is 2 (fair), which means that the NS statistics were > 0.44. However, since both bankfull and time series discharge are estimated from the same source in this study, while the absolute value may somewhat differ from the actual discharge, temporal trends and fluctuation magnitude were found to be well captured. This is clearly evident in the Mobile River DFO site (http://floodobservatory.colorado.edu/SiteDisplays/467.htm). Using the daily discharge data obtained, the log Pearson type III statistic (Interagency Advisory Committee on Water Data (IACWD), 1982) was calculated for each basin. The log Pearson type III statistic can be used to provide an "industry standard" of bankfull discharge for a river at a particular gaging station; times when discharge is greater than the bankfull discharge indicate the occurrence of a flood (IACWD, 1982). In the study of tropical cyclone floods in Fiji by Kostaschuk et al. (2001), the log Pearson type III statistic was found to represent their partial duration flood series more accurately than the Pareto distribution, even though it tended to underestimate the largest flows slightly.
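A minimal sketch of the storm-selection step described earlier in this section is shown below. It is only illustrative: it assumes the standard comma-separated HURDAT2 text layout, and instead of the GIS buffer-and-select procedure used in the paper it approximates each basin by a single representative point, keeping storms with any 6-hourly track point within roughly 500 km of it.

```python
import math

def parse_hurdat2(path):
    """Return a list of (storm_id, name, track) tuples, where track is a list of
    (date, lat, lon) points, assuming the standard HURDAT2 comma-separated layout
    (one header line per storm, then one line per 6-hourly track point)."""
    storms = []
    with open(path) as fh:
        for line in fh:
            parts = [p.strip() for p in line.split(',')]
            if parts[0][:2] in ('AL', 'EP', 'CP') and len(parts[0]) == 8:
                storms.append((parts[0], parts[1], []))   # storm header line
            elif storms and parts[0]:
                date = parts[0]                           # e.g. 20050829
                lat = float(parts[4][:-1]) * (1 if parts[4].endswith('N') else -1)
                lon = float(parts[5][:-1]) * (-1 if parts[5].endswith('W') else 1)
                storms[-1][2].append((date, lat, lon))
    return storms

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def storms_affecting_basin(storms, basin_lat, basin_lon, radius_km=500.0):
    """Storms with at least one track point within radius_km of the basin point."""
    return [(sid, name) for sid, name, track in storms
            if any(haversine_km(lat, lon, basin_lat, basin_lon) <= radius_km
                   for _, lat, lon in track)]
```

The dates attached to the qualifying track points can then be used, as in the paper, to establish when each storm affected each basin.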
The log Pearson type III statistic was calculated using maximum yearly discharge values from 1998 to 2014:

$$\log Q = \overline{\log Q} + K\,\sigma_{\log Q}, \quad (1)$$

where Q is the discharge of some return period, $\overline{\log Q}$ is the average of the log Q maximum discharge values, K is the frequency factor (found using the K frequency factor table, which is based upon return period and the skew coefficient), and $\sigma_{\log Q}$ is the standard deviation of the log Q discharge values (Oregon State University (OSU), 2005). The variance can be found using Eq. (2):

$$\sigma_{\log Q}^{2} = \frac{\sum \left(\log Q - \overline{\log Q}\right)^{2}}{n - 1}, \quad (2)$$

where n is the number of maximum discharge values (i.e., the number of years) (OSU, 2005). The skew coefficient can be found using (OSU, 2005)

$$C_s = \frac{n \sum \left(\log Q - \overline{\log Q}\right)^{3}}{(n-1)(n-2)\,\sigma_{\log Q}^{3}}. \quad (3)$$

The bankfull discharge was calculated using a return period of 2.33 years, following Waylen (1991). Analyzing the effects of an extended hurricane season on flooding susceptibility An analysis was performed to determine how many days from 1998 to 2014 during the hurricane season would have been at risk of flooding were an average tropical cyclone to have occurred on any given day. For each tropical cyclone in each basin from 1998 to 2014 the discharge the day before the event was compared to the peak discharge during the storm in order to determine the increases in discharge due to the tropical cyclones. For each basin, these increases in discharge were averaged to determine the average increase in discharge due to a tropical cyclone. For each June-November day from 1998 to 2014, the daily discharge in the Neches River was increased by the average increase in discharge due to a tropical cyclone experienced by the Neches Basin. This increased discharge due to an average tropical cyclone was compared with the bankfull discharge value on each individual day for the Neches River. A day with a discharge greater than bankfull discharge indicates that the Neches River likely would have flooded on this day if an average tropical cyclone were to have impacted this basin. Similar analyses were conducted for the Pearl, Mobile, and Roanoke basins. The above methodology was then repeated with an extended Atlantic hurricane season of May-December. A 1-month extension at each end of the present June-November Atlantic hurricane season (Dwyer et al., 2015) was considered because several May (1 month outside the current hurricane season) tropical cyclones have impacted the Roanoke Basin in 2007, 2009, and 2012. The HURDAT2 dataset also indicates the occurrence of some May, as well as some December, Atlantic tropical cyclones. These data were then compared to the percentage of days susceptible to tropical-cyclone-induced flooding in the current hurricane season. Tropical cyclone frequency and timing From 1998 to 2014 (17 years), 15 tropical cyclones impacted the Neches Basin, 28 impacted the Pearl Basin, 30 impacted the Mobile Basin, and 36 impacted the Roanoke Basin. The number of tropical cyclones impacting each basin each year has not been constant over the period of study. The years 2004 and 2005 had high numbers of storms in every basin, and in recent years there have been very few storms. For example, in 2004 and 2005 most basins experienced two or more tropical cyclones, while in 2013 and 2014 only the Roanoke Basin was impacted by tropical cyclones (and only one in each year). Most notably, almost all tropical cyclones impacting these four basins occur during low-discharge seasons, when flood risk is minimized (Figs. 2 and 3).
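The bankfull-discharge and day-counting calculations described in the Methods above can be summarised in a short, hedged sketch. It is not the authors' code; the frequency factor K for the 2.33-year return period would in practice be read from the K table using the skew from Eq. (3), so it is passed in as an argument here.

```python
import numpy as np

def skew_coefficient(annual_max_q):
    """Skew coefficient of the log-transformed annual maxima (Eq. 3)."""
    logq = np.log10(np.asarray(annual_max_q, dtype=float))
    n, mean, std = len(logq), logq.mean(), logq.std(ddof=1)
    return n * np.sum((logq - mean) ** 3) / ((n - 1) * (n - 2) * std ** 3)

def log_pearson3_discharge(annual_max_q, k_factor):
    """Discharge of a chosen return period from Eqs. (1)-(2); k_factor is the
    frequency factor read from the K table for that return period and skew."""
    logq = np.log10(np.asarray(annual_max_q, dtype=float))
    return 10 ** (logq.mean() + k_factor * logq.std(ddof=1))

def days_at_risk(daily_q, bankfull_q, avg_tc_increase):
    """Days on which adding the basin's average tropical-cyclone discharge
    increase would push flow above bankfull (the counting step above)."""
    q = np.asarray(daily_q, dtype=float)
    return int(np.sum(q + avg_tc_increase > bankfull_q))
```

Running days_at_risk once over June-November days and once over May-December days gives the two scenarios compared in the results that follow.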
Effects of an extended hurricane season on flooding susceptibility On average, tropical cyclones increased discharge (calculated from the difference between peak discharge and discharge the day before the storm) by 97.85 m³ s⁻¹ on the Neches River, 226.71 m³ s⁻¹ on the Pearl River, 787.25 m³ s⁻¹ on the Mobile River, and 101.26 m³ s⁻¹ on the Roanoke River (Table 2). The average percent increase in discharge following a tropical cyclone impact in all four rivers was 92 % (Table 2). The Roanoke River was the most susceptible to potential flooding from an average tropical cyclone in the current hurricane season scenario. On about 50 (out of 3111) days in the 1998-2014 June-November hurricane seasons the Roanoke River would be above bankfull discharge and at risk of flooding from an average tropical cyclone (Fig. 4d; Table 2). That is, about 1.61 % of days would be susceptible to potential flooding were an average tropical cyclone to occur (Table 2). The Mobile River showed the least susceptibility with only about 10 days (or 0.32 % of the time). The average susceptibility for potential tropical-cyclone-induced flooding for all four rivers was about 32 days (or 1.04 % of the time) (Fig. 4; Table 2). The extended hurricane season showed greater flooding risk for all four of the rivers. Again, the flood risk was greatest on the Roanoke River (84 days, or 2.02 % of the time) and least on the Mobile River (28 days, or 0.67 % of the time) (Fig. 5; Table 2). On average, the extended hurricane scenario led to about 20 more days per basin that likely would be at risk of a flood were the average tropical cyclone to occur (Table 2). Over the 17 seasons, this is a 63 % increase in the number of days at risk of flooding, or an increase from 1.9 to 3.1 days yr⁻¹. Discussions and conclusions Most tropical cyclones impacting these four basins occur during September, or the middle of the low-discharge season (Figs. 2 and 3). The current hurricane season coincides primarily with the low-discharge seasons of the four basins. Thus, tropical cyclones rarely cause flood events on these rivers, even though they bring high amounts of precipitation, because they occur primarily during the low-discharge season. This is in contrast to tropical cyclones in Southeast Asia, for example, which are frequent during the monsoon season, causing widespread inland flooding (Darby et al., 2013). Some May tropical cyclones have already occurred in the Roanoke Basin during 2007, 2009, and 2012, and NOAA's HURDAT2 dataset contains other May and December tropical cyclones occurring in the Atlantic Ocean. This suggests that, while tropical cyclones rarely led to inland flooding from 1998 to 2014 in the four basins, a future extension of the hurricane season, such that it encroaches upon the high-discharge season in these rivers, has the potential to considerably enhance flooding risks.
Adding the months of May and December increases the number of days during the year that fall within the hurricane season by 34 %. For just a 34 % increase in the length of the hurricane season, there was, on average, a 63 % increase in the number of days at risk of a tropical-cyclone-induced flood along these southeast rivers (Table 2). When averaged over the 17-year period analyzed in this study, the number of days at risk of tropical-cyclone-induced flooding increases from 1.9 to 3.1 days yr⁻¹. While 3 days yr⁻¹ may not seem substantial, it not only represents a 63 % increase, but it is also a conservative number, as it excludes predicted enhancements in the intensity and/or frequency of future tropical cyclones (Bronstert et al., 2002; Frey et al., 2010; Greenough et al., 2001; Irish and Resio, 2013; Irish et al., 2014; Kostaschuk et al., 2001; Mousavi et al., 2011; Ouellet et al., 2012). Further, this research does not consider synergistic effects due to the potential interplay between May and/or December tropical cyclones and midlatitude cyclones, which could increase precipitation and flooding risk even further. The timing of the hurricane season in relation to the high- and low-discharge seasons is crucial to understanding flooding risk following tropical cyclones on these rivers. The Mobile and Roanoke rivers showed the greatest increase in flooding risk (68 and 180 % respectively) in an extended May-December hurricane season as compared to the Neches and Pearl rivers (Table 2). The Pearl River showed the least increase in flooding risk following the average tropical cyclone (28 %) in an extended May-December hurricane season. While the Neches, Mobile, and Roanoke rivers tend to have slightly higher discharges in May than in June, discharges in the Pearl River are slightly lower in May than June. Thus, this study reveals not only that flooding risk following tropical cyclones is expected to increase if the hurricane season is extended due to global climate warming but also that this increase will not be uniform across the southeastern United States. Rivers with high-discharge seasons in May and December, such as the Mobile and Roanoke rivers, are likely to be most affected by a lengthened hurricane season.
The main limitation of this study is its use of average statistics. Future work could extend this study to look at the increase in flood risk not only due to the average tropical cyclone but also due to the full range of tropical cyclones that a basin is likely to experience (the tropical cyclone with the maximum increase in discharge, the tropical cyclone with the minimum increase in discharge, etc.). For instance, given that tropical cyclones are likely to intensify (Bronstert et al., 2002; Frey et al., 2010; Greenough et al., 2001; Irish and Resio, 2013; Irish et al., 2014; Kostaschuk et al., 2001; Mousavi et al., 2011; Ouellet et al., 2012), flooding risk in an extended hurricane season likely could exceed the results presented in this paper, although May and December tropical cyclones likely could be weaker than mid-season storms. Further, more explicit modeling of future tropical cyclone dynamics using a stochastic approach, rather than average statistics, could potentially produce a more robust understanding of the effects of future climate dynamics on flood susceptibility. Because the high-discharge season varies from basin to basin, extending this study to other basins along the east and Gulf coasts would allow for a fuller understanding of which areas in the southeastern United States are likely to be more at risk of flooding following tropical cyclones due to an extension of the hurricane season in response to global climate warming.
Figure 1. Location of the study basins analyzed in this study (blue); colored dots represent points along the tracks of all tropical cyclones since 1998 that have impacted the study basins, where the color/size of the dot indicates the severity of the storm at that location (see legend). (Hurricane track data were retrieved from NOAA's Atlantic hurricane database, HURDAT2 (Landsea et al., 2015); basin boundary data were retrieved from USGS's National Hydrography Dataset, NHD, and Watershed Boundary Dataset, WBD (USGS, 2016); basemap is from ESRI.)
Figure 2. Comparison of average monthly discharge (blue bars) with the number of tropical cyclones occurring each month (yellow line) from 1998 to 2014 for the Neches (a), Pearl (b), Mobile (c), and Roanoke (d) basins.
Figure 3. Comparison of monthly discharge maximum-minimum range (red bars) with the number of tropical cyclones occurring each month (yellow line) from 1998 to 2014 in the Neches (a), Pearl (b), Mobile (c), and Roanoke (d) basins.
Figure 4. Bankfull discharge (black lines), flow duration curves (blue curves), and flow duration curves with discharge increased due to the average tropical cyclone (red curves) for the current hurricane season on the Neches (a), Pearl (b), Mobile (c), and Roanoke (d) rivers.
Figure 5. Bankfull discharge (black lines), flow duration curves (blue curves), and flow duration curves with discharge increased due to the average tropical cyclone (red curves) for an extended May-December hurricane season on the Neches (a), Pearl (b), Mobile (c), and Roanoke (d) rivers.
Table 1. Gage location in and size of the four study basins.
Table 2. Flooding risk from 1998 to 2014 for the four study basins with the current hurricane season and with an extended May-December hurricane season.
2018-12-02T16:37:45.017Z
2017-03-21T00:00:00.000
{ "year": 2017, "sha1": "674ff88208d3e02a34b8e66fb9098d316a1d1fab", "oa_license": "CCBY", "oa_url": "https://www.nat-hazards-earth-syst-sci.net/17/439/2017/nhess-17-439-2017.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bc6bf71cbdb0a5d3fe6981ea9124e6df84ad272d", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Environmental Science" ] }
56170715
pes2o/s2orc
v3-fos-license
FORGe: prioritizing variants for graph genomes There is growing interest in using genetic variants to augment the reference genome into a graph genome, with alternative sequences, to improve read alignment accuracy and reduce allelic bias. While adding a variant has the positive effect of removing an undesirable alignment score penalty, it also increases both the ambiguity of the reference genome and the cost of storing and querying the genome index. We introduce methods and a software tool called FORGe for modeling these effects and prioritizing variants accordingly. We show that FORGe enables a range of advantageous and measurable trade-offs between accuracy and computational overhead. Electronic supplementary material The online version of this article (10.1186/s13059-018-1595-x) contains supplementary material, which is available to authorized users. Introduction Assembled genomes are typically stored and understood as strings, simple sequences of base pairs. High-throughput technologies have brought an explosion of population genetics information, including from projects like HapMap [1], the 1000 Genomes Project [3], and UK10K [52]. The question is emerging: how can we use population genetics information to improve accuracy of genomic analyses? This has fueled interest in techniques that depart from a linear string as point of reference for all individuals and toward pan-genome representations [36,50] more inclusive of genetic variation. While methods for including variants in the reference are growing in number [18,21,25,43,44,48,49], there is little or no work on how to choose which variants to include. Past studies have made such decisions in ad hoc ways, with some filtering according to allele frequency [21,31], ethnicity [43], or both [49]. Here, we examine the advantages and disadvantages of adding variants to the reference. We show that the disadvantages are important to measure, since simply adding more variation to the reference eventually reduces alignment accuracy. We suggest efficient models for scoring variants according to the effect on accuracy and "blowup" (computational overhead), and further show that these scores can be used to achieve a balance of accuracy and overhead superior to current approaches. When a read originates from a region where the donor and reference genomes differ, i.e., where the donor genome has a non-reference allele, its alignment is systematically penalized. This can (a) reduce the correct alignment's score below the threshold considered significant by the aligner, (b) cause the aligner's heuristics to miss the correct alignment, and (c) cause the correct alignment's score to fall below the score at a different, incorrect location. The problem is magnified in hyper-variable regions such as the major histocompatibility complex (MHC) [12,17]. It is also problematic when individuals differ dramatically, e.g., if they are from distinct inbred strains [44], or when downstream analyses are vulnerable to allelic bias, such as when detecting allele-specific expression [4,9,43] or calling heterozygous variants [10,15]. Augmenting the reference genome with known variants helps in two major ways. First, it reduces the genetic distance between donor and reference genomes, removing the tendency to penalize correct alignments that overlap non-reference alleles. Second, it avoids the allelic bias, also called "reference bias" [9], that results when one donor haplotype resembles the reference more closely than the other(s). There are many proposals for how to include and index genetic variants along with the reference genome.
Two early approaches were GenomeMapper [44] and the Enhanced Reference Genome [43]. GenomeMapper came from a project to sequence many inbred strains of Arabidopsis thaliana, and it used a graph representation and an accompanying k-mer index to represent and align to a graph representing all strains. The Enhanced Reference Genome [43], which specifically addresses reference bias for allele-specific expression, included variants by taking the non-reference allele along with flanking bases and appending these "enhanced segments" to the linear reference genome. Since the resulting reference is linear, a typical read aligner like Bowtie [27] can be used. Several studies have expanded on these ideas. deBGA [30] uses a colored De Bruijn graph [22] and accompanying hash-table index. BWBBLE [21] and gramtools [31] use an FM Index [16] with an expanded alphabet and modified backward-search algorithm to account for variants. GCSA [49] generalizes the compressed suffix array to index not a single reference but a multiple alignment of several references. HISAT2 [25] combines GCSA with the hierarchical FM Index implemented in HISAT [24]. GCSA2 [48] indexes paths in arbitrary graphs and is implemented in the VG software tool [18] which can align reads to such indexes. MuGI [8] and GraphTyper [15] use k-mer-based indexes. Genome assemblies are also evolving along these lines. The GRCh37 and GRCh38 human assemblies [6,7] include "alt loci", alternate assemblies of hypervariable regions including MHC. Other studies suggest modifying the linear genome by replacing each non-major allele with its major alternative [11,23]. This leverages population-level information while keeping a linear representation. While including variants in the reference incurs a computational cost, the nature and magnitude of the cost depends on the method. For some methods, the most direct manifestation is in the size of the reference genome and index. The Enhanced Reference Genome [43] and its index both grow with the number and length of the "enhanced segments" added to cover all windows containing non-reference alleles. The number of segments required to cover a set of k variants within the span of a single read is 2^k − 1. Even if this phenomenon is isolated in a few areas of the genome (e.g., hypervariable regions), the appearance of k in the exponent means it can be very significant. The GCSA method [49] used in HISAT2 [25] incurs similar blowup in its "path doubling" step. We elaborate in Additional file 1: Note S1, Additional file 1: Figure S1, and Additional file 1: Figure S2, including specific discussions of how the cost manifests in different methods, how it leads to more ambiguity in the reference genome which can ultimately lead to reduced accuracy, and how it can be controlled. Variant selection and evaluation Past efforts that evaluated graph aligners have been selective about what variants to include in the graph, but without a clear rationale. Some included all variants from a defined subset of strains or haplotypes [8,30,44] or from a database such as the 1000 Genomes Project callset [3] or dbSNP [46]. In some cases, variants were filtered according to ethnicity, e.g., keeping just the Finnish 1000 Genomes individuals [49] or the Yoruban HapMap [1] individuals [43]. The ERG study (concerned with allele-specific expression) excluded variants outside annotated genes. The gramtools study [31] used 1000 Genomes variants but excluded those with observed allele frequency less than 5%.
GraphTyper [15] used dbSNP variants in one experiment, excluding single-nucleotide variants (SNVs) with under 1% frequency in all populations. HISAT2's software for selecting variants to include filters out SNVs with an allele frequency of under 10% in some cases [25]. Here we explicitly model the variants according to their effects on alignment, and we provide methods for choosing an optimal set based on those models. We apply these methods in combination with two different augmented-reference alignment methods and compare to a range of relevant competing methods, including a linear reference with reference alleles, a linear reference with all-major alleles, and an ideal "personalized" reference that is customized to fit the donor individual's alleles (including at heterozygous positions) as closely as possible. This experimental design allows us to make statements about how our methods affect accuracy, how those effects vary with genomic region, how close the methods come to achieving ideal accuracy, and how practical current graph alignment methods are overall. Strategy FORGe works in cooperation with a variant-aware read aligner (graph aligner) such as HISAT2 [25]. Consider the alignment process as being divided into offline (index building) and online (alignment) stages. FORGe operates in the offline stage. Specifically, FORGe takes a reference genome (FASTA format) and catalog of variants and their frequencies in the population (Variant Call Format). FORGe can also use phasing information when provided in the VCF. FORGe then uses a mathematical model to score each variant according to its expected positive and negative impacts on alignment accuracy and computational overhead. The model could consider factors such as the variant's frequency in a population, its proximity to other variants, and how its inclusion affects the repetitiveness of the graph genome. Using these scores, together with a parameter for the overall percentage or number of variants to include, FORGe outputs the top-scoring subset of variants, which can then be fed to the index-building component of a graph alignment tool like HISAT2's hisat2-build program. In the online stage, the aligner uses this FORGe-customized index to align the sequencing reads. Simulation We used Mason 0.1.2 [20] to simulate reads (details in Additional file 1: Note S2). Mason simulates sequencing errors and base quality values. Mason also annotates each read with information about its true point of origin. We disabled Mason's facility for adding genetic variants, since we simulate from already-individualized references. We classify an alignment as correct if its aligned position in the reference is within 10 nt of the true point of origin. If the aligner reports several alignments for a read, we consider only the primary alignment (of which there is exactly one per aligned read, usually with alignment score equal to or greater than all the others) when determining correctness. Alignment We tested FORGe with two read alignment strategies capable of including variants in the reference: HISAT2 [25] and the Enhanced Reference Genome (ERG) [43]. HISAT2 is a practical graph aligner that we hypothesized would benefit from careful selection of genetic variants to include. The ERG is simple and compatible with linear aligners like Bowtie. We use ERG only with short unpaired reads (25 nt) to test the hypothesis that the seed-finding step of an aligner can benefit from including FORGe-selected variants.
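Before returning to the ERG details, a rough sketch of the offline scoring-and-selection step described in the Strategy subsection above: the Pop Cov ranking can be approximated by scoring each variant with its alternate-allele frequency and keeping the top fraction. The snippet below is illustrative only: it assumes the frequency is exposed in the VCF INFO field as AF=..., and the actual FORGe models (including Hybrid and the blowup-avoiding variants) are more involved.

```python
import heapq

def pop_cov_scores(vcf_path):
    """Yield (allele_frequency, vcf_record_line) pairs; the frequency is taken
    from the INFO/AF field (first ALT allele only), defaulting to 0 if absent."""
    with open(vcf_path) as fh:
        for line in fh:
            if line.startswith('#'):
                continue
            info = line.rstrip('\n').split('\t')[7]
            af = 0.0
            for field in info.split(';'):
                if field.startswith('AF='):
                    af = float(field[3:].split(',')[0])
            yield af, line

def select_top_fraction(vcf_path, fraction=0.10):
    """Return the top-scoring `fraction` of variant records; these would then be
    written to a VCF and passed to an index builder such as hisat2-build."""
    scored = list(pop_cov_scores(vcf_path))
    k = int(len(scored) * fraction)
    return [line for _, line in heapq.nlargest(k, scored, key=lambda x: x[0])]
```

Varying the `fraction` argument corresponds to the 0-100% sweeps reported in the simulation results below.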
Adapting the ERG approach to paired-end alignment is probably not practical (see the "Discussion" section). In its offline stage, HISAT2 takes a linear reference genome and a VCF file with single-nucleotide variants and indels. HISAT2 uses GCSA indexing [49] to build a graph-genome index. The resulting graph is the generating graph for all combinations of reference (REF) and included alternate (ALT) alleles. HISAT2 also provides software that, starting from a VCF file (or the UCSC "Common SNPs" track, derived from dbSNP [46]), selects a subset of variants to include. It filters in two ways. First, it excludes variants with allele frequency under 10%. Second, where variants are densely packed, it imposes artificial haplotype constraints to avoid the exponential blowup that results from considering all combinations of REF and ALT alleles. We call this the HISAT2 auto method. We also tested FORGe with our implementation of the ERG [43]. ERG's offline phase starts with a linear reference genome and a variant file. It builds an augmented reference genome by adding enhanced segments: reference substrings that include ALTs and flanking context. The amount of context depends on a user-specified window size, r, which typically equals the maximum read length. When n variants co-occur in a window, 2^n − 1 enhanced segments are added to cover all combinations of ALT and REF alleles. The original ERG study limited growth by considering only the leftmost k variants per length-r window, with k = 5 in practice. We use a variation on this limit: if a window contains more than k variants, we consider (a) the leftmost variant, and (b) the k − 1 other variants with highest allele frequency according to the input VCF. Including the leftmost guarantees that each variant has its ALT included in at least one of the overlapping enhanced segments. We also set the limit higher (k = 15) by default. While k is configurable, we used the default in all experiments here. After adding enhanced segments to the reference, we indexed it with Bowtie [27]. In the online stage, we used Bowtie to align to the enhanced reference. Details on our ERG implementation are in Additional file 1: Note S3. In all experiments, we ran HISAT2 with the -k 10, --no-spliced-alignment, and --no-tempsplicesite options. In the ERG experiments, we ran Bowtie with the -v 1 option to allow alignments with up to one mismatch. Note that HISAT2 is able to find alignments with mismatches, insertions, or deletions, whereas Bowtie can only find alignments with mismatches. In all cases, we used Python's rusage module to measure peak resident memory usage and we used the Linux time utility to measure real ("wall clock") running times. All tools were run using a single thread. Variant models As detailed in the "Methods" section, FORGe has two main models for ranking and selecting variants to include in the reference. First is Population Coverage (Pop Cov), which scores variants according to allele frequency. Second is Hybrid, which weighs both a variant's allele frequency and the degree to which its addition would make the reference more repetitive. Additionally, we evaluated versions of these two models enhanced with a blowup avoidance strategy that, at variant adding time, dynamically down-weights candidates that are close to already-added variants. These versions are called Pop Cov+ and Hybrid+. All of these strategies are detailed in the "Methods" section. Chromosome 9 simulation We tested FORGe in a series of simulation experiments.
We used human chromosome 9 from the GRCh37 assembly [6]. GRCh37 was chosen to match the coordinates for the official 1000 Genomes Project Phase-3 variants [3]. We simulated sequencing reads from chromosome 9 of NA12878, a female from the CEPH (Utah residents with Northern and Western European ancestry) group studied in the 1000 Genomes Project. Specifically, we generated 10 million unpaired Illumina-like reads from each haplotype of NA12878 for a total of 20 million reads. Each read comes from one of the two haplotypes. We created a VCF file containing all single-nucleotide variants (SNVs) appearing in chromosome 9 in at least one 1000 Genomes individual, excluding NA12878 and family members. The resulting file contained 3.4 million SNVs. Details on how this set of SNVs was obtained are presented in Additional file 1: Note S4. We used the Pop Cov, Hybrid, Pop Cov+, and Hybrid+ models to score the 3.4 M SNVs. The Hybrid and Hybrid+ models used phasing information, whereas the Pop Cov and Pop Cov+ models did not (explained in the "Methods" section). We compiled subsets of SNVs consisting of the top-scoring 0%, 2%, 4%, 6%, 8%, 10%, 15%, and 20%, and then up to 100% in 10-point increments.

HISAT2

Figure 1 shows alignment rate and accuracy when using HISAT2 to align our simulated 100 nt reads to the genome indexes created with hisat2-build. The leftmost point (or, in the case of Fig. 1c, the point labeled 0%) corresponds to a HISAT2 index with no SNVs added, i.e., a linear reference genome. The diamond labeled Major Allele corresponds to a linear reference with all major alleles, i.e., with every SNV set to the allele that was most frequent among CEU individuals in the filtered callset. The diamond labeled HISAT2 auto corresponds to the pruned set obtained by running HISAT2's scripts. The diamond labeled Personalized shows results when aligning to a personalized NA12878 genome with all non-reference homozygous (HOM) alleles replaced by their ALT versions and all heterozygous (HET) SNVs added as variants, so that neither REF nor ALT is penalized at alignment time. This is not a realistic scenario, but it is helpful for assessing how close the tested methods come to the personalized-genome ideal. Plotted lines show results obtained when adding progressively larger subsets of SNVs to the graph genome, prioritized by model score.

Figure 1a, b shows alignment rate and the fraction of alignments that are correct (henceforth "correctness") as a function of the number of SNVs included in the genome. For all models except Hybrid+, peak alignment rate and correctness occur in the 8-12% range of SNVs included. All the FORGe models at their peak achieve higher alignment rate and correctness than the major-allele and HISAT2 auto methods. When greater fractions of variants are included (more than around 12%), alignment rate and correctness generally decrease. Correctness eventually decreases to a level only somewhat higher than that achieved by the linear reference, showing that alignment suffers when too many variants are included. Figure 1d, e is similar to Fig. 1a, b but shows alignment rate and correctness as a function of HISAT2's memory footprint at alignment time. While FORGe's models at their peak have a roughly 50% larger memory footprint than the linear references (both major-allele and reference-allele), they use roughly half the memory of the "HISAT2 auto" method. Figure 1c plots a point or a parametric curve for each indexing strategy and model.
The vertical axis is the fraction of reads (not alignments) that aligned correctly, and the horizontal axis is the fraction of reads that aligned incorrectly. Notable points on the curves are labeled with the fraction of SNVs included. Diamonds mark points on the curves with maximal y − x, where y is the fraction correct and x is the fraction incorrect. This is a combined measure of alignment rate and accuracy, and maximal values are reached in the 8-10% range of SNVs included (except for Hybrid+, which peaked at 30%). The best-performing graph genomes are superior to (above and to the left of) the linear-genome methods, the "HISAT2 auto" method, and the genome obtained by adding all of the SNVs (labeled 100%). The best-performing graph genomes also come much closer to the personalized-genome ideal than the other methods.

It is notable that the alignment rate curves in Fig. 1a, b, d, and e eventually trend downward. Like most read aligners, HISAT2 uses heuristics to limit the effort spent on reads that align to many repetitive regions of the reference genome. HISAT2 is unusual in that when a read has too many repetitive alignments, it will abort and leave the read unaligned. Bowtie does not have this heuristic; rather, Bowtie chooses one best-scoring alignment to report even when the read has many repetitive alignments. Because of this, HISAT2's alignment rate decreases as more variants are included and the genome becomes more repetitive.

Fig. 1 (caption): Results from the NA12878 simulation for GRCh37 Chromosome 9. 100 nt unpaired reads were simulated from Chromosome 9 with NA12878's variants included. FORGe and HISAT2 created and indexed augmented reference genomes with various variant sets. Besides the Pop Cov and Hybrid rankings, we also included a strategy that gave variants random ranks ("Random"). a and d show the fraction of reads aligned. b and e show the fraction that aligned correctly to the simulated point of origin. c plots a parametric curve of the fraction of reads with a correct alignment (vertical) versus the fraction with an incorrect alignment (horizontal). Lines follow measurements made over a range of fractions of SNVs, with points for 0%, 2%, 4%, 6%, 8%, 10%, 15%, and 20%, and then up to 100% in 10-point increments. The diamond labeled HISAT2 auto is an augmented genome produced using HISAT2's pruning scripts. The diamond labeled Major allele ref is a linear reference with all positions set to the most frequent allele. Other diamonds indicate the SNV fraction maximizing y − x, where y is the fraction of reads aligned correctly and x is the fraction aligned incorrectly. The HISAT2 auto and Major allele diamonds are excluded from panels a, b, and f because there is no clear way to measure the fraction of variants included by these methods. The black filled circle and square in panel c represent measurements when 0% and 100% of variants are included, respectively.

A known drawback of graph aligners is that accuracy and overhead can suffer when many variants co-occur in a small window of the genome. To measure the impact this has on FORGe's models, we also plotted results using blowup-avoiding versions of the Pop Cov and Hybrid models (Fig. 1, dotted lines), called Pop Cov+ and Hybrid+. These versions, when selecting variants to add, deprioritize variants that are near already-added variants. We observed that blowup avoidance had a minimal impact on the shape of the Pop Cov curve; e.g., Fig. 1d, e shows the solid and dotted lines for Pop Cov on top of each other.
Notably, blowup avoidance did cause the alignment memory footprint to increase more slowly with respect to the number of added variants for the Pop Cov ranking (Fig. 1f). For the Hybrid model, blowup avoidance did not change the relationship between memory footprint and number of variants added (Fig. 1f) and had an adverse effect on alignment rate and correctness. This is likely because the Hybrid model already takes clustered variants into account in its k-mer counts.

We repeated these experiments for paired-end reads (Additional file 1: Figure S3) and the results closely followed those in Fig. 1. Alignment rate and accuracy both increased when using paired-end reads, since an accurate alignment for one end can "rescue" the other in the presence of ambiguity. Peak accuracy (maximal y − x) was achieved at the same SNV fraction except in the case of the Hybrid ranking, which peaked at 15% rather than at 10%.

We also repeated these experiments for reads simulated from Yoruban (YRI) individual NA19238, also sequenced in the 1000 Genomes Project (Additional file 1: Figure S4). As we did for NA12878, we excluded variant calls for NA19238 and family members before providing variants to the model for scoring. These results also closely followed those in Fig. 1, with accuracy and recall peaking at a somewhat higher percentage of variants included (15% for YRI compared to 8-10% for CEU), likely due to YRI's greater divergence from the reference. We return to this in the "Discussion" section.

Finally, we repeated the unpaired NA12878 experiment including both SNVs and indels in the FORGe analysis (Additional file 1: Figure S5). Whereas previous experiments modeled and scored 3.4 M SNVs, here we modeled and scored 3.4 M SNVs and 131k indels, composed of 49k insertions ranging in length up to 411 nt and 82k deletions of up to 92 nt. Given these variant scores, we selected top-scoring fractions, built indexes, simulated reads from NA12878 (with both SNVs and indels included), and performed alignments as before. When assessing the correctness of the resulting read alignments, we took coordinate shifts due to indels into account. Overall, the results are similar to those in Fig. 1. While there is a slight drop in peak alignment and correctness rate, the rates varied over a wider range of percentages relative to the SNV-only experiment. Maximal y − x occurred at slightly higher variant fractions relative to the SNV-only experiment: 10% for Pop Cov and Pop Cov+, 15% for Hybrid, and 30% for Hybrid+.

Enhanced Reference Genome

Figure 2 shows alignment rate and correctness when using Bowtie [27] to align simulated 25 nt reads to enhanced references constructed with the ERG method [43]. We used shorter reads and configured Bowtie to find alignments with up to one mismatch (-v 1) to mimic the seed alignment step of seed-and-extend aligners. Unlike HISAT2, Bowtie always reports an alignment if one is found, regardless of how repetitively the read aligns. Consequently, the alignment rate shown in Fig. 2a and d strictly increases as variants are added to the graph. Apart from that, the results reinforce those from Fig. 1. Peak correctness occurs at a relatively small fraction of SNVs (6-20%). As more variants are added, correctness eventually decreases, though the Hybrid ranking does not suffer this drop until over 70% of SNVs are included.
The alignment-time memory footprint of the best-performing FORGe indexes is higher than that of the linear reference; e.g., including the top 6% of Pop Cov+-scored SNVs increases the footprint by 29%, from 127.9 MB to 165.0 MB. But it is a fraction of the size of the index when 100% of variants are included (1.87 GB). Blowup avoidance (Fig. 2, dotted lines) had a somewhat minor effect on alignment rate and correctness for Pop Cov, and a clear negative effect (Fig. 2f).

Figure 1c showed that when we move from 0 to 8% of variants included in the augmented reference, the number of correct alignments increases by about 0.4 percentage points (as a fraction of reads) and the number of incorrect alignments decreases by about 0.1 points. Though these may seem like small differences, in a study with 1.2 billion reads (approximately the number of 100 nt unpaired reads required to cover the human genome to 40-fold average depth), this would yield about 4.8 M more correctly aligned reads and 1.2 M fewer incorrectly aligned reads.

Stratification by variant density, variant rarity, and repetitiveness

Still, we hypothesized that certain read subsets might be affected more dramatically by the inclusion of variants. To this end, we measured alignment rate and correctness when we varied the number of alternate alleles overlapped by a read (Fig. 3a-c), whether the alternate allele was common or rare (Fig. 3d-f), and what kind of genomic region or repeat the read originated from (Fig. 3g-i). The measurements studied here are the same as those presented in Fig. 1, but filtered as described below.

Fig. 3 (caption): First row: results for simulated reads stratified by the number of SNV alternate alleles overlapped by the read; reads overlapping regions with a high DangerTrack [13] score, indicating regions that are difficult to align to, are omitted. Second row: results for simulated reads overlapping exactly one common alternate allele (and no other alternate alleles) and reads overlapping exactly one rare allele; reads overlapping high-DangerTrack regions are again omitted. Third row: results for simulated reads stratified by region of origin. Regions examined are regions labeled with the "Alu" family by RepeatMasker, regions captured by the Nextera exome sequencing protocol ("Exome"), and regions labeled with any repeat family by RepeatMasker ("Rep").

Figure 3a-c shows alignment rate and correctness stratified by the number of non-reference SNVs overlapped by a read. To obtain these subsets, we first removed reads originating from reference-genome regions deemed repetitive by DangerTrack [13] (score over 250). We did this after finding that these regions had a combination of low SNV density and repetitive content that caused the 0-SNV stratum to behave very differently from the others. Reads containing 1 or more SNVs undergo a rapid increase in alignment rate and correctness from 0 to 10% of SNVs included. Beyond 10%, all strata experience a slow decrease in alignment rate and correctness up to 100% of SNVs added. The 0-SNV stratum has decreasing alignment rate and correctness across the whole range, as expected, since the addition of variants cannot help these reads (they lack alternate alleles) but can harm their alignment by increasing the repetitiveness of the reference. Strata with more SNVs experience a more dramatic rising-and-falling pattern; for the 3-SNV stratum, alignment rate varies from about 80% to 98%.
While curves for the various strata have different shapes, all peak at a relatively low SNV fraction: 20% or lower.

Figure 3d-f shows alignment rate and correctness for reads containing a single rare SNV allele (1000 Genomes frequency < 0.5) versus reads containing a single common SNV allele (≥ 0.5). In both cases, we considered only reads with a single non-reference allele. Rare-SNV reads peak lower and at a higher SNV fraction than common-SNV reads for both alignment rate and correctness (Fig. 3d-f). This is expected, since the Pop Cov model prioritizes common over rare SNVs. In other words, by the time a rare variant is added, many common variants have already been added, making the genome more repetitive.

Figure 3h-j shows alignment rate and correctness for reads stratified by feature of origin. We analyzed reads originating from (a) RepeatMasker-annotated repetitive regions (http://www.repeatmasker.org), (b) RepeatMasker-annotated "Alu" repeats, (c) regions captured by the Nextera exome sequencing protocol, and (d) all reads. Reads from repetitive regions generally had lower alignment rate and correctness compared to all reads. As before, alignment rate and correctness curves peaked at low SNV fractions: 10% or lower. Reads from more repetitive features were more sensitive to the number of variants included in the reference, as evidenced by the vertical spans of the curves.

In a related experiment, we examined the graph genome's effect specifically on the hypervariable MHC region. We simulated reads from NA12878 Chromosome 6 and used HISAT2 to align to both a linear genome and a graph genome augmented with the top-scoring 10% of SNVs. We visualized the read-alignment pileup in the hypervariable MHC region using IGV [51] (Additional file 1: Figure S6). Qualitatively, the pileup for the augmented reference looks superior to the pileup for the linear reference, with more coverage in variant-dense regions and more even overall coverage.

Ethnicity specificity

We also studied how ethnicity-specific augmented references, advocated in other studies [2,33,47], can improve alignment. We used FORGe to select variants from two lists: one with variants drawn from and scored with respect to the overall 1000 Genomes Phase-3 callset, and another drawn from and scored for just the CEU individuals. In both cases, variants private to NA12878 and family members were excluded, and reads were simulated from NA12878. Figure 4 shows alignment rate and correctness when aligning to CEU-specific and pan-ethnic references. As expected, the CEU-specific reference yielded higher alignment rate and correctness. CEU-specific curves also peaked at lower numbers of SNVs compared to the pan-ethnic curves. However, the differences were only a few hundredths of a percentage point and cover only a small fraction of the remaining distance to the ideal point. Looking at this another way, if we extrapolate the results to a whole-genome DNA sequencing experiment with 40-fold average coverage, around 250,000 alignments would be affected. We return to these small differences in the "Discussion" section.

Whole human genome

Simulated reads

To show our methods generalize to whole genomes, we repeated experiments like those presented in Fig. 1 using the full GRCh37 reference. We gathered 80.0 million SNVs from the Phase-3 callset of the 1000 Genomes Project [3]. We used FORGe's Pop Cov+ model to score the SNVs and compiled subsets consisting of the top-scoring 2%, 4%, 6%, 8%, 10%, 15%, and 20%, and then up to 100% in 10-point increments.
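Each of these subsets then has to be turned into a HISAT2 graph index. The following is a hedged sketch of that step; the hisat2_extract_snps_haplotypes_VCF.py helper and the --snp/--haplotype options are as documented for recent HISAT2 releases, but script names and argument order should be verified against the installed version, and the file paths are placeholders.

```python
# Hedged sketch: convert one selected-variant subset (VCF) into HISAT2's
# .snp/.haplotype files and build a graph index from them.
import subprocess

def build_graph_index(reference_fa, subset_vcf, out_prefix):
    # Extraction script ships with HISAT2; check `--help` for your version.
    subprocess.run(
        ["hisat2_extract_snps_haplotypes_VCF.py",
         reference_fa, subset_vcf, out_prefix],
        check=True)
    # Build the graph index from the linear reference plus selected variants.
    subprocess.run(
        ["hisat2-build",
         "--snp", out_prefix + ".snp",
         "--haplotype", out_prefix + ".haplotype",
         reference_fa, out_prefix],
        check=True)
```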
We built graph-genome indexes for each subset using HISAT2. We used the Pop Cov+ model because the others required excessive time and/or memory; specifically, the Pop Cov model (without blowup avoidance) produced a set of variants that HISAT2 was unable to index within a practical time and space budget (Additional file 1: Note S5), and the Hybrid and Hybrid+ models required excessive time for the step that generates the FASTA file for G* due to exponential blowup (Additional file 1: Note S6).

Figure 5a, b plots HISAT2 alignment rate and correctness as a function of the SNV fraction. We aligned 20 million 100 nt unpaired reads simulated from NA12878. We omitted NA12878 and family members from variant selection. Results using the ideal personalized index are also shown for comparison. Maximal y − x, where y is the fraction of reads aligned correctly and x is the fraction aligned incorrectly, occurred at 10% of SNVs (Fig. 5c). Interestingly, the maximal point does not approach the personalized-genome ideal point as closely here as it did for the chromosome-9 experiment (Fig. 1). This seems to be due to the added ambiguity that comes when variants in all non-chromosome-9 portions of the genome are added (Additional file 1: Figure S7).

Fig. 4 (caption): Results from the chromosome-9 NA12878 simulation when using an ethnicity-specific ("CEU") versus a pan-ethnic ("All") augmented reference. Reads are 100 nt and unpaired, and the plots have axes similar to those of Fig. 1 panels a-c. Since we are assessing two ethnicities, each with a different total number of variants, the horizontal axes in panels a and b and the peak points in panel c are labeled with the absolute number of variants included rather than percentages.

Fig. 5 (caption): Results from aligning NA12878-simulated reads to the HISAT2 graph genome for the whole GRCh37 genome. Variants were selected using FORGe's Pop Cov+ model. Plots have the same axes as the plots in Fig. 1 panels a-c. The green diamond in panel c shows the result when aligning to a personalized graph genome with exactly the individual's variants.

Platinum reads, SNVs

We conducted further experiments using a set of 1.57 billion real 100 nt unpaired sequencing reads from the Platinum Genomes Project [14] (accession: ERR194147). Like the simulated reads, these also come from NA12878. We gathered a set of 80.0 million SNVs from the 1000 Genomes Phase-3 callset, omitting variants private to NA12878 and family members. We again used the Pop Cov+ model to select variants. We cannot assess correctness since the reads were not simulated. Following a prior study [34], we measured the number of reads that aligned uniquely (where HISAT2 reported exactly one alignment) versus the number that aligned perfectly (matching the reference exactly with no differences). The goal was to capture the variant-inclusion trade-off; we hypothesized that adding more variants would remove the alignment-score penalty associated with known genetic variants (increasing the number of perfect matches) without increasing reference ambiguity (decreasing the number of unique alignments). As shown in Fig. 6a, the points that achieved the peak number of unique plus perfect alignments corresponded to 30% of the SNVs. This fraction is higher than most of our simulated results, perhaps because unique-plus-perfect is an imperfect proxy for correct-minus-incorrect (Additional file 1: Figure S8).
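A minimal sketch (not the study's code) of the unique-versus-perfect tally just described: "unique" is approximated by reads carrying an NH:i:1 tag (exactly one reported alignment) and "perfect" by NM:i:0 with an all-match CIGAR. Whether the aligner emits these tags is an assumption that should be verified for the output being analyzed.

```python
# Count "unique" and "perfect" primary alignments in a BAM file.
import pysam

def unique_and_perfect(bam_path):
    unique = perfect = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for rec in bam:
            if rec.is_unmapped or rec.is_secondary or rec.is_supplementary:
                continue
            if rec.has_tag("NH") and rec.get_tag("NH") == 1:
                unique += 1                       # exactly one reported alignment
            no_edits = rec.has_tag("NM") and rec.get_tag("NM") == 0
            all_match = (rec.cigartuples is not None
                         and all(op == 0 for op, _ in rec.cigartuples))  # 0 = M
            if no_edits and all_match:
                perfect += 1                      # matches the reference exactly
    return unique, perfect
```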
Platinum reads, SNVs, and indels

To highlight the effect of including indels in the reference, we repeated the previous experiment using both SNVs and indels from the 1000 Genomes Phase-3 callset. Specifically, we gathered 83.1 million variants, both SNVs and indels, again omitting variants private to NA12878 and family members. We again used the Pop Cov+ model to select variants. We again plotted the number of reads that aligned uniquely versus the number that aligned perfectly (Fig. 6a). The graph genome built from both SNVs and indels achieved peak unique-plus-perfect at 30% of variants, like the graph built from SNVs alone. However, at every percentage it yields more unique and perfect alignments.

Reference bias

We measured how reference bias varies with the fraction of variants included. We analyzed the alignments of the ERR194147 reads to the whole human genome with both SNVs and indels included in the reference. Figure 6b shows a series of boxplots summarizing bias at a set of 2.07 million HET SNVs called in NA12878 by the Platinum Genomes Project [14]. The set of 2.07 M HETs was chosen by taking all HETs covered by at least 25 reads in all of our experiments. Each boxplot summarizes the fraction of REF alleles (REF/(REF + ALT)) at the HET sites across all 2.07 M HETs. As expected, bias decreased as more variants were included. The decrease plateaued at 10-20% of variants. Beyond 20%, including more variants did further reduce bias, but only slightly; from 20 to 70% of variants, the mean decreased by only 0.00011. This is consistent with previous results showing that most of the benefit is achieved at a small fraction of variants.

HLA typing accuracy

Finally, to measure FORGe's effect on downstream results, we measured how HLA typing recall varies with the fraction of variants included in the reference. We used the same NA12878/ERR194147 alignments evaluated in previous sections, extracted alignments in the MHC region, and then provided those alignments to the Kourami [28] HLA typing tool to make HLA calls. We repeated this with indexes for all the same variant-inclusion fractions evaluated previously. More details on the HLA typing methodology are described in Additional file 1: Note S7. In comparison with the linear genome, HLA typing recall and accuracy increased substantially when the highest-scoring 10% of SNVs were included in the augmented reference. Recall and average coverage plateaued at larger SNV fractions (Additional file 1: Figure S9). Overall, we see that, as we observed in other results, HLA allele recall benefits from the addition of a carefully chosen fraction of variants, and that a fraction of only 10% is sufficient to achieve peak recall.

Fig. 6 (caption): a Perfect/unique alignment results when aligning real reads. The blue curve is parametric, as a function of the fraction of variants included, from 0% (bottom left) to 80% (top). The green diamond marks the number of perfect and unique mappings for HISAT2's custom variant-pruning script applied to the set of SNVs and indels. Graph genomes were built for SNVs alone (red) and for SNVs and indels (blue), both ranked with the Pop Cov+ strategy. Blue and red diamonds mark the fractions that achieved the highest sum of unique and perfect alignments. b Allelic bias for the 2.07 M heterozygous SNVs that met a minimum coverage threshold of 25 in all experiments. Whiskers show the 5th-to-95th percentile range.

Methods

FORGe works in cooperation with a variant-aware read aligner such as HISAT2 [25] or the ERG [43]. The strategy has two stages.
In the offline stage, FORGe selects variants to include in the augmented reference based on a variant model (which predicts the pros and cons of including a variant) and a variant limit. The model and limit together constitute a variant inclusion strategy (VIS) that aims for a balance between accuracy and overhead. Once variants have been selected, the aligner software is used to create an index of the augmented reference. The second stage is an online stage where the read aligner aligns reads to the augmented reference using the index.

Offline stage

Inputs to the offline stage consist of (a) a reference genome, (b) variants in VCF format, (c) a VIS, and (d) a window size s. The variant inclusion strategy (VIS) consists of a variant model and a limit on the number or fraction of variants to include. The VIS is the user's most direct means for balancing blowup and alignment accuracy in the augmented reference. We now propose multiple variant models, each aiming to give higher scores to variants that will impart a greater net benefit when considering accuracy and blowup. The window size s is used in three separate places in the software (described below) and should typically be set to the maximum read length.

Variant models

Let G_ref denote the linear reference genome and G* the complete augmented genome including all variants in the population. Let G be a possible result of a VIS, i.e., an augmented genome that includes a subset of population variants. For simplicity, assume all variants are SNVs (substitutions). Let a localized s-mer ⟨s, l⟩ be a string of length s (the configurable window size) that matches some combination of alleles in an augmented genome G starting at offset l; we also call these simply ⟨s, l⟩-mers. For instance, if G is GATYACA, where Y can be either C or T, then ⟨GAT, 0⟩, ⟨TCA, 2⟩, and ⟨TTA, 2⟩ are all ⟨3, l⟩-mers of G. For an ⟨s, l⟩-mer σ, let p(σ) be the probability that a random ⟨s, l⟩-mer drawn from a random individual in the population equals σ. This can be calculated as

p(σ) = p_l(σ) · p_s(σ),

where p_l(σ) is the probability that a random s-mer begins at σ's offset, which we approximate by 1/|G_ref|, and p_s(σ) is the probability that a localized s-mer starting at l has alleles matching σ's. We approximate p_s(σ) by assuming independence and multiplying the frequencies of each allele, or, if phasing information is available, by using allele co-occurrence frequencies.

Population coverage

The population coverage C(G) of an augmented reference G is proportional to the population variation included, weighted by allele frequency. Specifically:

C(G) = Σ over ⟨s, l⟩-mers σ in G of p(σ).

We want to prioritize alleles according to how much they increase C(G). To do so accurately, each variant's effect on C(G) must be calculated according to which nearby variants (within s − 1 positions) are already in G. While this is possible, it requires much recalculation of scores as variants are added to G. It also means there is no way to produce a single, static list of per-variant model scores. For these reasons, we instead compute each variant's effect on C(G) assuming that all surrounding variants are already in G; in other words, we compute the decrease in C(G) caused by removing the variant from G*. We call this the complete graph assumption. Although FORGe is capable of using phasing data (describing which alleles co-occur on the same haplotype), the complete graph assumption makes this irrelevant for our calculation here. We do make (optional) use of phasing data in the Hybrid model, discussed below.
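As a worked illustration of the p(σ) calculation underlying C(G), consider the toy genome GATYACA from above under the unphased independence approximation. The allele frequencies used for Y here (0.7 for C, 0.3 for T) are invented purely for illustration.

```python
# Toy p(sigma) calculation for GATYACA with Y = C (freq 0.7) or T (freq 0.3).
# p_l is approximated by 1/|G_ref|; p_s is the product of allele frequencies.
def p_smer(allele_freqs, genome_length):
    p_l = 1.0 / genome_length           # uniform choice of starting offset
    p_s = 1.0
    for f in allele_freqs:              # product over alleles the s-mer spans
        p_s *= f
    return p_l * p_s

L = 7                                    # |G_ref| for GATYACA
print(p_smer([1.0, 1.0, 1.0], L))        # <GAT,0>: reference bases only -> ~0.143
print(p_smer([1.0, 0.7, 1.0], L))        # <TCA,2>: spans Y=C -> 0.1
print(p_smer([1.0, 0.3, 1.0], L))        # <TTA,2>: spans Y=T -> ~0.043
```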
Uniqueness

The uniqueness U(G) of a genome G decreases as the multiplicities of its k-mers increase, i.e., as the genome becomes more repetitive. Let f_G(s) be the number of ⟨s′, l′⟩-mers in genome G whose sequence s′ equals s. We define the uniqueness of the genome as

U(G) = Σ over ⟨s, l⟩-mers in G of 1 / f_G(s).

While we rely on this definition below, we do not expect uniqueness alone to be an effective variant model. This is because, for most variants, all the added (overlapping) ⟨s, l⟩-mers are unique. All such variants therefore receive an identical score. The hybrid measure, presented next, effectively breaks ties by also considering allele frequency.

Hybrid score

The hybrid score H(G) of a genome G considers both population coverage and uniqueness. Again let f_G(s) be the number of ⟨s′, l′⟩-mers in G with s′ = s, and let p(⟨s, l⟩) be the probability that a random ⟨s, l⟩-mer drawn from a random individual equals ⟨s, l⟩. We define the hybrid measure H(G) of an augmented reference G as

H(G) = Σ over ⟨s, l⟩-mers in G of p(⟨s, l⟩) / f_G(s).

Note that this is simply the dot product of the terms from the C(G) and U(G) sums. For a variant v, we wish to compute the increase in H(G) caused by adding v. For each ⟨s, l⟩-mer overlapping v and containing the alternate allele, let ⟨s, l⟩_1, ⟨s, l⟩_2, ..., ⟨s, l⟩_n be all other ⟨s, l⟩-mers with the same sequence s. Before adding v, the hybrid score can be written as

H(G) = C + (1/n) · Σ_{i=1..n} p(⟨s, l⟩_i),

where C is the hybrid-score portion due to the ⟨s′, l′⟩-mers with s′ ≠ s. After adding ⟨s, l⟩ to G, the score becomes

H(G) = C + (1/(n+1)) · [ p(⟨s, l⟩) + Σ_{i=1..n} p(⟨s, l⟩_i) ].

The change in hybrid score due to the addition of ⟨s, l⟩ is therefore

ΔH_{⟨s, l⟩} = p(⟨s, l⟩)/(n+1) − (1/(n·(n+1))) · Σ_{i=1..n} p(⟨s, l⟩_i).

Assuming each ⟨s, l⟩-mer overlapping variant v has a distinct sequence s, their ΔH_{⟨s, l⟩} terms are independent. Thus the total change in hybrid score due to the addition of v is the sum of the ΔH_{⟨s, l⟩} values for each ⟨s, l⟩-mer overlapping and including v.

There are a couple of caveats to how FORGe implements the hybrid model. First, as with the Pop Cov model, we make the complete graph assumption, allowing us to produce a scored variant list without dynamic re-scoring of variants as they are added. Second, computing ΔH_{⟨s, l⟩} for all variants is expensive, since it involves calculating the read probability for each other occurrence of sequence s for every overlapping ⟨s, l⟩-mer. Instead, we approximate it using average probabilities. Specifically, we pre-calculate p̄_ref, the average p(⟨s, l⟩) for all ⟨s, l⟩-mers in G_ref, and p̄_*, the average p(⟨s, l⟩) for all ⟨s, l⟩-mers in G* but not in G_ref. We approximate the summation with a weighted average:

Σ_{i=1..n} p(⟨s, l⟩_i) ≈ n_ref · p̄_ref + (n − n_ref) · p̄_*,

where n_ref of the n other occurrences lie in G_ref. Whereas the complete graph assumption rendered phasing data irrelevant to the Pop Cov model, we can use phasing data in the Hybrid model. This is because the Hybrid model weights the terms of the sum according to their frequency in the genome. By default, FORGe uses phasing information when it is available.

Hybrid score implementation

The Uniqueness and Hybrid models are concerned with s-mer counts both in the linear reference genome (G_ref) and in the complete augmented reference (G*). FORGe uses Jellyfish v2.2.6 [32] to calculate these counts. Since Jellyfish counts s-mers in a FASTA input file, FORGe must first construct an augmented FASTA such that ⟨s, l⟩-mers in this FASTA map one-to-one to ⟨s, l⟩-mers in G*. This is also the goal of the Enhanced Reference Genome [43] representation, which accomplishes this by adding 2^k − 1 "enhanced segments" for every length-s window containing k variants. Thus, to obtain s-mer counts for G*, we first constructed such a FASTA file using our implementation of the ERG and then counted s-mers using Jellyfish.
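A small sketch of the per-⟨s, l⟩-mer change in hybrid score, following the reconstruction given above (the exact expressions are an interpretation of the description in the text, not FORGe's code):

```python
# Per-s,l-mer contribution to the hybrid score change, plus the weighted-average
# approximation of the sum over other occurrences of the same sequence.
def delta_hybrid(p_new, p_others):
    """p_new: probability of the s,l-mer being added;
    p_others: probabilities of the n existing s,l-mers sharing its sequence."""
    n = len(p_others)
    if n == 0:
        return p_new                      # a unique s-mer contributes p_new / 1
    return p_new / (n + 1) - sum(p_others) / (n * (n + 1))

def delta_hybrid_approx(p_new, n_ref, n_alt, p_bar_ref, p_bar_alt):
    """Approximate sum(p_others) with n_ref * p_bar_ref + n_alt * p_bar_alt,
    where p_bar_ref / p_bar_alt are the pre-computed averages over s-mers in
    G_ref and over s-mers only in G*, respectively."""
    n = n_ref + n_alt
    if n == 0:
        return p_new
    approx_sum = n_ref * p_bar_ref + n_alt * p_bar_alt
    return p_new / (n + 1) - approx_sum / (n * (n + 1))
```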
Once s-mers have been counted, FORGe computes the average probabilities for s-mers in the linear reference (p̄_ref) and in the complete augmented reference (p̄_*), for use in the Hybrid model formula. Finally, we compute the change in H(G) for each s-mer, using the counts from both references, and update the Hybrid model scores for every variant with an alternate allele in that s-mer. After this, we have the full set of Hybrid model scores for all variants.

Considering blowup

Adding variants to the augmented reference increases computational costs, including (a) the size of the index on disk, (b) the memory footprint during read alignment, and (c) the time required for read alignment. We collectively refer to these as "blowup." Blowup is most drastic in genomic regions where variants are densely clustered, driving an exponential increase in the number of possible allelic combinations. A model based purely on minimizing blowup would prioritize isolated variants over those in clusters. We do not expect such a model to perform well on its own, though, since (like the Uniqueness model described above) it would fail to prioritize among the isolated variants. For this reason, we sought a way to combine a blowup avoidance strategy with the models already described above.

Selecting variants with blowup avoidance

After ranking variants, FORGe selects the subset of variants to include in the augmented reference. The user specifies either a number or a fraction of all variants to include. In the simplest case, variants are chosen in order, starting with the highest-scoring variant, until the desired number have been included. As an additional defense against blowup, we also propose a dynamic re-scoring scheme that can be added to an existing model. In this scheme, when a variant is added to the reference, FORGe searches for other variants within s nt (the window length) of the added variant that have not yet been selected for addition. These nearby variants are re-scored by multiplying their score by a penalty factor w, where 0 < w ≤ 1. By letting w be variable, FORGe can trade off between maximizing the model score and minimizing blowup. w = 1 maintains the original scores, whereas a penalty near w = 0 would ensure that all isolated variants were added before any neighboring variants. We found that a penalty of w = 0.5 performed well in practice, and this is FORGe's default, used in all experiments performed here. Pop Cov+ and Hybrid+ are how we refer to those models when they are combined with this dynamic re-scoring scheme.

Breaking ties

Variants can receive identical scores under these models. For instance, variants with the same allele frequency will receive the same score from the Pop Cov model. These ties are broken according to the variants' positions on the genome. We define variant A to be upstream of (and higher priority than) variant B if it is on a lower-numbered chromosome or if its offset is to the left of B's.

Discussion

FORGe's modeling of the positive and negative effects of including genetic variants in an augmented reference yields accuracy-blowup tradeoffs superior to current approaches. We proposed models for prioritizing variants with distinct rationales and strengths. We found repeatedly that the most advantageous set of variants consisted of a fraction (6-30%) of the variants called in the 1000 Genomes Project.
This was true across a variety of alignment scenarios: for two different graph alignment methods (HISAT2 and the ERG), for both unpaired and paired-end alignment modes, when just SNVs are included and when both SNVs and indels are included, for both a single human chromosome and the whole human genome, and for both CEPH and YRI individuals. We also showed that FORGe's modeling can substantially improve downstream results related to reference bias and HLA typing, also at relatively low variant-inclusion fractions.

To test whether FORGe's results yield a simple filtering rule, we can translate the peak-performing variant-inclusion fractions for the Pop Cov model into allele frequency thresholds. For the Chromosome-9 NA12878 experiments, the 8% variant-inclusion fraction performed best in both the unpaired (Fig. 1) and paired-end (Additional file 1: Figure S3) experiments. This translates to an allele frequency threshold of ≥ 7.42%. For the Chromosome-9 YRI experiment (Additional file 1: Figure S4), the best-performing fraction of 10% translated to an allele frequency threshold of ≥ 3.76%. While these fall on either side of the 5% threshold used in at least one prior study [31], further work is needed to establish whether that or any other simple threshold is justifiable in general. A finer-grained sweep over variant-inclusion fractions would yield a sharper threshold, for example. For now, our strategy of gradually introducing variants in the context of a simulation study is both principled and practical.

FORGe and HISAT2 combine to make a practical graph aligner that works with human data and large variant databases like the 1000 Genomes Phase-3 call set. Using hisat2-build to index a GRCh37-based graph genome with the top 8% of variants from the Phase-3 set required 4 h and 165 GB of memory. Aligning 20 million reads to this graph required 19 min and 6.5 GB of memory, about 50% more time and 50% more memory than HISAT2 requires to align to the linear GRCh37 genome. (To prioritize the variants prior to indexing, FORGe required about 110 min on a single processor.) This is competitive with the performance of aligners like Bowtie 2 and BWA-MEM when aligning to the linear reference, suggesting graph-based tools are ready for broader use.

Though we estimate that the overall improvement in alignment accuracy for a 40× whole-genome DNA sequencing experiment would lead to 4.8 M more correctly aligned reads and 1.2 M fewer incorrectly aligned reads, the magnitude of the improvement imparted by modeling variants depends on the genomic region. For some regions and variant classes (rare, isolated SNVs), the benefit is small. Improving alignment to these regions might require an iterative approach that aligns to a graph containing known variants, calls donor-specific variants, and then realigns to a graph that includes both. Strategies like this are implemented in the GATK HaplotypeCaller [10], GraphTyper [15], and other tools [35]. Better variant models might also benefit these hard cases. Even so, the effects we measured translate into substantial net increases in the number of correctly aligned reads, and the results are pronounced in regions such as the MHC, as shown in Additional file 1: Figures S6 and S9.

An ethnicity-specific reference conferred a slight accuracy improvement compared to a pan-ethnic reference with a similar number of variants. This is notable in light of proposals to use ethnicity-specific references [2,33].
It suggests that the advantages of an inclusive reference, applicable regardless of the donor individual's ethnicity, might outweigh the slight accuracy gain that comes with ethnicity specificity. Also, ethnicity-specific references could be counterproductive or misleading in cases where donor ethnicity is reported incorrectly or where the donor is admixed [19].

The accuracy achieved at relatively small fractions of the 1000 Genomes variants has implications for the design of graph aligners. A central challenge for these tools is to operate efficiently even when variants are densely clustered, causing a local explosion in the number of allelic combinations. But our observations that peak accuracy occurs at a relatively small fraction of variants, and that memory footprint increases by a factor of 2 or less at peak accuracy, suggest that this is not a major barrier to practical graph-genome alignment as long as variants are chosen carefully.

It should also be possible to adapt FORGe to study how including structural variants can improve alignment. A common observation of studies that have assembled human genomes from long reads is that the assemblies contain many megabases of sequence not present in the standard human reference [2,5,45,47]. The models we propose are equally applicable to structural variants, assuming the variants are called in enough individuals to estimate allele frequencies accurately.

While we primarily investigated unpaired alignment here, we also showed that the chromosome-9 results generalized to paired-end alignment (Additional file 1: Figure S3). Generally speaking, this work can be adapted to paired-end alignment, with the main issue being how to adjust the method's window lengths as a function of the paired-end dataset's read and fragment lengths. The windows in question are (a) the s-mer length used in the model, (b) the maximum window length used when forming enhanced segments for ERG-based alignment, and (c) for blowup avoidance in the Pop Cov+ and Hybrid+ models, the radius to look within when seeking nearby variants to deprioritize. While one option is to simply increase the window size to the maximum fragment length, that can easily lead to an unacceptable blowup penalty. For this reason, we suspect that there is no practical way to adapt our ERG-based approach to paired-end alignment; rather, as we explained, we think it is best viewed as a model for the seed-finding step of a seed-and-extend aligner that might itself handle paired ends. Initial experiments suggest that it is practical to leave the window lengths relatively short (the length of a read rather than a fragment) when using HISAT2 for paired-end alignment (Additional file 1: Figure S3). Further exploration is needed to more fully characterize the relationship between FORGe window length and fragment and read lengths for paired-end reads.

While we found that including FORGe-selected variants improved results for a downstream HLA-typing method, we also found that the method failed to achieve perfect accuracy based only on HISAT2+FORGe's alignments. This was likely because one of NA12878's HLA alleles (DQB1*02:01) was so different from the reference (even after including the FORGe-selected variants) that its identifying reads failed to align (see Additional file 1: Note S7).
This highlights the continued importance of other alignment methods that use alternate assemblies such as the GRCh38 ALT loci [7] or that can align to graphs that include larger-scale variation [18], in addition to the relatively small variants studied here and in the 1000 Genomes Project.
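As a closing illustration of the reference-bias measurement reported in the Results (REF/(REF + ALT) at HET SNVs with at least 25-fold coverage), here is a rough pysam-based sketch. It is not the pipeline used in the study, and the read-level filtering (mapping quality, base quality) that a real analysis would apply is omitted.

```python
# Allelic-bias sketch: fraction of REF-supporting bases at known HET SNVs.
import pysam

def ref_fractions(bam_path, het_sites, min_cov=25):
    """het_sites: iterable of (chrom, 0-based pos, ref_base, alt_base)."""
    fractions = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for chrom, pos, ref, alt in het_sites:
            counts = {ref: 0, alt: 0}
            for col in bam.pileup(chrom, pos, pos + 1, truncate=True):
                if col.reference_pos != pos:
                    continue
                for pread in col.pileups:
                    if pread.is_del or pread.is_refskip:
                        continue
                    base = pread.alignment.query_sequence[pread.query_position]
                    if base in counts:
                        counts[base] += 1
            total = counts[ref] + counts[alt]
            if total >= min_cov:
                fractions.append(counts[ref] / total)   # 0.5 = no bias
    return fractions
```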
Qualification and application of liquid chromatography-quadrupole time-of-flight mass spectrometric method for the determination of carisbamate in rat plasma and prediction of its human pharmacokinetics using physiologically based pharmacokinetic modeling

Carisbamate is an antiepileptic drug that also has broad neuroprotective and anticonvulsant activity. In this study, a liquid chromatography-quadrupole time-of-flight mass spectrometric (LC-qTOF-MS) method was developed and applied for the determination of carisbamate in rat plasma to support in vitro and in vivo studies. A quadratic regression (weighted 1/concentration^2), with the equation y = ax^2 + bx + c, was used to fit calibration curves over the concentration range from 9.05 to 6,600 ng/mL for carisbamate in rat plasma. Preclinical in vitro and in vivo studies of carisbamate were conducted using the developed bioanalytical method. Based on these study results, the human pharmacokinetic (PK) profile was predicted using physiologically based pharmacokinetic (PBPK) modeling. The PBPK model was optimized and validated by using the in vitro and in vivo data. The human PK of carisbamate after oral dosing of 750 mg was simulated by using this validated PBPK model. The human PK parameters and profiles predicted from the validated PBPK model were similar to the clinical data. This PBPK model developed from the preclinical data for carisbamate would be useful for predicting the PK of carisbamate in various clinical settings.

INTRODUCTION

Carisbamate (S-2-O-carbamoyl-1-o-chlorophenyl-ethanol) is an investigational neuromodulatory agent, initially developed by SK Biopharmaceuticals (Seongnam, Korea) for antiepileptic treatment [1][2][3][4][5][6]. Although the exact mechanism of action of carisbamate is not known, carisbamate has been studied for the treatment of epilepsy, essential tremor, and migraine [4,5]. In phase I clinical trials in healthy subjects, carisbamate showed linear pharmacokinetics at doses of 100 to 1,500 mg, high oral bioavailability (F) of > 95%, and a low oral clearance (CL/F) of 3.4-4.2 L/h, equaling < 5% of liver blood flow [1,2,7]. The drug also showed efficacy and tolerability in a phase II clinical trial in epilepsy patients, and three phase III clinical trials have been completed for the treatment of partial seizures [8][9][10]. Among them, only one phase III (randomized double-blind) clinical trial showed significant efficacy in adults with refractory partial seizures receiving a dose of 400 mg/day [11,12]. In 2008, carisbamate was provisionally approved by the Food and Drug Administration for adjunctive treatment of patients aged 16 years or over with partial-onset seizures (brand name: Comfyde), but it failed to receive marketing approval in 2009. Carisbamate is currently undergoing clinical trials by SK Biopharmaceuticals for Lennox-Gastaut syndrome as an indication.

Recently, physiologically based pharmacokinetic (PBPK) modeling has been increasingly used as a powerful tool to predict clinical pharmacokinetic (PK) parameters and profiles by using preclinical Absorption, Distribution, Metabolism, and Excretion (ADME)/PK data [13][14][15][16][17][18][19][20]. A PBPK model is a complex mathematical model that reflects system-specific physiological properties and a drug's physicochemical properties [21]. PBPK models have been applied in the pharmaceutical industry to predict the ADME/PK properties of drugs by using in vitro and in vivo properties as input data [22][23][24].
In this study, a simple and robust liquid chromatography-quadrupole time-of-flight mass spectrometric (LC-qTOF-MS) method for carisbamate was developed and successfully applied to in vitro and in vivo PK studies in rats. Based on these results, a PBPK model was built, optimized, and validated for predicting the PK of carisbamate in humans. After establishing the PBPK model, the human PK of carisbamate was predicted and compared with the reference clinical PK data associated with food effect PK studies [7]. To the authors' best knowledge, this is the first approach to human PK prediction of carisbamate through a PBPK model. The information in this study will help in designing various clinical studies of carisbamate.

Chemicals and reagents

Carisbamate was purchased from Toronto Research Chemical (Ontario, Canada). Verapamil was purchased from Sigma-Aldrich (St. Louis, MO, USA). Formic acid was purchased from Daejung Chemical (Siheung, Korea). Male Sprague-Dawley (SD) rat and human liver microsomes were purchased from Corning Incorporated (Corning, NY, USA). Acetonitrile (ACN) of HPLC grade was purchased from Honeywell Burdick & Jackson (Ulsan, Korea). Distilled water (DW) of HPLC grade was purchased from Samchun Chemical (Pyeongtaek, Korea). All other chemical reagents were of analytical grade and purchased from commercial sources. The stock solution (1 mg/mL) of the internal standard (ISTD, verapamil) was prepared in DMSO and stored at −20°C until use. The ISTD was diluted in ACN to a final spiking concentration of 20 ng/mL prior to sample preparation.

Sample preparation

Forty microliters of blank rat plasma was placed in cluster tubes, and 4 µL of the standard or QC solutions was added to each cluster tube, whereas 4 µL of make-up solution (DMSO) was added to each study sample to match matrix conditions. Then, 150 µL of ACN containing the ISTD was added to the standard, QC, and study samples for protein precipitation. Samples were centrifuged at 10,000 rpm for 5 minutes, and the supernatants were then evaporated to dryness under vacuum in a Savant SpeedVac™ (Thermo Scientific, Rockford, IL, USA). The dried residues were reconstituted with ACN/DW (1:1), vortexed, and then centrifuged at 10,000 rpm for 5 minutes. The resulting supernatants were transferred to liquid chromatography (LC) vials for LC-qTOF-MS analysis.

LC-qTOF-MS conditions

The liquid chromatography-high resolution mass spectrometric system consisted of a Shimadzu CBM-20A HPLC pump controller (Shimadzu Corporation, Columbia, MD, USA), two Shimadzu LC-20AD pumps, a CTC HTS PAL autosampler (LEAP Technologies, Carrboro, NC, USA), and a quadrupole time-of-flight (qTOF) TripleTOF™ 5600 mass spectrometer (Sciex, Foster City, CA, USA). The analytical column was a C18 column, 2.1 × 50 mm (Phenomenex, Torrance, CA, USA). The mobile phase consisted of: mobile phase A, distilled and deionized water containing 0.1% formic acid; and mobile phase B, acetonitrile containing 0.1% formic acid. The gradient was as follows: from 0 to 0.5 minutes, 10% B; from 0.5 to 0.9 minutes, a linear gradient from 10% B to 95% B; 95% B was maintained for 0.6 minutes; from 1.5 to 1.6 minutes, a linear gradient from 95% B to 10% B; and then 10% B was maintained for 1.4 minutes for column re-equilibration. The gradient was delivered at a flow rate of 0.4 mL/min and the injection volume was 10 µL. The TOF-MS scan mass spectra and the product ion scan mass spectra were recorded in the positive ion mode.
The scan range was m/z 100-600 for both the TOF-MS scan and the product ion scan. For quantification, the [M + H]+ ions of carisbamate and the ISTD (m/z 216.0 and 455.3, respectively) were selected, and their product ions at m/z 155.0 and 165.1, respectively, were used for quantitative analysis. The source temperature was set at 500°C with a curtain gas flow of 33 L/min. The ion spray voltage was set at 5,500 V. For carisbamate and the ISTD, the declustering potentials were 80 and 125 V, and the collision energies were 17 and 30 V, respectively.

Method qualification

The method qualification was performed with a 'fit-for-purpose' approach. The qualification run contained duplicate standards at seven concentrations and QCs at three concentrations. The acceptance criteria for standards and QCs in the qualification run were precision and accuracy within ± 25%. A quadratic regression with the equation y = ax^2 + bx + c was used to fit calibration curves over the concentration range for carisbamate. Accuracy and precision were calculated at each QC concentration level. The extraction efficiency of carisbamate was also assessed using the same QC samples. The ratio of the mean concentration of pre-extraction spiked plasma to that of blank plasma extracts spiked after extraction at the same concentration was used to calculate the extraction efficiency. Preliminary stability assessments were performed to evaluate different stability conditions: stock solution, short-term, long-term, and freeze-thaw, using low, medium, and high QC samples. For the stock solution stability assessment, the prepared stock solution was stored at −20°C for 28 days. The short-term stability samples were stored at room temperature for 6 hours, and the long-term stability samples were kept frozen at −20°C for 28 days. For the freeze-thaw stability assessment, the samples were subjected to three freeze-thaw cycles at −20°C. The acceptance criteria for all stability tests were precision and accuracy within ± 25%.

Plasma protein binding

The plasma protein binding assessment was carried out by an equilibrium dialysis method in pooled SD rat and human plasma, respectively. The Thermo Scientific™ Rapid Equilibrium Dialysis device system (Thermo Scientific) was used for the equilibrium dialysis. For the equilibrium dialysis, 300 µL of plasma containing carisbamate (1 μg/mL) was spiked into one insert and 500 µL of phosphate-buffered saline (PBS) was spiked into the other insert, followed by equilibrium dialysis for 4 hours at 37°C. After 4 hours, 100 µL of the plasma samples was transferred to cluster tubes, and the same volume of PBS was added. In the same way, 100 µL of the PBS samples was transferred to cluster tubes, and the same volume of blank plasma was added to match matrix conditions. Then, all samples were subjected to protein precipitation followed by LC-qTOF-MS analysis. The unbound fraction of carisbamate in plasma (Fup) was obtained by the following formula:

Fup (%) = (Ct / Cu) × 100

In the above formula, Cu is the post-dialysis drug concentration in plasma, and Ct is the post-dialysis drug concentration in PBS, which represents the final unbound concentration.
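A toy illustration of the unbound-fraction calculation reconstructed above (the concentration values are hypothetical, not data from this study); because the plasma and PBS samples are matrix-matched by the same 1:1 addition step, that dilution cancels out of the ratio.

```python
# Hypothetical numbers only; shows the Fup(%) = Ct/Cu * 100 calculation.
def fraction_unbound(c_buffer, c_plasma):
    """c_buffer (Ct): post-dialysis concentration in PBS (unbound);
    c_plasma (Cu): post-dialysis concentration in plasma (total)."""
    return 100.0 * c_buffer / c_plasma

print(fraction_unbound(c_buffer=240.0, c_plasma=500.0))   # -> 48.0 (%), hypothetical
```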
Microsomal metabolic stability

The microsomal metabolic stability in rat and human liver microsomes was investigated under the following final incubation conditions: carisbamate (0.2 μg/mL) and rat or human liver microsomes (0.5 mg/mL) with a β-nicotinamide adenine dinucleotide phosphate (NADPH) regenerating system solution (solution A comprises NADP+ and glucose-6-phosphate [Glc-6-PO4], while solution B comprises glucose-6-phosphate dehydrogenase [G6PDH]). First, a 5-minute pre-incubation (37°C) was conducted by adding the cofactor solutions (NADPH regenerating system solutions A and B) to the liver microsomes. After pre-incubation, carisbamate was added to the liver microsome suspension. Incubations were quenched with ACN containing the ISTD at 0, 30, 60, and 120 minutes. Then, all samples were centrifuged and each supernatant was analyzed by the LC-qTOF-MS method.

The in vitro intrinsic clearance, CLint,in vitro (mL/min/mg), was calculated using the slope (k) of the log-linear regression of the remaining percentage (the ratio of sample peak area to ISTD peak area) versus time:

CLint,in vitro = k / Cprotein

where Cprotein is the microsomal protein concentration in the incubation (0.5 mg/mL). The in vivo intrinsic clearance, CLint (mL/min/kg), was calculated by scaling the in vitro values to the in vivo ones for rats and humans using the following equation:

CLint = CLint,in vitro × (mg microsomal protein / g liver) × (g liver / kg body weight)

where the liver microsomal protein concentrations were 44.8 and 48.8 mg microsomal protein/g liver in rats and humans, respectively, and the liver concentrations were 40 and 25.7 g liver/kg body weight in rats and humans, respectively [25]. The hepatic clearance (CLH) extrapolated from CLint using the "well-stirred" model is expressed as shown in the equation below [26,27]:

CLH = (Qh × CLint) / (Qh + CLint)

where the Qh values, the hepatic blood flow, were 55.2 and 20.7 mL/min/kg in rats and humans, respectively. In the "well-stirred" model, it is assumed that the liver is a well-stirred compartment and that distribution equilibrium is achieved so quickly that the drug in the blood and the unbound drug in the liver are in equilibrium.

Application for a PK study in rat

All animal studies were performed in accordance with the "Guidelines in Use of Animal" established by the Chungnam National University Institutional Animal Care and Use Committee (Daejeon, Korea). This study was approved by the Chungnam National University Institutional Animal Care and Use Committee (No. CNU-01104). PK studies were conducted in SD rats (300 ± 10 g). All rats were fasted for 12 hours prior to carisbamate administration. Carisbamate was then administered to rats via intravenous bolus (IV) injection (1 and 2 mg/kg) or oral (PO) dosing (1 and 5 mg/kg). The blood sampling times were 0, 2, 5, 10, 30, 60, 90, 120, 240, 360, and 1,440 minutes for the IV PK study and 0, 5, 15, 30, 60, 90, 120, 240, 360, and 1,440 minutes for the PO PK study; blood was collected in tubes containing sodium heparin as an anticoagulant. Blood samples were centrifuged at 10,000 rpm for 5 minutes, and the supernatant plasma samples were stored at −20°C until analysis. PK parameter estimates from the IV or PO plasma concentration-time data were obtained using WinNonlin® version 8.0.0 (Certara, Princeton, NJ, USA). The nominal dose administered to each group was used to calculate the PK parameters. PK parameters were calculated by noncompartmental analysis (NCA).

Prediction of PK profiles using the PBPK model

The GastroPlus™ (version 9.7; Simulations Plus, Inc., Lancaster, CA, USA) PBPK model was used to predict the plasma concentration-time profiles in rats and humans.
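Before moving to the PBPK predictions, a short numeric check of the clearance scaling described above is given below, plugging in the rat constants quoted in the text. The slope-to-CLint,in vitro step (division by the 0.5 mg/mL incubation protein concentration) is a standard assumption rather than a detail taken from this paper, and the small differences from the reported 1.70 and 1.65 mL/min/kg come from using the rounded 0.0009 value.

```python
# Clearance scaling check using the constants quoted in the text.
def clint_in_vitro(k, protein_conc=0.5):
    """k: first-order depletion rate constant (1/min) from the log-linear fit;
    protein_conc: microsomal protein in the incubation (mg/mL). Returns mL/min/mg."""
    return k / protein_conc

def scale_clint(clint_vitro, mg_protein_per_g_liver, g_liver_per_kg_bw):
    """Scale in vitro intrinsic clearance (mL/min/mg) to in vivo (mL/min/kg)."""
    return clint_vitro * mg_protein_per_g_liver * g_liver_per_kg_bw

def well_stirred_clh(clint, q_h):
    """Well-stirred hepatic clearance (mL/min/kg); the form without a
    protein-binding term reproduces the values reported in the Results."""
    return q_h * clint / (q_h + clint)

clint_rat = scale_clint(0.0009, 44.8, 40)             # ~1.61 mL/min/kg (reported: 1.70)
print(clint_rat, well_stirred_clh(clint_rat, 55.2))   # CLH ~1.57 mL/min/kg (reported: 1.65)
```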
In this study, the PBPK model was adopted as a perfusion-limited tissue model, in which the kinetics of drug distribution to tissue are determined by the Kp values (tissue-to-plasma partition coefficients). In vitro data (unbound fraction in plasma and in vitro intrinsic clearance) were used as input parameters to build the PBPK model. The GastroPlus™ ADMET Predictor module was used for the prediction of physicochemical and ADME properties such as pKa, logP, permeability, solubility, and blood/plasma ratio based on the structure of carisbamate. The input parameters for the PBPK simulation are summarized in Table 1. In addition to the in vitro data, PK parameters obtained after IV administration at a dose of 1 mg/kg in rats were used to optimize the PBPK model. The GastroPlus™ optimization module was used to optimize the PBPK model. The PBPK model was then validated using other in vivo PK data in rats and scaled to a PBPK model for human physiology. Then, the human PK of carisbamate was simulated by the population simulator (n = 12) using the scaled PBPK model, and the predicted PK results were compared with the observed baseline clinical data associated with food effect PK studies [7].

Method qualification

The calibration curves were fitted by weighted quadratic regression (1/concentration^2). The mean correlation coefficient (r) value was > 0.99 for calibration curves over the range from 9.05 to 6,600 ng/mL. Fig. 1 shows the calibration curve of carisbamate. Representative chromatograms of carisbamate (lower limit of quantification [LLOQ], 9.05 ng/mL) and verapamil (ISTD) samples are also shown in Fig. 2. All the results of the preliminary stability studies are shown in Table 3. The results indicated that carisbamate in rat plasma was stable for 6 hours at room temperature, which is sufficient for the sample preparation process, stable for 28 days at −20°C, and stable over three freeze-thaw cycles at −20°C. Also, carisbamate in stock solution was stable for 28 days at −20°C.

Generally, LC-MS/MS methods have been applied to quantify small-molecule drugs in bioanalytical samples because of their sensitivity. However, owing to recent developments in MS technology, LC-qTOF-MS also provides sufficient sensitivity for bioanalytical samples, and many such applications have been reported [28][29][30][31]. In addition, high-resolution full-scan analysis using LC-qTOF-MS can also give a better picture of the metabolite profile of the target compound. Therefore, the LC-qTOF-MS method was suitable for development and qualification for the bioanalysis of carisbamate in rat plasma.

In vitro experiments

As results of the plasma protein binding test for carisbamate, the Fup values were 47.55% and 51.29% in rat and human plasma, respectively. The results of the microsomal metabolic stability assays in rat and human liver microsomes are shown in Table 4. The CLint,in vitro values were 0.0009 and 0.0006 mL/min/mg microsomal protein in rat and human liver microsomes, respectively. The scaled CLint values were 1.70 and 0.72 mL/min/kg, and the extrapolated hepatic clearance values were 1.65 and 0.70 mL/min/kg in rat and human, respectively. These Fup and CLint,in vitro values were used as input parameters for each species in GastroPlus™ (Table 1).

Application for a PK study in rat

The qualified LC-qTOF-MS method was successfully applied to a pharmacokinetic study of carisbamate in rats. Fig.
Fig. 3 shows the pharmacokinetic profiles of carisbamate after IV and PO administration in rats. PK parameters were calculated by NCA using Phoenix WinNonlin ® and are shown in Table 5. The results show that the maximum plasma concentration (C max ) and area under the curve (AUC) were dose proportional, while clearance (CL) and volume of distribution (Vd) were dose-independent for both IV and PO administration over the dose range of 1 to 5 mg/kg, so carisbamate has linear PK in this dose range. The average value of CL was about 3.98 mL/min/kg, indicating that the drug is metabolically very stable in vivo. The average bioavailability (BA) was about 100%, so there is little concern about a first-pass effect (FPE) or incomplete absorption for carisbamate in rats. Prediction of PK profiles using the PBPK model The predicted and observed concentration-time profiles of carisbamate after a single IV administration at a dose of 1 mg/kg in rats are presented in Fig. 4. The first simulation was conducted using only the input parameters in Table 1, and it over-estimated the concentrations at the initial time points and under-estimated them at the later time points compared to the observed in vivo data (Fig. 4A). This discrepancy between the predicted and the measured PK profiles was attributed to the difference between the predicted and the measured Vd values. The liver and kidney are generally considered to be the main organs that determine the disposition of small-molecule drugs [19,20]. In the case of carisbamate, previous studies showed little metabolic disposition in the kidney [24,32,33]. Therefore, we assumed that carisbamate is primarily distributed to and metabolized in the liver. Based on this information, we optimized the liver Kp value to correct the difference between the predicted and the observed Vd values, using the GastroPlus™ optimization module. The PK profiles predicted by the PBPK model optimized for the Vd values were more similar to the observed PK profiles (Fig. 4B). The optimized PBPK model was then validated with other in vivo PK data, and the results are shown in Fig. 5. The model fitted well when the predicted PK profiles were compared with the observed PK profiles at a different dose (2 mg/kg) or a different route of administration (oral administration). The PBPK model validated against the observed rat data was then scaled to humans. Human PK after oral administration of a 750 mg dose under fasted and fed conditions was simulated with the population simulator (n = 12) using the scaled human PBPK model and compared to reference clinical data. The results are shown in Fig. 6 and Table 6 (predicted versus observed profiles under fasted and fed conditions, each with n = 12; the BE limit denotes the 80%-125% bioequivalence range of the observed concentrations, the 90% CI denotes the confidence interval of the predicted mean concentrations, and the prediction fold errors are also included). The prediction fold errors were within 2-fold between the predicted and the reference clinical data in both fasted and fed conditions, suggesting that the predicted human PK profile and parameters are acceptable from the industrial perspective [16,19,20,34].
PK, pharmacokinetic; AUC last , area under the curve from 0 to last measurable time; C max , maximum plasma concentration; T max , time to maximum plasma concentration; T 1/2 , half-life.
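The noncompartmental quantities and the acceptance metric discussed above can be sketched as follows. This is a minimal illustration rather than the WinNonlin/GastroPlus workflow used in the study: the concentration-time values are hypothetical, units simply follow the inputs, and only the headline parameters (AUC, terminal half-life, CL, absolute bioavailability, and prediction fold error) are computed.

```python
import numpy as np

def auc_last(t, c):
    """AUC from time zero to the last measurable point (linear trapezoidal rule)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

def terminal_half_life(t, c, n_points=3):
    """Log-linear regression over the last n_points gives lambda_z; t1/2 = ln2 / lambda_z."""
    slope = np.polyfit(t[-n_points:], np.log(c[-n_points:]), 1)[0]
    return np.log(2) / -slope

def clearance_iv(dose, auc):
    """CL = Dose / AUC after an intravenous bolus (units follow the inputs)."""
    return dose / auc

def bioavailability(dose_iv, auc_iv, dose_po, auc_po):
    """Absolute bioavailability F from dose-normalised AUCs."""
    return (auc_po / dose_po) / (auc_iv / dose_iv)

def fold_error(predicted, observed):
    """Prediction fold error; a value within 2 is the usual PBPK acceptance criterion."""
    return max(predicted / observed, observed / predicted)

if __name__ == "__main__":
    # Hypothetical IV and PO profiles at 1 mg/kg, time in minutes
    t_iv = [2, 5, 10, 30, 60, 120, 240, 360, 1440]
    c_iv = [4.0, 3.8, 3.6, 3.2, 2.8, 2.2, 1.4, 0.9, 0.05]
    t_po = [5, 15, 30, 60, 120, 240, 360, 1440]
    c_po = [1.5, 2.8, 3.3, 3.1, 2.4, 1.5, 1.0, 0.05]
    auc_iv, auc_po = auc_last(t_iv, c_iv), auc_last(t_po, c_po)
    print("t1/2 (min):", round(terminal_half_life(t_iv, c_iv), 1))
    print("CL:", round(clearance_iv(1.0, auc_iv), 5), "F:", round(bioavailability(1.0, auc_iv, 1.0, auc_po), 2))
    print("fold error:", round(fold_error(predicted=0.9, observed=1.0), 2))
```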
2020-10-10T07:51:02.533Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "26890ca5991baa56c1b46e816ffc16ccbf32d3f8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12793/tcp.2020.28.e15", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "26890ca5991baa56c1b46e816ffc16ccbf32d3f8", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
207937135
pes2o/s2orc
v3-fos-license
Iron overload resulting from the chronic oral administration of ferric citrate induces parkinsonism phenotypes in middle-aged mice Iron homeostasis is critical for maintaining normal brain physiological functions, and its mis-regulation can cause neurotoxicity and play a part in the development of many neurodegenerative disorders. The high incidence of iron deficiency makes iron supplementation a trend, and ferric citrate is a commonly used iron supplement. In this study, we found that the chronic oral administration of ferric citrate (2.5 mg/day and 10 mg/day) for 16 weeks selectively induced iron accumulation in the corpus striatum (CPu), substantia nigra (SN) and hippocampus, which typically caused parkinsonism phenotypes in middle-aged mice. Histopathological analysis showed that apoptosis- and oxidative stress-mediated neurodegeneration and dopaminergic neuronal loss occurred in the brain, and behavioral tests showed that defects in the locomotor and cognitive functions of these mice developed. Our research provides a new perspective for ferric citrate as a food additive or in clinical applications and suggests a new potential approach to develop animal models for Parkinson’s disease (PD). INTRODUCTION The challenges presented by neurodegenerative diseases (NDs) in an aging population make research into the pathogenesis of these diseases urgently needed [1]. Brain iron abnormalities have been implicated in various NDs, including Alzheimer's disease (AD), Huntington's disease (HD), amyotrophic lateral sclerosis (ALS), multiple sclerosis (MS) and especially in Parkinson's disease (PD) [2,3]. With postmortem, MRI and transcranial ultrasound, the excessive iron deposition is consistently demonstrated in the substantia nigra and basal ganglia of the brain in PD patients, and a 25% to 100% increase of the iron levels in substantia nigra is present according to the quantitative data [4,5]. Iron plays important roles in multiple biochemical processes by facilitating two-way electron transport, and it functions as a critical cofactor of many proteins involved in cellular proliferation, differentiation, and apoptosis [6,7]. Given that the metabolic activity of brain is high and the iron functions as an enzymatic cofactor in myelinogenesis, the concentration of iron in Iron is taken up through the blood-brain barrier (BBB) in the brain, from the basolateral membrane of endothelial cells to the cerebral compartment. The present evidence suggests that the transferrin/transferrin receptor/divalent metal transporter 1 (Tf/TfR/DMT1) pathway is the major pathway for iron transport across the BBB, which includes the processes of binding, endocytosis, acidification, dissociation and translocation [15,16]. On the other hand, brain iron release is dependent on the only iron exporter currently identified, ferroportin-1 (Fpn1), which releases iron into circulation to be loaded onto Tf by collaborating with ceruloplasmin or ferroxidase [17,18]. Although more than two-thirds of the total amount of iron needed in the body is from the degradation of senescent red blood cells and the rest comes from the diet [19], according to the WHO, iron deficiency is the most common nutritional disorder in the world, especially in developing countries [20,21]. In addition, iron deficiency is a multifactorial condition in which the incidence increases with age in adulthood, and a substantially higher prevalence is present in middleaged and elderly populations than in young populations [22,23]. 
Thus, rational iron supplementation is important to maintain iron homeostasis in the body and, of course, in the brain. Many different types of iron supplements are available on the market, including ferrous and ferric iron salts, such as ferrous sulfate, ferrous gluconate, ferric citrate, and ferric sulfate [24]. Therefore, as trace element supplementation becomes increasingly normalized, additional attention must be paid to the side effects of excessive iron supplementation. The toxicity of iron overload on brain functions has been widely studied in iron injection models, in which the intranigral infusion of ferric citrate or other iron carriers resulted in increased sensitivity to 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), enhanced oxidative stress in nigral neurons, and accelerated dopamine (DA) depletion [25,26]. However, the toxicity of excessive oral iron supplementation to brain functions has rarely been explored. A study performed by Sobotka et al. found increased brain iron concentration and some neurobehavioral dysfunctions in rats with dietary iron overload [27], while Schroder et al. reported memory deficits in adult rats orally administered excessive ferromyn, a common iron supplement [28]. Ferric citrate is another common oral iron supplement and is widely used as a food additive in flour, formula milk, crackers, etc. Ferric citrate is on the registered list of food ingredients from the Ministry of Health, Labour and Welfare of Japan, and the Code of Federal Regulations (CFR) of the US [29]. No evidence for chronic toxicity or tumorigenicity of ferric citrate was found in mice administered long-term and low-dose (0.06% and 0.12%) supplementation [30], and no changes in the brain weight of adult rats were observed under high-dose ferric citrate (up to 4%) oral supplementation for 13 weeks [31]. However, it was reported that the oral administration of high-dose ferric citrate quickly induced a significant increase in iron in the male rat brain [32]. Therefore, it is reasonable to suspect that oral supplementation with high-dose ferric citrate would be harmful to the structure or function of the brain, especially under long-term conditions in middle-aged or elderly subjects, who are more sensitive to iron overload and its resulting oxidative stress [33,34]. In this study, we aimed to address this issue and investigate the effects of the chronic oral administration of ferric citrate on brain histology and neurobehavioral functions in middle-aged mice to provide new perspectives for iron supplementation. Chronic oral administration of ferric citrate induces selective iron overload in the brain To evaluate the effects of the chronic supplementation of ferric citrate on the brain functions of middle-aged subjects, 9-month-old C57BL/6 mice were intragastrically administered ferric citrate (2.5 mg or 10 mg) daily for 16 weeks. Weekly body weight and food intake, as well as brain weight, were measured, and no significant differences among the different groups were observed during the experimental period (Figure 1A-1C). The accumulation of iron in the body was analyzed after the mice were killed. The absorption of ferric citrate led to a robust increase in the serum iron level in the ferric citrate groups (Figure 1D), and the accumulation of iron was also observed in the heart, liver, spleen and kidney, especially in the 5% ferric citrate group (Figure 1E). In the brain, the iron level was quantified by flame atomic absorption analysis.
We found that the accumulation of iron was dramatically increased in the substantia nigra (SN), caudate putamen (CPu), olfactory bulb (OB) and thalamus (THA) after ferric citrate administration, and the hippocampal (Hip) iron level moderately increased in the high-dose ferric citrate group, but no such changes were detected in the cortex (Ctx), cerebellum (CB) and hypothalamus (HYP) ( Figure 1F). The accumulation of iron in the SN and CPu, further confirmed by Prussian blue staining, indicated that there was a marked dose-dependent increase in the positive signals in the ferric citrate groups ( Figure 1G and 1H). Increased iron transport, as indicated by the upregulated expression of the major iron uptake transporter TFR1, may be responsible for the accumulation of iron in the brain after ferric citrate supplementation ( Figure 1I). Excessive iron is excreted by the protective exporter mechanism of the brain, and FPN1 functions as an iron efflux transporter in the brain [35]. A robust increase in FPN1 expression was detected in the 1.25% ferric citrate group, while a dramatic decrease was observed in the 5% ferric citrate group ( Figure 1J), suggesting the dose-and time-dependent destruction of the balance between iron uptake and export with ferric citrate supplementation. These data demonstrated that the chronic oral supplementation of ferric citrate, especially at a high dose, could lead to an accumulation of iron in the brain with selective regional differences. This finding is consistent with previous reports that the concentration of iron varies greatly among different regions of the brain, and more iron tends to accumulate in the regions associated with motor functions than nonmotor-related regions [36,37]. Motor and cognitive defects are associated with iron accumulation in ferric citrate-supplemented mice Increasing evidence has demonstrated that excessive iron accumulation in selective brain regions may induce oxidative stress-related damage and thereby cause neurobehavioral dysfunctions that are widely implicated in NDs [38,39]. Considering the potential accumulation of iron in the brain after ferric citrate supplementation, multiple behavioral tests were performed during the experiment. First, locomotor functions were assessed by an open field test. Representative maps of mouse activities showed that the oral administration of ferric citrate could reduce the mobility of mice (Figure 2A). Further statistical results found that the total travel distance and the speed, frequency, distance and time spent in the center zone were decreased in the ferric citrate groups in a time-and dose-dependent manner ( Figure 2B-2F). Second, the accelerated rotarod test and pole test were performed to measure the gross motor skill and motor coordination of these mice [40,41]. Quantification showed that compared with the mice in the other groups, the mice supplemented with 5% ferric citrate displayed a significant time-related decrease in fall latency ( Figure 2G), while the times required for the mice to turn around and descend to the floor in the pole test were remarkably increased ( Figure 2H and 2I). Then, in the last experimental week, the grip strength of these mice was measured with a traction test [42], and the results showed that the mice from the 5% ferric citrate supplementation group spent much less time on the rope than those from the other two groups ( Figure 2J). 
In addition, as mentioned above, the iron concentration in the hippocampus was also increased in the 5% ferric citrate-supplemented mice; thus, we also performed a Y-maze test to assess the cognitive function of these mice [43]. As shown in Figure 2K, the frequency with which the mice entered the novel arm of the Y-maze was lower in the 5% ferric citrate group than in the control group. These results showed that the chronic oral intake of ferric citrate impaired the motor and cognitive functions of middle-aged mice, and these behavioral defects are known to be indicative of experimental parkinsonism [44]. Therefore, we consider these middle-aged ferric citrate over-supplemented mice to be a potential PD animal model, which will be a powerful tool for research on PD mechanisms and drugs. Iron overload induced by ferric citrate supplementation causes neurotoxicity in SN and CPu Given that the brain iron accumulation resulting from the chronic oral uptake of ferric citrate caused motor and cognitive defects, we further explored the underlying histopathological damage. As shown by H&E staining, nerve cell swelling was present in the SN, while white matter edema and vasodilatation were observed in the CPu of the mice supplemented with 5% ferric citrate (Figure 3A and 3B), but no observable pathological changes were found in the 1.25% ferric citrate and control groups. Moreover, cell swelling or white matter edema was also found in the globus pallidus, thalamic and red nuclei (Supplementary Figure 1). These histopathological findings suggested the occurrence of neuroinflammation after ferric citrate supplementation, which was further evidenced by the expression of inflammatory factors: the expression levels of the proinflammatory factors TNF-α and IL-6 were increased, while the expression of the anti-inflammatory factor IL-4 was suppressed in the 5% ferric citrate group (Figure 3E). Nissl staining was performed to quantify the numbers of neurons in the SN and CPu and revealed a remarkable neuronal loss in the 5% ferric citrate group (Figure 3C, 3D and 3F). Specifically, the neurons lost in the SN were dopaminergic neurons, as indicated by tyrosine hydroxylase (TH) staining and qRT-PCR (Figure 3G-3I), and this loss further resulted in the depletion of dopamine (DA) and its metabolite dihydroxyphenylacetic acid (DOPAC) in the CPu (Figure 3J and 3K). In addition, the expression of the dopamine transporter (DAT) in the striatum was also reduced in the 5% ferric citrate group (Figure 3L). Moreover, our study further demonstrated that cellular apoptosis was responsible for the neuronal loss in the SN and CPu, as many more positive signals were observed in the 5% ferric citrate group by TUNEL and cleaved caspase-3 staining (Figure 3M to 3O). Dopaminergic neurons constitute a major source of dopamine, which is one of the most important neurotransmitters involved in the nigrostriatal pathway that controls voluntary motor movement [45]. Therefore, the neurotoxicity to SN dopaminergic neurons after the chronic oral uptake of ferric citrate may be the cause of the behavioral defects described above. Lewy bodies are an important pathological hallmark in PD patients; however, although we detected increased expression of alpha-synuclein (asyn) in the CPu of the 5% ferric citrate group (Figure 3L), we did not observe any Lewy bodies in either the SN or the CPu of our model (data not shown).
Oxidative stress-induced neuronal loss is implicated in the neurotoxicity of ferric citrate supplementation As a transition metal, iron is capable of generating hydroxyl radicals via the Fenton reaction. Consequently, elevated iron deposition induces oxidative stress and triggers the accumulation of oxidative damage and neuronal death, which is widely implicated in NDs [8,46]. To explore whether oxidative stress was induced by chronic ferric citrate supplementation, oxidative damage was analyzed in the SN and CPu of mice. Lipid peroxidation was evaluated by 4-hydroxynonenal (4-HNE) staining, and a widespread increase in 4-HNE positive signals was observed in the SN and CPu of mice, especially in the 5% ferric citrate group ( Figure 4A and 4B). This increased 4-HNE level was accompanied by an increase in malondialdehyde (MDA) ( Figure 4C), another product generated from lipid peroxidation [47]. Oxidative damage to proteins and DNAs was quantified by a protein carbonylation assay kit and an 8hydroxydeoxyguanosine (8-OHdG) assay kit, respectively [48]. The data showed that markedly higher levels of protein carbonylation (PC) and 8-OHdG were present in the SN and CPu of mice in the 5% ferric citrate supplementation group than in the control group ( Figure 4D and 3E). Iron accumulation was reported to result in the depletion of reduced glutathione (GSH), resulting in decreased oxidative defense [49]. Consistent with this observation, we detected a significant decrease in GSH in the SN and CPu of mice from the 5% ferric citrate group ( Figure 4F). In addition, the expression levels of multiple critical antioxidant defense genes, such as superoxide dismutase 1 (SOD1), catalase (CAT) and glutathione peroxidase (GPX), were downregulated in the SN and CPu of mice supplemented with 5% ferric citrate ( Figure 4G), and the activities of SOD in these tissues were also reduced ( Figure 4H). Accumulating oxidative damage triggered cellular apoptotic processes, as shown in Figure 3L and 3M. This finding suggested that the oxidative stress generated in the ferric citratesupplemented mice was involved in dopaminergic neuronal loss and neurobehavioral defects. DISCUSSION The potential neurodegenerative effects of iron overload in specific brain regions have been explored before. Correlations among iron accumulation, DA/DOPA concentrations, and progressive nigral atrophy have been found in models intranigral infused with different iron reagents, such as ferric chloride, ferric citrate and ferric ammonium citrate [25,26,50]. Iron overload models induced by oral supplementation were also preliminary studied by some groups. Sobotka et al. fed adult weanling rats diets composed of different doses of iron for 12 weeks, and reduced total activity, impaired avoidance learning and prepulse inhibition were detected in the high-dose group (20 000 ppm). However, the iron concentrations in different brain regions and the pathological injuries responsible for behavioral defects were not evaluated in that study [27]. Iron overloaded diets were administered to adult rats for 7 days by Yu et al., who found increased iron/MDA and decreased GSH in the brain [51]. The far-reaching effects of postnatal iron over-supplementation on learning behavior were also evaluated in rat pups. Short-term (3 days) administration of excessive ferromyn to 10-day-old rats resulted in significantly increased iron concentrations in the SN and memory defects at adult ages [28]. 
In this study, we systematically evaluated, for the first time, the effects of long-term oral iron overload on neurobehavior and its underlying mechanism in middle-aged subjects. Selective iron deposition was observed in different brain regions, which differs from a previous study with short-term administration [51]. The iron accumulation induced oxidative stress in the SN/CPu and further induced neuronal apoptosis, which led to dopaminergic neuronal loss and defects in the motor and cognitive functions of the mice in our study. These data reveal the neurotoxicity of the chronic oral uptake of ferric citrate to the brain of middle-aged mice in a region-selective and time-dependent manner. In addition to its use as an iron supplement, ferric citrate is also used as a phosphate binder to treat hyperphosphatemia in patients with both dialysis- and nondialysis-dependent chronic kidney disease (CKD) [52]. The typical initial dose of ferric citrate hydrate is approximately 500 mg 3 times per day after meals; the dosage is then adjusted based on the concentration of serum phosphorus, and a maximum daily dose of 6,000 mg ferric citrate hydrate is suggested. Ferric citrate hydrate is composed of approximately 20% water by weight; thus, the maximum daily dose of ferric citrate is approximately 4,800 mg [29]. In our study, the daily doses of ferric citrate were approximately 83.3 mg/kg and 333.3 mg/kg in the 1.25% and 5% groups, respectively. These daily doses can be converted to equivalent doses for human adults (subjects with a 70 kg bodyweight) of approximately 646.5 mg and 2,585.9 mg, according to previously described methods [53,54]. These equivalent doses (646.5 mg and 2,585.9 mg) are less than the currently suggested maximum daily dose of ferric citrate (4,800 mg). As progressive neurobehavioral dysfunctions and accumulating brain pathological damage were present in the mice administered ferric citrate in our study, we think that more attention needs to be directed to the currently suggested doses of ferric citrate or ferric citrate hydrate, both for the treatment of hyperphosphatemia and as an iron supplement, especially in cases of long-term medication and in middle-aged or elderly patients. Considerable injury occurs before the onset of clinical symptoms in PD patients, making the identification of early events a challenge. Animal disease models, both toxic and genetic, are important for pathophysiological studies, new therapeutic target identification, and risk factor screening in PD. MPTP injection is the most widely used method to generate PD models in mice and nonhuman primates [55,56]. A profound loss of DA in the CPu/SN resulting from damage to the nigrostriatal DA pathway is present after MPTP injection [57]. However, an acute or subacute pathological process is displayed in MPTP models, so they are not suitable for the observation of developing pathological processes or the screening of early diagnostic indicators of PD. Such defects are also present in models induced by 6-hydroxydopamine (6-OHDA). Moreover, the establishment of this model is not very convenient because 6-OHDA cannot cross the blood-brain barrier, and it can only be directly injected into the SNc, medial forebrain bundle or striatum to induce parkinsonism [58]. Rotenone is another commonly used agent to develop PD models.
In contrast to MPTP and 6-OHDA, the induction of parkinsonism by rotenone can be chronic and continuous [59]. The modeling time can last up to approximately 2 months, and many features of PD can be reproduced in this model [60]. However, the mortality in this model can be high, and its reproducibility is poor [61]. In our study, we found that the chronic oral administration of ferric citrate could induce the phenotypes of parkinsonism in mice, including the selective degeneration of dopaminergic neurons, iron accumulation and oxidative stress in the CPu and SN, as well as defects in locomotor and cognitive functions, which suggests that this model could be a potential animal disease model for PD. This model has two major advantages over existing ones. First, the longer modeling time and the progressive behavioral and pathological development make this model more suitable for monitoring the early events and screening the early diagnostic indicators of PD. Second, the selective iron deposition in this model makes it valuable for the study of PD treatments. For example, the emergence of iron mismanagement has elicited interest in developing neurotherapeutic strategies based on chelation therapies, which have been tested in cell models, animal models and clinical studies. Desferrioxamine (DFO), a cell-impermeable iron chelator, has been reported to reduce DA neuronal degeneration in both the 6-OHDA-induced rat model and the MPTP-treated mouse model [62], while VK28, a strong brain-permeable iron chelator, also displays a neuroprotective effect on PD progression in the 6-OHDA-treated rat model [63]. Moreover, neurorestorative effects of iron chelation in PD have even been reported in some studies [64][65][66]. However, iron deposition is not a typical feature of either MPTP models or 6-OHDA models, and this is an advantage of our model. Thus, this model could be a more suitable choice for evaluating the effects of iron chelators on PD, and for studying whether the protection afforded by iron chelators depends on the chelation of iron or not. In addition to these advantages, the limitations of our model need to be considered. First, studies of the impact of peripheral iron overload on the progression of PD in this model are needed. Second, as region-specific iron levels may vary at different PD stages, the dosage of iron supplementation during disease progression may differ and could be adjusted. In conclusion, we report for the first time that long-term oral supplementation with high-dose ferric citrate in middle-aged mice caused selective iron accumulation in the SN/CPu, which further induced oxidative stress-mediated dopaminergic neuronal loss. Defects in locomotor and cognitive functions resulting from these histopathological injuries were observed in these mice. Our research provides a new perspective for ferric citrate in food additives and clinical applications and a new potential method for developing PD animal models. Animal care and maintenance All animal work was performed in accordance with the requirements of "The National Institutes of Health Guide for the Care and Use of Laboratory Animals" and was approved by the Animal Welfare and Animal Ethics Committee of Sichuan Agricultural University, China. Sixty C57BL/6 male mice (9 months old) were obtained from Beijing Weitong Lihua Experimental Animal Technology Co. Ltd. and maintained in individual cages in a specific pathogen-free environment with an automatically controlled 12-hour light/dark cycle and free access to food and water for 7 days.
Then, the mice were randomly divided into 3 groups, with 20 mice in each group: the control group (Ctr), the 1.25% ferric citrate group and the 5% ferric citrate group. The dosages of ferric citrate were based on Toyoda's study [31]. We used male mice to generate our model because males do not have the physiological cycle of females, and the hormones in males maintain a dynamic balance, which is important for experimental stability and repeatability. Before ferric citrate administration, 5 g of ferric citrate was dissolved in 100 ml of physiological saline heated to boiling to obtain a 5% ferric citrate solution. Then, a gradient dilution was performed to obtain a 1.25% ferric citrate solution. Under these conditions, the solubility of ferric citrate was good, and the solution was clear. In our study, 0.2 ml of ferric citrate solution was intragastrically administered to the mice each day, and an equal volume of physiological saline was intragastrically administered to the mice in the control group. Therefore, the daily chemical intake was 2.5 mg and 10 mg in the low- and high-dose ferric citrate groups, respectively (Table 1; in the table, Ctr denotes the control group that was intragastrically administered physiological saline, and the converted dose of ferric citrate was calculated according to the initial average bodyweight of the mice, ~30 g). The intragastric administration of ferric citrate was performed daily, beginning at 10:00 am, for 16 weeks, and the body weight and food intake were determined weekly. Behavioral tests were conducted to evaluate the effects of ferric citrate on locomotor and cognitive functions during the first week of the trial and each subsequent month. At the end of the trial, all mice from each group were randomly and evenly divided into two groups. Subjects from one group were killed by decapitation, and then the brain, heart, liver, spleen and kidney were obtained and frozen in liquid nitrogen for RNA extraction and biomedical assays. Subjects from the other group were anesthetized with 4% chloral hydrate and perfusion-fixed with 4% paraformaldehyde. The same organs were obtained, fixed in 4% paraformaldehyde and cryopreserved for subsequent histological analysis and immunostaining. Detection of iron The levels of iron in serum, heart, liver, spleen and kidney were determined by a Colorimetric Assay Kit (Nanjing Built Biology, Nanjing, China) according to the manufacturer's instructions. The iron levels in the brain were quantified by flame atomic absorption analysis. Briefly, the sample was digested in concentrated nitric acid at 180 °C for 2 h (Mars 6, Thermo Fisher Scientific Inc., Waltham, MA, USA). Then, the iron concentration was determined by flame atomic absorption spectrometry (Model PE-800, PerkinElmer, USA). Validation of the mineral analysis was conducted using green tea or bovine liver powder as a standard reference material (National Institute of Standards and Technology, Beijing, China). Perls staining First, paraffin slices were dewaxed, soaked in distilled water for 3 min, and then incubated in Perls solution containing 7% potassium ferrocyanide and 3% hydrochloric acid at a 1:1 ratio for 30 min, followed by three washes in PBS. Second, the slices were soaked in 1% H2O2 for 30 min and washed 3 times with distilled water. Finally, the slides were incubated in PBS containing 0.25 mg/mL DAB and 0.02% H2O2 for 10 min, counterstained for 5 min, dehydrated with gradient alcohol and mounted with xylene.
Open field test The open field test was performed according to a previous report [67]. The open field device was provided by Jiangsu Cyrus Biotechnology Co. Ltd. and consisted of a white square arena (50×50 cm 2 , 50 cm high) and a video capture system. The test was initiated by placing the mouse at the center of the arena and allowing it to explore the arena for 10 min. Then, locomotor activities were analyzed by using an ANY-maze animal behavior video analysis system (Global Biotech Inc., USA). Accelerated rotarod test An accelerated rotarod test was performed according to a previous study [68]. The accelerated rotarod device was provided by Jiangsu Cyrus Biotechnology Co. Ltd. The mice were placed on a uniformly rotating rod (rotation speed 5 r/min) with a 9 cm wide lane and a 3 cm diameter rotating rod. When the speeds of the mice were stable, they underwent a uniform acceleration process (maximum time of 5 min, speed increased every 8 s) three times. The average retention time on the revolving rod was determined. The day before the test, all the mice were pretrained on the rotarod three times (1 h interval). Pole test The pole test was performed as previously described [41]. Briefly, mice were placed vertically on a 50 cm tall pole with a 1 cm diameter, after which the mice had to make a 180° turn and return to the base of the pole. The day before the test, the mice were habituated to the pole 5 times. During the test, the time taken for the mouse to turn toward the ground (time to turn) and to reach the ground (time to climb) was recorded. Each mouse underwent five trials, and the average times were quantified. Traction test Both forelimbs of the mice were hung on a wire with a diameter of 1.5 mm, 30 cm above the ground, and a cap was placed 1 cm above the rod to prevent the mice from turning over. The time before landing was recorded, and each test interval was 1 min. A total of 5 tests were conducted and the results were averaged. Y-maze test A food-reward type of Y-maze test was performed as previously described [69]. Briefly, all mice were subjected to two Y-maze test trials separated by a 1-h intertrial interval to assess spatial recognition memory. The mice were fasted the day before the test. The first stage was the training period: the novel arm was blocked by a partition, and the mouse was placed in the maze and allowed to move freely from the starting arm for 10 min. The second stage was the detection period: food was placed in the novel arm, and the mouse was allowed to move freely in the maze for 5 min. Data are expressed as the percentage of novel arm entries made during the 5-min trial. H&E and Nissl staining Organs were fixed in 4% paraformaldehyde before paraffin sectioning. Then, hematoxylin and eosin (H&E) and Nissl staining were performed according to the instructions provided by the manufacturer (Beyotime, Shanghai, China). The number of neurons was quantified by Image Pro Plus (MEDIA CYBERNETICS, USA). Enzyme-linked immunosorbent assay (ELISA) and biochemical reaction assay The levels of dopamine (DA), 3,4-dihydroxyphenylacetic acid (DOPAC), 8-hydroxy-2-deoxyguanosine (8-OHdG) and protein carbonylation (PC) were measured by ELISA assay kits (Shanghai Enzyme Linked Organisms, Shanghai, China) according to the manufacturer's instructions.
The activity of superoxide dismutase (SOD) and the levels of glutathione (GSH) and malondialdehyde (MDA) were determined by a biochemical reaction assay kit (Nanjing Built Biology, Nanjing, China) according to the manufacturer's instructions. Quantitative real-time PCR Total RNA was extracted from the sample using RNAiso Plus (TaKaRa, Dalian, China). Total RNA was subjected to reverse transcription using the PrimeScript RT reagent kit with gDNA Eraser (Perfect Real Time) (TaKaRa). Quantitative real-time PCR was performed using the Bio-Rad ® CFX96 PCR System (Bio-Rad, CA, USA), and the relative gene expression was normalized to β-actin as the internal control. The primer sequences of the target genes are described in Table 2. Statistical analysis The regions of the mouse brain were located according to an anatomical map by Pingyu Wang [71]. A one-way ANOVA with LSD correction was used to compare different groups. Data are expressed as the mean ± standard deviation (X±SD) for bodyweight, food uptake, behavioral test and staining, while the quantifications for iron concentration, qRT-PCR, and oxidative damage are presented as the mean ± SEM (X ± SEM). Analyses were performed using SPSS 20.0 software (IBM Corp, USA) for Windows, with the level of significance set at 0.05. CONFLICTS OF INTEREST The authors declare that there is no conflicts of interest to disclose. FUNDING This work was supported by grants from National Key Technology Support Program (2014BAI03B01 to Z. C), and in part by the National Natural Science Foundation of China (31501200 and 31871179 to C.H).
2019-11-14T14:10:47.146Z
2019-11-07T00:00:00.000
{ "year": 2019, "sha1": "a06e76fc1a4522115b2a50c41cc4958ef1b04fdd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18632/aging.102433", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "22be628e800fd417c21617f93df9f89112378419", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
239676817
pes2o/s2orc
v3-fos-license
Reclamation Suitability Evaluation of Damaged Mined Land Based on the Limit Condition Method-Example of Pingdingshan Tianan Ten Coal Mine Reclamation suitability evaluation is the basis for determining the reuse direction of damaged land. Limit condition method is the most widely used method in land reclamation suitability evaluation at present. In this paper, the limit condition method is used to evaluate the suitability of land reclamation of damaged land in Pingdingshan Tianan ten coal mine. The evaluation index of damaged area is determined as six main factors: field slope/land flatness, ponding, depth of collapse, soil texture, thickness of soil layer, irrigation and drainage conditions, and finally determine the reclamation direction of the land to be reclaimed in combination with the overall local land use planning and the overall wishes of local residents. INTRODUCTION With the increasingly prominent conflict between human and land in China, cherishing and rational use of land has been elevated to a national strategic level. More than 95% of primary energy, 80% of industrial raw materials and 70% of agricultural production materials in China come from mineral resources [1] . Large-scale mining of mineral resources inevitably results in the occupation and destruction of large amounts of land, which seriously restricts the sustainable development of mining areas. Land reclamation work can effectively reduce the impact on ecological environment in the process of resource extraction. Therefore, restoring land productivity, increasing effective arable land area, and reclamation of damaged land is an important initiative to realize the sustainable development of ecological environment in mining areas. Land reclamation suitability evaluation is a prospective and predictive evaluation that determines the reasonable direction of land use to be reclaimed based on the investigation of the overall land quality and the statistics and prediction of the damaged land, and provides the basis and foundation for realizing the effective reclamation of land in mining areas [2][3] . Land reclamation suitability evaluation in mining areas lays the foundation for the planning and design of reclamation programs, the determination of reclamation directions, and the implementation of reclamation techniques, and provides a scientific basis for the rational use of land resources and the optimal allocation of land structure, and it is an important way to achieve sustainable development and ecological civilization in mining areas [4 ]. In this paper, based on the analysis of the site survey and proposed damage prediction of Pingdingshan Tianan ten coal mine, the evaluation units are divided and the most suitable land reclamation direction for each evaluation unit is obtained using the limit condition method, which provides a scientific basis for the reasonable reclamation of land resources in the mine area. GENERAL SITUATION OF STUDY AREA The mine area spans two counties, Weidong District, Pingdingshan City and Xiangcheng County, Xuchang City, and is located in the transition area between the hills and the plains, with the overall topography being high in the north and low in the south, and the topography being moderately undulating. Surface water bodies are not developed in the mine area, which has a warm temperate continental semi-humid monsoon climate. 
The total area of the mine site is 2061.58 hm 2 , and the total area of reclamation responsibility is 1943.84 hm 2 , including 453.16 hm 2 of arable land and 163.73 hm 2 of garden land, among other land-use types. (2) Division of evaluation units The division of evaluation units should reflect the relatively homogeneous or similar nature of the land within each unit and the comparability between units, so that it can objectively reflect the differences of the land over a certain period and space [5]. The units were divided based on the principles of "similar land properties in the same evaluation unit" and "differences between evaluation units" [6]. The land reclamation suitability evaluation units were delineated within the reclamation responsibility area of the Pingdingshan Tianan ten coal mine based on the type of land use, the degree of damage, and the time sequence of damage. Within the reclamation responsibility area, 23 evaluation units were delineated: the area was first divided into damage units, and these were then subdivided into evaluation units according to the degree of damage and the type of damaged land within each damage unit. The basic information of each evaluation unit is shown in Table 1. Among the damaged land types, the affected towns and villages and industrial and mining land will be repaired and reinforced in time (or rebuilt after individual compensation in cases of serious damage), while transportation land and land for water and water conservancy facilities will continue to be used through maintenance and will be reclaimed as the original land types, so no quantitative analysis was done for them. (3) Evaluation method The limit condition method was used to evaluate the suitability of the land reclamation. The limit condition method is based on the "barrel principle" of systems engineering, which means that the final quality of a classification unit depends on the quality of its most limiting factor [7]. According to the law of the minimum factor, the suitability of the land and its grade are determined by the single factor with the lowest suitability grade among the selected evaluation factors [8]. The calculation formula of the limit condition method is as follows: Yi = min{Yij, j = 1, 2, ..., n}, where Yi is the final score of the ith evaluation unit and Yij is the score of the jth evaluation factor of the ith evaluation unit. Selection of evaluation factors and ranking indicators The most important factors affecting land suitability for a particular land use or land-use pattern, selected as the items to be evaluated, are called evaluation factors [9]. In combination with the actual situation of the Pingdingshan Tianan ten coal mine, the suitability evaluation indexes were determined as follows: field slope/land flatness, ponding, depth of collapse, soil texture, thickness of the soil layer, and irrigation and drainage conditions. The main limiting factors were selected for the suitability evaluation grade criteria, and other limiting factors were used as reference factors to develop the grade criteria of the main limiting factors for land reclamation suitability evaluation in the project area, as shown in Table 2.
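Because the formula above reduces to taking the worst single-factor grade for each unit, it is straightforward to script. The sketch below is a minimal illustration only: the factor names follow the paper, but the grade scores (here 3 = most suitable, 1 = least suitable) and the example units are hypothetical.

```python
# Limit condition ("law of the minimum") scoring: an evaluation unit's final grade
# equals the grade of its single worst evaluation factor, Yi = min_j(Yij).

FACTORS = ["field_slope", "ponding", "collapse_depth",
           "soil_texture", "soil_thickness", "irrigation_drainage"]

def unit_suitability(scores: dict) -> int:
    """Return the unit's final score as the minimum single-factor score."""
    return min(scores[f] for f in FACTORS)

if __name__ == "__main__":
    # Two hypothetical evaluation units scored on the six factors named in the text
    unit_a = dict(field_slope=3, ponding=2, collapse_depth=3,
                  soil_texture=3, soil_thickness=3, irrigation_drainage=3)
    unit_b = dict(field_slope=3, ponding=1, collapse_depth=2,
                  soil_texture=3, soil_thickness=2, irrigation_drainage=3)
    print(unit_suitability(unit_a))   # 2: the unit is limited by ponding
    print(unit_suitability(unit_b))   # 1: severe ponding makes this unit least suitable
```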
2021-10-21T16:31:36.759Z
2021-09-06T00:00:00.000
{ "year": 2021, "sha1": "478297a250151dc960853287422914cbc6848276", "oa_license": "CCBY", "oa_url": "http://everant.org/index.php/etj/article/download/511/415", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "6df840b14d7bce7f744207e100145c037730185f", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
255315604
pes2o/s2orc
v3-fos-license
The Adaptation of Chinese Split-Site Business Students to British Classrooms: A Cross-Cultural Perspective Chinese business education differs from British business education in many respects. On the whole, it focuses on the acquisition of theoretical knowledge, whereas British business education places far more emphasis on soft management skills and team-work. This paper examines a split-site business degree program offered by a Chinese international school and a British business school, and explores the attitudes and expectations of the Chinese participants and their Chinese and British lecturers from an “English for specific purposes” perspective. The study conducted classroom observation, semi-structured interviews, and a questionaire survey, and identifies areas of difficulty for Chinese business students in the UK, in particular regarding their beliefs about teacher and student roles, their learning priorities and learning strategies, and their “goal-oriented” approach to discussion, which is at odds with the more collaborative and exploratory Western discussion strategies. The findings have implications for pre-sessional and in-sessional English course design, the management of split-site business degree programs, the teaching of Chinese students, and the enhancement of learning experiences generally in international business programs. In what follows, we will argue that there is a further complicating factor: the growing awareness, in China, of the need for Chinese learners to conform to Western expectations when they study abroad. In an attempt to prepare their students for degree programs in the West, lecturers in China may be encouraging their students to display "Western" cultural characteristics in a rather simplistic way, leading to misunderstanding and disappointment on all sides. The Internationalization of Business Education in China Since China entered the World Trade Organization in 2001, increasing numbers of Chinese university students have chosen to study business. Almost every Chinese university runs business and management undergraduate programs, generally offered within Schools of International Trade, Economics, or Business Administration (Shu & Jing, 2010;Le, 2010), but Chinese companies are particularly interested in employing graduates who have studied in both Chinese and Western higher education systems (Chen, 2010;Xu, 2010), so in recent years the Chinese Ministry of Education has founded nine semi-independent international schools at prestigious Chinese universities, intended to prepare students to pursue foreign degrees, especially in subjects relating to business and management. One of the study sites for our research is an international school of this sort, the other is a British university with which it operates a split-site degree program. After studying for two years at the international school in China, students study for a final year at the university in the UK before being awarded an undergraduate degree in International Business. Undergraduate Business Teaching and Learning in China Most Chinese business undergraduates start to study business at university without any basic business knowledge, since Chinese secondary schools offer courses in Marxist politics and economics rather than business-related disciplines (Yang, 2004). Once at university, business undergraduates are taught business theory, business law, accounting, finance and trade, assessed mainly by examination (Wan, Huang, & Dai, 2010). 
Any management training is oriented towards the field of management science, a quantitative natural science which treats management as a systematic procedure, requiring much the same skills as project management in engineering (Zhang & Zhuang, 2006). Soft skills in management are expected to be acquired at a later stage in a student's career, through professional practice, possibly enhanced by MBA study. This is in contrast to British and American business degrees, which include management as a major component. Although the Chinese educational expansion policy has enabled many more Chinese middle school graduates to go to colleges and universities, it has placed great pressure on the internal job market (Yang, 2008) so that most Chinese business programs now lead to relatively low-level jobs, rather than executive posts (Zhou, 2001). Rather than skills in leadership, strategic management and entrepreneurship, Chinese undergraduate business education therefore aims to develop shiying (适应), a response to one's environment which entails obedience, acceptance, and the efficient implementation of orders (Le, 2010). According to Zimmerman and Fey (2001), Chinese business culture does not see a great need for strategic planning and motivational skills, as only a few people are thought to have leadership potential. In a study by Branine (2005), Chinese mid-level managers from state-owned enterprises reported that their career development was not in management, as such, but rather in the performance of obedience. This concept of shiying is also evident in cases where Chinese business lecturers nominate group leaders-those they believe to have innate leadership skills-rather than letting students choose leaders for themselves (Song, Yang, & Ma, 2010). In such cases group members are expected to follow the leader, whatever he or she decides. The phenomenon of yanxue (厌学), a psychosomatic syndrome of weariness and depression which leads students to resist study, is often reported by Chinese lecturers and is supposed to be especially common among business students (Chen, 1998;Gu & Guo, 2005). Yanxue in business studies is sometimes blamed on students' indifference to business as a discipline (Yang, 2004;Yang, 2008); many business students are encouraged by their parents to choose this subject, and start their studies without knowing what the course entails (Zhu, 2009). This suggests a breach in the "psychological contract" formed between the student and the course provider; Bordia et al. (2015) argue that psychological contract fulfillment is one of the keys to high attendance and participation. It should be noted, however, that sufferers from yanxue are unlikely to fail their degree programs in China; once admitted to a Chinese university they are almost bound to graduate, regardless of the quality of their work. Chinese university lecturers tend to regard themselves as knowledge providers (Xie, 2010), and lecturing is the dominant teaching method. Lecturers evaluate business problems from an academic rather than a practical perspective, distancing themselves and their students from the business cases they are studying, and arriving at their own pre-planned conclusions (Zheng, 2007;Luo, 2010). Any relevant ideas contributed by students are likely to be appropriated by the lecturer and elaborated for the benefit of the class as a whole, while student-initiated ideas that the lecturer considers irrelevant are swiftly rejected. 
Interaction is thus tightly controlled by the lecturer and students do not participate in extended dialogue; Xie (2010) argues that this deprives them of the opportunity to practice sharing their thoughts with others. Chinese university business lecturers are well-qualified academically but do not usually have professional experience (Gu & Guo, 2005); they do not seek to simulate real business environments, and business cases are usually simplified to highlight the key aspects of a phenomenon or theory (Li, 2005;Luo, 2010). The intended learning outcome is an understanding of the meaning of the theory, and the cases themselves are often quickly forgotten by the students (Luo, 2010). Unlike in Western business education, the case method is rarely combined with other learning approaches such as group work, presentations or projects. One Chinese learning method is particularly worthy of note. Fanxing (反省) draws on the saying of Confucius that "When we see men of worth, we should think of equaling them; when we see men of a contrary character, we should turn inwards and examine ourselves (Jian xian si qi yan, jian buxian er neixing ye; 见 贤思齐焉, 见不贤而内省也)" (Confucius, Legge trans. 1893, 4.17). Rather than being teacher-led, this is actually a way for students to act autonomously, by comparing their own behavior with that of others, reflecting on the differences, and then adjusting their own behavior accordingly (Chen, 2010). Undergraduate Business Teaching and Learning in the UK Business education in the UK started in the 1940s and thus has had a longer history than in China. Compared to Chinese business education, which tends to treat management as a "hard" discipline, modern British business education tends to develop soft and applied management skills (Ivory et al., 2006;Perriton & Reynolds, 2004). Holman (2000) identifies four business education models that have been followed in the UK: "academic liberalism," "experiential vocationalism," "experiential liberalism," and the "experiential/critical school." The first two of these models aim at providing students with the generic principles, theories, and hard skills needed by organizations. The second two pay more attention to developing soft skills such as critical insight, personal autonomy, situated reflexivity and interpersonal skills, and are based on the belief that it is possible for business students to construct knowledge in simulated learning environments. Holman (2000) describes how the earlier models were supplanted by the more experiential models in the mid-1980s. This was in accordance with a general trend towards experiential pedagogies, but also reflected the fact that British organizational structures were becoming flatter, requiring the managerial participation of all members (Rugman & Collinson, 2009). In British business programs, students are expected to learn how to encourage participation, coordinate effort, provide feedback, manage conflict, and facilitate team decisions (Hess, 2007). In the modern British business world, managerial decisions are often made collectively (Schyns, Kiefer, Kerschreiter, & Tymon, 2011), and in British business schools, team leadership, where every member plays a complementary role, is given more importance than solo leadership. Belbin's (1993) Team Role Theory and Test is widely used to help students recognize their own personal qualities and adopt appropriate group roles (Bryant & Albring, 2006;Gammie, Gammie, & Cargill, 2002). 
British students are generally expected to work independently and take responsibility for their own study. Although Yang (2004), Yang (2008) and Lau and Roffey (2002) all complain that Chinese business courses leave students with a great deal of free time, British business students generally have far less contact with their lecturers than business students in China. British business lecturers often assign coursework well in advance of the submission date, and students then have the freedom to either follow the lecturers' instructional pace or learn on their own. Nearly one third of British lecturers in business enter academia later in life, having previously worked in industry (Ivory et al., 2006), so in contrast to many Chinese business lecturers they can refer to their personal experiences in industry as a means of introducing students to the profession. British business lecturers are also able to draw on the work experience of their students, as according to a survey conducted by Crawford and Cribb (2012) for the UK Department for Education, a "significant proportion" of young people in the UK (perhaps 20 %) take a gap year before starting college or university. Moreover many British students are employed part-time in commerce or industry whilst studying. The National Centre for Universities and Business strongly advocates work experience for university undergraduates, as a means of gaining transferable employability skills (Wilson, 2016). Chinese business students are unlikely to have ever taken up paid employment. In contrast to the Chinese teaching and learning system, where before the final year internship students mainly acquire business knowledge by attending lectures, reading textbooks and preparing for exams, Western business education is more self-directed, and involves students in a wider range of integrated activities. Moreover in UK degree courses, content and assessment schemes are often developed in close collaboration with relevant professional, statutory and regulatory bodies with a view to preparing students for their professional working lives. Some business learning activities are thus intended to simulate, at least to some extent, the real business world and working environment, and thus one of the main teaching techniques involves the examination of authentic business cases in order to make managerial decisions (Currie & Tempest, 2008;Foster & Carboni, 2008). Because case content is related to real business contexts, the issues that arise in case study discussions are complex, interwoven, and displayed without any single "correct" solution (Barnes, Christensen, & Hansen, 1994). Business lecturers may choose to limit how much case information they provide, requiring their students to gather further details by themselves, but typically the main focus of the task is not on the presentation of information (although this is important) but on the analysis, intended to lead to deeper understanding of the issues. The experiential, social constructivist approach taken in British business education centers around peer collaboration, in accordance with the belief that learning is primarily achieved through "processing, creating and shaping knowledge" in dialogic interaction (Basturkmen, 2016, p. 154), rather than by absorbing information provided by a lecturer or a textbook. This belief in the importance of group work and spoken communication is also shared by leading UK employers. 
According to the Global Graduates into Global Leaders report from the National Centre for Universities and Business (Diamond et al., 2011), top employers consider the ability to work collaboratively and communicate effectively (through both speaking and listening) to be the most important "global competencies" for graduates entering the workforce. "An ability to embrace multiple perspectives and challenge thinking" was also rated highly by these employers. Basturkmen (2002) used the "Initiation, Response, Follow-Up" (IRF) exchange structure model, initially proposed by Sinclair and Coulthard (1992), to examine how students worked together to construct meaning during MBA seminars at a British university, finding that complex interactional patterns drove information exchange and enabled ideas to emerge and be negotiated. Using the same IRF schema, Li and Nesi (2004) compared discussion patterns in an all-British and an all-Chinese group conversing in their own first languages. There was greater evidence of negotiation of meaning in the British group, who took more than three times as many turns as the Chinese group and gave far more feedback to other group members, within roughly the same length of recording time. Similar results were also found by Wang (2014), who analyzed two group discussions involving Chinese and non-Chinese students at a British business school. In this case, each new turn taken by the Chinese students tended to return to the initial topic, thus preventing topic development. Research Questions The study set out to address the following research questions: (1) How did the culture of learning in the international school in China compare with that in the business school in the UK? (2) What were the beliefs and expectations of students at the two learning sites, and how well did they adapt to the transition in their final year of study? By addressing these questions, the study aims to inform the development of greater cultural awareness on the part of international students and their lecturers, possibly leading to adjustments in the way international business is taught in both phases of the program, and to the pre-departure advice provided in China. Data Collection Methods The methods used for this study were inspired by various forms of needs analysis in the English for Specific Purposes research tradition, including "means analysis" (Holliday & Cooke, 1982), and the investigation of different parties' perceptions of need (see e.g., Ferris, 1998). The behavior and attitudes of the Chinese students were investigated via semi-structured interviews and a questionnaire, with findings triangulated with those gathered from the analysis of non-participant observations and recordings of peer interactions. Data was considered from both an etic and an emic perspective, as the first author is Chinese, and was a student herself at the time the data was collected. A number of complementary analytical approaches were employed on the grounds that no one approach would provide a full picture, but that "taken together they create a broader understanding of the challenges facing students and teachers" (Green & Dixon, 2002, p. 400). The Research Site: A "2+1" Business Program The "2+1" undergraduate business program discussed in this paper consists of a two-year Part I run by an international school in China, and a one-year Part II offered by a university in the UK. On successful completion of the program, students are awarded two bachelor's degrees in international business.
Data was gathered on site at the two institutions, in China and in the UK. The program was first launched in 2007, and 120 students were enrolled in the newest cohort under study. Part I includes intensive English language training, alongside classes in economics, statistics, management, marketing, financial investment, organizational behavior, human resource management, and the international business environment. Admittance to Part II of the program requires successful completion of Part I and an English language proficiency score equivalent to IELTS 6.5. In Part II, undertaken in the British business school, the "2+1" students join other students enrolled on the undergraduate program in international business. The assessment instruments are oral presentations, written exams, and written coursework produced individually and in groups. The "2+1" program was chosen because lecturers at the British institution reported that they had difficulty conducting seminars with the Chinese students, and that fewer upper second or first-class degrees were being awarded to them. Methodology and Data Collection We collected our data over a nine-month period, via classroom observation, semi-structured interviews, and a questionnaire survey. The procedures are summarized in Table 1. Part II observations and interviews (February to April) were carried out with one cohort of students, and subsequent data collection was carried out with the next cohort of students. From February to April we observed lectures and seminars involving Part II students in the UK. The seminars generally consisted of 30 minutes of lecturer instruction and 30 minutes of small group discussion. During our observations, we audio-recorded ten discussions, and collected copies of the lecturers' handouts and the students' assignments for reference. We also interviewed 13 "2+1" students regarding their study expectations, wants and needs, and five business lecturers regarding the "2+1" students. In May, one of the authors paid a three-week visit to the international school at the Chinese university. She observed and recorded seven classes (two business classes taught by Chinese lecturers and five English classes taught by both British and Chinese lecturers). Also, drawing on what we had already learnt from the analysis of earlier data, she interviewed four 2nd-year students, two subject lecturers (one British and one Chinese), and four English lecturers (all Chinese). The interviews with students focused on their approach to learning and what they wanted to achieve during their first two years of study. The interviews with lecturers focused on their choice of teaching methods, particularly the use of business case studies. Other lecturers and students also provided relevant information during casual conversations with the researcher. Copies of the lecturers' handouts and the students' class notes were collected for reference. Most of these interviews were conducted using synchronous instant messaging, for logistical reasons. Face-to-face interviews were audio-recorded and transcribed. The interviews with the Chinese participants were conducted in Chinese. In October we administered a questionnaire at the British business school with students who we had already observed in May when they were completing Part I. The students were asked why they had chosen the "2+1" program, and were questioned about their learning goals and strategies, and their perceptions of their lecturers' roles. 
All the data was uploaded to NVivo 8 and analyzed to identify salient themes. Timetabling and Assessment Practices Students received about 25 hours of face-to-face instruction per week, leaving little time for independent learning outside class. Just under half of course marks (40 %) were awarded for classroom behavior, and during the interviews some lecturers implied that students who simply attended every class would be awarded the full 40 %. The remaining marks were awarded for performance in examinations at the end of each semester. The business examinations mostly required the students to define terms and describe theories; they seemed to be designed to check whether the majority had acquired a basic understanding of the subject rather than to differentiate between students at different levels of ability. This is in accordance with the general practice in Chinese universities, where all students are expected to reach a required standard, but do not benefit from receiving higher marks unless they are competing for scholarship funding (international schools in China do not offer scholarships). Classroom Practice In interviews and informally, the British lecturers working in China reported that they encouraged discussions and oral presentations in their classes, but the Chinese lecturers said that they did not. In one discussion observed in one of the English classes taught by a British lecturer, the students were allowed to decide on the make-up of their groups and were all encouraged to contribute. Once formed, however, the groups chose to elect one student as their leader and to offer their opinions to this student, who reported them back to the whole class. Students who were interviewed said that in group work they tended to negotiate which parts of the question each member would undertake, and then worked separately, handing their finished work to the leader to synthesize. In the English classes taught by Chinese lecturers the students were more strictly controlled. The lecturers only asked questions to check understanding and to remind the students about exam-taking skills. The Chinese lecturers in the business classes used similar teaching methods to those used in most Chinese universities, as described by Xie (2010). They delivered lectures, emphasizing focal points by repeating them and writing them on the blackboard. The students' role was to highlight key words in their textbooks, and in some cases to check with the lecturer whether specific lecture content was important. The lecturers also set tasks which were carried out individually in class. They checked the work of individual students on a one-to-one basis, and provided the correct answers to the whole class at the end of the session. Groups of students were observed sitting together and exchanging notes and answers; this may be interpreted as fanxing behavior-a way of monitoring peer progress and achievements-and is also in accordance with the collectivist cultural practice of "paying… attention to others in everyday judgments and decision making" (Bordia et al., 2015, p. 223). Lecturers' Perceptions of Students' Needs The British lecturer we interviewed was aware that the "2+1" students would be expected to work independently once they got to the UK. The Chinese lecturers, however, thought that their students would need help from lecturers in Part II, and that building close ties with them to enlist their support would be an effective strategy. 
They considered their students to have particularly good interpersonal skills, and they admired the way they had actively sought the help of lecturers when planning their studies in Part I. The business lecturers thought that their students needed to acquire the fundamentals of business theory rather than analytical skills. In the two business classes observed in May, the lecturers dedicated most of the time to introducing theories, using examples with single "correct" solutions. By the end of each class the students had made detailed notes about theories and concepts. In interview, subject lecturers said that this theoretical knowledge would be a good basis for the final year of study in the UK, and a way of compensating for any English language problems. One lecturer pointed out that basic business theories were unlikely to differ from one country to another. Students' Attitudes and Learning Goals The students who were interviewed expressed great interest in business as an academic subject, and there was no sign that their parents had forced them to choose this program. However the program had a high drop-out rate in both Year One and Year Two, suggesting that some of those who enrolled suffered from yanxue, the learning burnout reported by Yang (2004) and Yang (2008). Students dropped out because they had lost interest in the program, not because they had failed a course. In May when the interviews were conducted, the students regarded IELTS preparation as their highest priority, and said that they only expected to obtain passing scores on their subject courses. When reflecting on Part I of the program in October they had a more balanced view of their learning targets, but still prioritized English language skills over subject knowledge (see Table 2). This reflects the fact that Chinese universities do not distinguish between high-achieving and low-achieving students, as long as they reach a minimum acceptable standard. The students thought it relatively easy to reach this standard in their business modules, but many struggled to reach the required level of English language proficiency. In the October questionnaire survey, students were asked to rate the importance of various learning activities undertaken during Part I of the program, using the scale: 1 = not at all important, to 5 = very important. The results are shown in Table 2. Improving their general spoken English was their highest priority, in accordance with the belief held by the Chinese lecturers that interpersonal skills would be very important for success in their final year. The kind of peer interaction expected of them in Part II of the program had not been practiced in Part I, however. Students' Learning Methods The questionnaire asked students how often on a scale from 1 to 5 they had used 10 pre-identified learning strategies. This strategy list had been developed with reference to earlier interviews with lecturers and students. Findings are listed in Table 3. Five of the strategies investigated in the questionnaire involved seeking help from published sources or from fellow students. In the interviews and informal conversations, several students had reported that, rather than using library resources (including e-resources) they preferred learning about business issues from blogs and online encyclopaedias written in Chinese. All the interviewees reported that when they prepared for speaking and writing tasks they often made use of text taken from Chinese websites, processing it through an online translation tool. 
These interview findings concur with the questionnaire responses, indicating heavy reliance on internet resources and less use of the library, peer support, or revision. The word "dictionary" in Chinese (zidian, 字典) refers specifically to the book form, so e-dictionaries were counted as "translation tools." Bordia et al. (2015) claim that individuals from strong uncertainty-avoidance cultures "feel the need to clarify tasks and goals, ideally before they start the task" (p. 222), and we found that "predicting exam questions" was reported as a popular strategy by questionnaire respondents. Likewise in class, students were observed checking with their lecturers whether a point was particularly important and worth learning. Miller and Partlett (1974) noted this type of "cue seeking" strategy being exhibited by British students, but it did not seem to be one that the British lecturers in our study readily responded to. Some of the Chinese students said that they were disappointed that the British lecturers did not help them prepare for exams by highlighting important content. Three group strategies were listed in the questionnaire: preparing work to give to a group leader, choosing to be the leader of a group, and practicing English with peers. The first of these strategies was by far the most popular, suggesting that students preferred to follow rather than to take responsibility for group output or the exchange of knowledge. Students' Perceptions of the Teacher's Role Classroom observation and interviews had revealed very close personal relationships between students and their Part I lecturers; the questionnaire probed student attitudes further with reference to three well-known Chinese proverbs (see Table 4). The students responded particularly positively to the idea that lecturers provided moral and personal guidance. Timetabling and Assessment Practices During Part II of the program, students received five hours of lectures and six hours of seminars each week, leaving a great deal of time for independent learning outside class. They were assessed continuously throughout their final year, and their coursework marks greatly affected their degree classification. Case studies were extensively used for teaching and assessment, and students were required to find and critically analyze authentic sources relating to these cases. Examination questions tended to be open-ended, prompting critical discussion of business phenomena, and were designed to give students the scope to demonstrate the full range of their knowledge and abilities. Classroom Methods The main educational objectives of Part II of the program were to improve students' linguistic proficiency pertinent to the business environment, and their ability to discover and solve problems related to contemporary transnational issues. Active oral participation was expected of every student, and they were also expected to monitor their own progress and aim for high marks. A mere pass mark would not usually constitute success in the eyes of the teaching staff, because they had experience of British employers who tend to refer to degree classifications when recruiting new graduates. The lecturers we observed made extensive use of questioning strategies in their lectures, to prompt the students to reflect on their understanding and their prior work experience. 
In the seminars, the students formed their own groups, although the lecturers split and reorganized groups if they contained too many Chinese students in proportion to those of other nationalities. During discussions, the lecturer acted as a facilitator, prompting student contributions rather than imparting knowledge. This was a very different teaching strategy from the Chinese method of providing model answers. Student Conduct Discussions across four groups containing equal numbers of Chinese and non-Chinese students were analyzed in terms of their IRF structure (Sinclair & Coulthard, 1992, used for the analysis of group discussions by Basturkmen, 2002, Li & Nesi, 2004, and Wang, 2014. It was found that the Chinese students took longer turns, but in each discussion only contributed about 60 turns out of an average of 300, considerably fewer than their non-Chinese peers. They seemed to consider it obligatory to respond to questions, but not to the ideas expressed by others, thus the majority of their turns were responses rather than initiations or follow-up comments. Similar behavior by Chinese discussants was reported by Li and Nesi (2004), and suggests that the Chinese participants were displaying knowledge that they had already constructed and internalized, and were not developing their ideas as they went along. In other words, rather than co-constructing knowledge and "speaking to learn" (Basturkmen, 2016, p. 154), they were delivering short speeches in the manner Barnes (2008) describes as "presentational talk." Many of the "2+1" students spoke little, and relied on a few more active Chinese peers, who they expected to integrate their contributions into the ongoing discussion as their group leaders had done in China. Some hid behind their friends and played with their smart phones, not speaking a word throughout. Absenteeism was also a problem in a small number of cases, perhaps indicative of yanxue and a sense of a breach of psychological contract (Bordia et al., 2015). Lecturer and Student Expectations The primary aim of most student interviewees was simply to pass the course, and the concept of shiying (obedience, acceptance, and the implementation of orders) was sometimes invoked, as in student interview below: I came here to pursue a degree. I did everything I have to do. I accept the rules of the game. Shiying is very important. (personal communication, CS12-Part II, February 2017) There was some evidence that the students regarded the lecturer as the principal evaluator of their work, not other students. This may explain why, in group discussions, they only tended to comment on other students' opinions in response to elicitations from more active group members. From the lecturers' perspective, one of the main problems with the "2+1" students was that they tended to avoid in-depth analysis. An oral presentation by a "2+1" group was criticized by the lecturer in the following way, for example: They offered a large amount of description rather than critically discussing the points in depth. In order to achieve a higher mark they must demonstrate that they are comparing critically rather than merely being descriptive. (personal communication, CS5-Part II, February 2017) Informants struggled with the concept of critical thinking, even towards the end of their final year. In the student interview below, the meaning of "critical" is still equated with negative criticism, for example: I think being critical probably means zhiyi (置疑) or pipan (批判) in Chinese. 
You do not say what the others said. You have to criticize, to analyze shortcomings, maybe. (personal communication, CS7-Part II, February 2017) Overall, the "2+1" students seemed to aim for solutions to problems, rather than appreciating the process of discussing the topic. Many of them perceived group discussions as an opportunity to practice oral English, and they liked to hold the floor and stick to their prepared opinions because they thought this strategy would give them more chance to speak. We would argue that their unwillingness to build on each other's contributions was also influenced by their expectations of British academic culture. It is a commonplace that Western "individualist" cultures enjoy peer-competition (see for example Bordia et al., 2015, p. 216), and the students had been taught in China that they should argue competitively when taking part in discussions in the UK. Consequently, they were rather surprised when the British participants talked around the topic in more collaborative, less confrontational ways, supposedly more typical of collectivist cultures. This is apparent from the following two interviews, which also express the concept of the group discussion as a means of gathering different ideas for a final, teacher-evaluated report: I really did not think that [name] had expressed her own ideas clearly…. most of the time she asked questions and synthesized opinions…. But I was told in China that when we speak English we should be brave and not worry about losing face. I hoped everyone would propose an opinion and give us appropriate reasons. And finally we could decide together on our final answer which would be the final group decision. (personal communication, CS9-Part II, February 2017) When I was expressing my own opinion, I expected others to ask me to explain or to allow me some time to offer reasons. I hoped they would really consider my opinion and write it down. I contributed my opinion in order to make our final answer more valid, to be correct. I mean all of our opinions will be integrated at the end of the discussion. They will be presented on the poster… I don't like to reformulate or build on other people's opinions. It shows that I'm not an independent person. I hope they could also give their answers. (personal communication, CS1-Part II, February 2017) Nevertheless, the British habit of sharing ideas impressed some students favorably. One argued that although it saved time to follow the Chinese method of dividing the task up and working individually, this method prevented the students from learning from each other. Another, quoted in the following interview, liked the idea of mutual group support: I had not heard about team spirit until I came here. I used to do group work in China but we did not need team spirit then because each member was responsible for one part. Actually we often do group work in the same way here especially when there are many Chinese in a group. British lecturers always emphasize team spirit when they assign group work. I agree with them very much.
(personal communication, CS8-Part II, Moreover, although our interviewees reported difficulty communicating with non-Chinese participants they all acknowledged the benefit of observing and emulating effective behavior, a strategy encapsulated by the Chinese concept of fanxing, and clearly expressed in this interview: I like to observe my British lecturers and peers in lectures and seminars to see how they behave in class, how they take notes, how they think, and how they speak. (personal communication, CS3-Part II, February 2017) Conclusions This study explores some of the differences in the approach to studying international business in China and in the UK. It compares business education systems in China and the UK as described in the prior literature, and examines data collected from the two program sites relating to the perceptions and expectations of students and staff. We conclude that the Chinese students were transferring learning strategies appropriate to a teacher-centered, exam-oriented system which focuses on whole-class acquisition of basic theoretical and factual knowledge, to a British context where far more emphasis is placed on team-work, analysis, and the fulfillment of individual potential. We also conclude that concepts of "Western" directness, face-threatening confrontation, and individualism had been over-simplified and overstated by the lecturers in China, so that the students were ill-prepared for the more collaborative approach that was actually expected of them in discussion classes in the UK. The two discussion styles observed in Part II of the program, and commented on by interviewees, are indicative of the differences between the two educational cultures. The Chinese students tended to favor a "goal-oriented," "presentational" style which allowed each speaker to make their own unique contribution to the debate, but required them to ignore or reject any views that had previously been expressed. These students were frustrated by the British and European students' more exploratory style, which they found meandering, purposeless, and lacking in strong independent claims. Clearly there is scope to help students in Part I of the program gain a better understanding of the demands that will be placed on them in Part II. They might be led to expect a different relationship with their lecturers and their peers, with less personal support and guidance from staff, more knowledge construction with fellow students, and more independent, library-based research. Individuals are certainly capable of crossing over from one discursive practice to another, as Grimshaw (2010) points out, and in their final year some of our Chinese informants were prepared to consider the benefits of peer collaboration. It might be useful to establish a peer mentoring or "buddy" system in Part II of the program, as this could build on the existing cultural practice of fanxing as a means of self-improvement. Peer mentoring systems are famously difficult to establish and maintain, but could benefit all the students, from both East and West.
2023-01-01T14:24:32.292Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "3db15a4f14d819e23d13ad9beb0e6085d7171fd8", "oa_license": "CCBY", "oa_url": "https://pure.coventry.ac.uk/ws/files/21676780/Binder4.pdf", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "3db15a4f14d819e23d13ad9beb0e6085d7171fd8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
257232711
pes2o/s2orc
v3-fos-license
Safe Peeling for L0-Regularized Least-Squares with supplementary material We introduce a new methodology dubbed "safe peeling" to accelerate the resolution of L0-regularized least-squares problems via a Branch-and-Bound (BnB) algorithm. Our procedure makes it possible to tighten the convex relaxation considered at each node of the BnB decision tree and therefore potentially allows for more aggressive pruning. Numerical simulations show that our proposed methodology leads to significant gains in terms of number of nodes explored and overall solving time. I. INTRODUCTION This paper focuses on the resolution of the so-called "ℓ0-regularized least-squares" problem given by p⋆ = min_{x ∈ R^n} P(x) with P(x) = (1/2)‖y − Ax‖₂² + λ‖x‖₀, (1-P) where y ∈ R^m and A ∈ R^(m×n) are input data, λ > 0 is a regularization parameter and ‖·‖₀ denotes the ℓ0-pseudonorm which counts the number of non-zero elements in its argument. Solving (1-P) is of paramount interest in many scientific fields such as statistics, machine learning or inverse problems [1][2][3]. Unfortunately, this problem also turns out to be NP-hard [4, Th. 1]. Hence, the last decades have seen a flurry of contributions proposing tractable procedures able to recover approximate solutions of (1-P). Canonical examples include greedy algorithms or methodologies based on convex relaxations, see [5, Ch. 3]. Although these procedures successfully recover the actual solutions of (1-P) in "easy" setups, they usually fall short for more challenging instances of the problem. This observation, combined with some recent advances in integer optimization and hardware performance, has revived the interest in methods solving (1-P) exactly. A standard approach is to use a Branch-and-Bound (BnB) algorithm that solves (1-P), see [6][7][8][9][10][11]. In this paper, we propose a new strategy, dubbed "safe peeling", to accelerate the exact resolution of (1-P). In a nutshell, our contribution is a computationally simple test applied at each node of the BnB decision tree to identify some intervals of R^n which cannot contain a solution of (1-P). This information allows us to construct tighter convex relaxations and more aggressive pruning of the nodes of the decision tree. (Gilles Monnoyer is funded by the Belgian FNRS. The research presented in this paper is reproducible; the code associated with our numerical experiments is available at https://github.com/TheoGuyard/BnbPeeling.jl.) Our numerical experiments show that the proposed method leads to a significant reduction of the solving time compared to state-of-the-art competing methods. The name "safe peeling" comes from the fact that the proposed method makes it possible to reduce (or, in more figurative terms, "to peel") the feasible set of the problem at each node of the decision tree while safely preserving the correctness of the BnB procedure. The rest of the paper is organized as follows. Sec. III describes the main ingredients of BnB methods. Our peeling strategy is presented in Sec. IV and its performance is illustrated in Sec. V. Proofs of the results presented in the following are postponed to the appendix. II. NOTATIONS We use the following notations. 0 and 1 denote the all-zero and all-one vectors. The i-th column of a matrix A is denoted a_i and the i-th entry of a vector x is denoted x_i. The superscript T refers to transposition. Any vectorial relation has to be understood component-wise, e.g., x ∈ [l, u] means x_i ∈ [l_i, u_i] for all i.
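The objective of (1-P) and the combinatorial nature of the problem can be made concrete with the following minimal Python sketch. It uses the (1/2) scaling shown in (1-P); the function names are ours, and this is only an illustrative reading of the problem statement, not the authors' implementation (their code is the Julia package linked in the introduction).

```python
import numpy as np

def l0_ls_objective(x, y, A, lam):
    """Objective of (1-P): 0.5 * ||y - A x||_2^2 + lam * ||x||_0."""
    residual = y - A @ x
    return 0.5 * residual @ residual + lam * np.count_nonzero(x)

def brute_force_min(y, A, lam):
    """Exhaustive search over all supports; only feasible for tiny n.

    Illustrates why (1-P) is combinatorial: each of the 2^n candidate
    supports requires solving one least-squares problem on the
    corresponding columns of A.
    """
    m, n = A.shape
    best_val, best_x = np.inf, np.zeros(n)
    for mask in range(2 ** n):
        support = [i for i in range(n) if (mask >> i) & 1]
        x = np.zeros(n)
        if support:
            x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        val = l0_ls_objective(x, y, A, lam)
        if val < best_val:
            best_val, best_x = val, x
    return best_x, best_val
```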
Moreover, η(·) denotes the indicator function which equals 0 if the condition in its argument is fulfilled and +∞ otherwise, [x]_+ = max(x, 0) refers to the positive-part function and |·| denotes the cardinality of a set. Finally, ⟦1, n⟧ with n ∈ N* is a short-hand notation for the set {1, . . . , n}. III. PRINCIPLES OF BNB METHODS In this section, we recall the main principles of BnB procedures. Due to space limitations, we only review the elements of interest to introduce the proposed peeling method. We refer the reader to [12, Ch. 7] for an in-depth treatment of the subject. A. Pruning The crux of BnB methods consists in identifying and discarding some subsets of R^n which do not contain a minimizer of (1-P). To do so, one constructs a decision tree in which each node corresponds to a particular subset of R^n. In our context, a tree node is identified by two disjoint subsets of ⟦1, n⟧, say ν0 and ν1. The goal at node ν = (ν0, ν1) is to detect whether a solution of (1-P) can be attained within X^ν = {x ∈ R^n : x_ν0 = 0, x_ν1 ≠ 0}, where x_νk denotes the restriction of x to its elements in νk. In particular, let X⋆ be the non-empty set of minimizers of (1-P). Then, if some upper bound p̄ on the optimal value p⋆ is known and if we let p^ν = min_{x ∈ R^n} P^ν(x) with P^ν(x) = P(x) + η(x ∈ X^ν), we obtain the implication p^ν > p̄ ⟹ X^ν ∩ X⋆ = ∅. (4) In words, if the left-hand side of (4) is satisfied, X^ν does not contain any solution of (1-P) and can therefore be discarded from the search space of the optimization problem. This operation is usually referred to as "pruning". B. Bounding and relaxing Making a pruning decision at node ν requires the knowledge of p̄ and p^ν. On the one hand, finding p̄ is an easy task since the value of the objective function in (1-P) at any feasible point constitutes an upper bound on p⋆. On the other hand, evaluating p^ν is NP-hard. This issue can nevertheless be circumvented by finding a tractable lower bound r^ν on p^ν and relaxing (4) as r^ν > p̄ ⟹ X^ν ∩ X⋆ = ∅. (5) One ubiquitous approach in the literature [7, 9, 13] to find such a lower bound consists in: i) Adding an extra term "η(x ∈ [l, u])" to the cost function of (1-P), for some well-chosen bounds l ∈ R^n_− and u ∈ R^n_+ (this additional constraint usually takes the form "−M ≤ x_i ≤ M, ∀i" with M > 0 and is known as the "Big-M" constraint, see [7, Sec. 3]). In particular, the new constraint "x ∈ [l, u]" must lead to a problem fully equivalent to (1-P), that is X⋆ ⊆ [l, u]. (6) ii) Exploiting the convex relaxation of the function ‖·‖₀ on the bounded set X^ν ∩ [l, u], given by ‖x‖₀ ≥ |ν1| + Σ_{i ∈ ν•} ([x_i]_+ / u_i + [−x_i]_+ / (−l_i)), (7) with ν• = ⟦1, n⟧ \ (ν0 ∪ ν1) and the convention "0/0 = 0". On the one hand, item i) implies that the pruning test (4) involves the following quantity (rather than p^ν): p^ν(l, u) = min_{x ∈ R^n} P^ν(x; l, u), (8-P^ν) where P^ν(x; l, u) = P^ν(x) + η(x ∈ [l, u]). On the other hand, a lower bound r^ν(l, u) on p^ν(l, u) can be obtained by using (7) and solving r^ν(l, u) = min_{x ∈ R^n} R^ν(x; l, u), (9-R^ν) where R^ν(·; l, u) denotes the cost function obtained from P^ν(·; l, u) by replacing the ℓ0 term with its relaxation (7). We note that (9-R^ν) is a convex problem and can be solved efficiently to good accuracy via numerous polynomial-time numerical procedures, see e.g., [14, Ch. 10]. In practice, the choice of l and u must respect two conflicting imperatives. First, the new constraint "x ∈ [l, u]" should not modify the solution of our target problem (1-P) and condition (6) must therefore be verified. Since X⋆ is obviously not accessible beforehand, this suggests that the entries of l and u should be chosen "large enough" in absolute value. Second, the tightness of r^ν(l, u) with respect to p^ν(l, u) degrades with the spread of the set [l, u].
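To make the box relaxation (7) and the relaxed cost of (9-R^ν) concrete, here is a minimal sketch. It follows the reading of (7) given above; the function names and code organization are our own illustrative assumptions, not the authors' Julia implementation.

```python
import numpy as np

def l0_box_relaxation(x, l, u, nu0, nu1):
    """Convex surrogate of ||x||_0 on X^nu ∩ [l, u], following (7).

    Entries in nu1 (forced non-zero) count fully; entries in nu0 are fixed
    to zero; the remaining "free" entries (nu_bullet) are relaxed to
    [x_i]_+ / u_i + [-x_i]_+ / (-l_i), with the convention 0/0 = 0.
    """
    n = len(x)
    fixed = set(nu0) | set(nu1)
    total = float(len(nu1))
    for i in (j for j in range(n) if j not in fixed):
        pos = max(x[i], 0.0) / u[i] if u[i] > 0 else 0.0   # 0/0 read as 0
        neg = max(-x[i], 0.0) / (-l[i]) if l[i] < 0 else 0.0
        total += pos + neg
    return total

def relaxed_cost(x, y, A, lam, l, u, nu0, nu1):
    """R^nu(x; l, u): quadratic term plus the relaxed l0 penalty.

    Returns +inf outside the box [l, u] or when an entry in nu0 is
    non-zero, mirroring the indicator terms eta(.) in the text. The
    non-convex requirement x_i != 0 for i in nu1 is dropped; its effect
    is accounted for by the |nu1| term inside the relaxation.
    """
    if np.any(x < l) or np.any(x > u) or np.any(x[list(nu0)] != 0):
        return np.inf
    residual = y - A @ x
    return 0.5 * residual @ residual + lam * l0_box_relaxation(x, l, u, nu0, nu1)
```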
3 In particular, the right-hand side of (7) tends to |ν 1 | when l ≪ x and x ≪ u. Therefore, setting the entries of l and u with too large absolute values is likely to degrade the effectiveness of the relaxed pruning decision (5). In the next section, we propose a solution to address this problem by deriving a methodology which locally tightens the constraint x ∈ [l, u] at each node of the decision tree while preserving the correctness of the BnB procedure. IV. PEELING In this section, we introduce our proposed peeling procedure. As an initial assumption, we suppose that some interval [l, u] verifying condition (6) is known. This assumption will be relaxed later on in Sec. IV-C. Our goal is to find a new interval [l ′ , u ′ ] such that These requirements imply that the pruning decision (4) made at node ν remains unchanged when replacing constraint "x ∈ [l, u]" by "x ∈ [l ′ , u ′ ]" in (8-P ν ). More specifically, the following result holds: A proof of this result is available in App. A. A consequence of preserving the pruning decision is that taking the new constraint "x ∈ [l ′ , u ′ ]" into account at node ν does not alter the output of the BnB procedure. In particular, it still correctly identifies the solutions of (1-P). The second requirement (10b) implies that r ν (l ′ , u ′ ) can possibly be larger than r ν (l, u) since the lower bound in (7) is tightened by considering lower absolute values for l and u. Overall, any choice of [l ′ , u ′ ] verifying (10a)-(10b) thus keeps unchanged the output of the BnB procedure while allowing for potentially more aggressive pruning decisions. In the rest of this section, we describe a strategy to find some interval [l ′ , u ′ ] satisfying (10a)-(10b). Because of the symmetry of the problem at stake, we only focus on the construction of the upper bound u ′ . The identification of a lower bound l ′ can be done along the same lines. A. Target peeling strategy Given some index j ∈ ν • and α > 0, we consider the following perturbed versions of (8-P ν ): Problem (12) corresponds to (8-P ν ) where x j is additionally constrained to be strictly greater than α. The following lemma then trivially follows from the definition of p ν α (l, u): This result thus states that any α ∈ [0, u j [ verifying (13) enables to construct some u ′ automatically fulfilling (10a)-(10b). Unfortunately, evaluating (13) involves the same computational burden as solving (8-P ν ). This problem can nevertheless be circumvented by finding some proper lower bound on p ν α (l, u) as described in the next section. B. Tractable implementation In App. A, we leverage Fenchel-Rockafellar duality for problem (9-R ν ) to show that for any w ∈ R m , the following lower bound on p ν α (l, u) holds: where Using this result, condition (13) can be relaxed as Hence, choosing any α ∈ [0, u j [ verifying (16) for some w ∈ R m defines a new valid constraint via (14), in the sense of (10a)-(10b). Interestingly, the left-hand side of (16) depends linearly on α, thus allowing to precisely characterize the range of possible values satisfying the strict inequality (16). This leads us to the main result of this section. then u ′ defined as in (14) with α = 0 fulfills (10a)-(10b). Moreover, if a T j w < 0, then u ′ defined as in (14) with any Our next result shows that Prop. 1 can be applied to all indices j ∈ 1, n either sequentially or in parallel, while preserving the correctness of the BnB procedure: A proof is available in App. A. We note that in terms of complexity the parallel application of Prop. 
1 to all indices j ∈ 1, n requires the computation of the inner products {a T i w} n i=1 and one single evaluation of D ν (w; l, u). Interestingly, these inner products are already computed in most numerical procedures solving (9-R ν ) and are thus usually available at no additional cost, see e.g., [11,Sec. 4.3]. The overhead complexity of applying in parallel our proposed peeling strategy thus scales as O(n + m). C. Propagating peeling down the tree In this section, we emphasize that any interval [l ′ , u ′ ] verifying (10a)-(10b) at node ν can be used as a starting point to apply our peeling procedure at the child nodes of ν. More specifically, the following result holds: at node ν and let ν ′ be some child node of ν. Assume that the peeling procedure defined in Prop. 1 is applied at node A proof of this result is available in App. A. In other words, Lem. 4 states that any peeled interval [l ′ , u ′ ] computed at node ν can be used as a starting point to apply a new peeling step at any child node ν ′ . This allows to propagate the peeled interval [l ′ , u ′ ] down the decision tree to hopefully improve sequentially the tightness of the convex relaxation (9-R ν ). V. NUMERICAL RESULTS This section reports an empirical study demonstrating the effectiveness of the proposed peeling procedure to accelerate the resolution of (1-P) on a synthetic dataset. Additional simulation results can be found in App. C. We refer the reader to [8,17] for an in-depth study of the statistical properties of the optimizer obtained from this problem. A. Experimental setup We consider instances of problem (1-P) with dimensions (m, n) = (100, 150). For each trial, new realizations of A, y and λ are generated as follows. Each row of the dictionary A is drawn from a multivariate normal distribution with zero mean and covariance matrix K ∈ R n×n . The (i, j)th entry of K is defined as K ij = ρ |i−j| , ∀i, j ∈ 1, n , with ρ = 0.1. Each realization of y is generated in two steps. We first create a k-sparse vector x † ∈ R n with evenlydistributed non-zero components, where k = 5. The nonzero entries are defined as x † i = sign(r i ) + r i where r i is an independent realization of a zero-mean Gaussian with variance σ 2 . We then set y = Ax † + n for some zero-mean white Gaussian noise n. The variance of the noise is adjusted so that the SNR 10 log 10 ( Ax † 2 2 / n 2 2 ) is equal to 15dB. The parameter λ is calibrated for each instance of (1-P) using the cross-validation tools of the L0LEARN package [18] with the default parameters. More specifically, we call the cv.fit procedure that takes y and A as inputs and returns couples (x λi , c λi ) from a grid of values {λ i } i∈N selected datadependently by the package. The vector x λi is an approximate solution of (1-P) where the weight of the ℓ 0 -norm is set to λ i and c λi is an associated cross-validation score computed on 10 folds of m/10 randomly sampled rows in A and entries in y. We then set B. Competing procedures We consider the following numerical solvers addressing (1-P): i) CPLEX [6], a generic mixed-integer problem solver; ii) L0BNB [10], a standard BnB procedure using a "breadth-first search" exploration strategy, see [10, Sec. 3.3]; iii) SBNB, a standard BnB procedure using a "depth-first search" exploration strategy, see [15,Sec. 2.2]; iv) SBNB-N, corresponding to SBNB enhanced with additional "nodescreening" techniques, see [11]; v) SBNB-P, corresponding to SBNB enhanced with the peeling strategy presented in this paper. 
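Before detailing these implementations, the data-generating process of the experimental setup above can be sketched as follows. This is an illustrative reconstruction using NumPy with the stated parameters (m, n) = (100, 150), ρ = 0.1, k = 5 and a 15 dB SNR; it omits the λ calibration performed with L0LEARN, and the helper name is ours.

```python
import numpy as np

def generate_instance(m=100, n=150, k=5, rho=0.1, snr_db=15.0, sigma=1.0, seed=0):
    """One synthetic instance of (1-P), following the experimental setup."""
    rng = np.random.default_rng(seed)

    # Dictionary: rows drawn from N(0, K) with K_ij = rho**|i - j|.
    K = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    A = rng.multivariate_normal(mean=np.zeros(n), cov=K, size=m)

    # k-sparse ground truth with evenly spread support; entries sign(r) + r,
    # where r is zero-mean Gaussian with variance sigma**2.
    x_true = np.zeros(n)
    support = np.linspace(0, n - 1, k, dtype=int)
    r = rng.normal(scale=sigma, size=k)
    x_true[support] = np.sign(r) + r

    # Additive white Gaussian noise rescaled to the target SNR (in dB).
    signal = A @ x_true
    noise = rng.normal(size=m)
    noise *= np.linalg.norm(signal) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    y = signal + noise
    return A, y, x_true
```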
L0BNB, SBNB, SBNB-N and SBNB-P all use the same solving procedure for the relaxed problem (9-R^ν), namely a coordinate descent method [19]. We use the C++ implementation of CPLEX and the Python implementation of L0BNB. SBNB, SBNB-N and SBNB-P are implemented in Julia. For SBNB-P, peeling is applied at each iteration of the numerical procedure solving the relaxed problem (9-R^ν). We use the current iterate, say x^(k), to define w = y − Ax^(k). The initial bounds are set to l = −M1 and u = M1 for some value of M. This corresponds to the standard "Big-M" constraint commonly considered in the literature [7, 9, 13, 15]. As far as our random simulation setup is concerned, it can be shown that (1-P) admits a unique minimizer x⋆ with probability one and we thus choose M = γ‖x⋆‖∞ for some γ ≥ 1 in our simulations. This requires solving (1-P) once beforehand to identify x⋆. This operation is only done here for the sake of comparing the sensitivity of the solving methods to the choice of γ. In practice, we obtain x⋆ by solving a sequence of problems with an increasing value of M in the Big-M constraint. More specifically, letting x⋆_M denote the solution of (1-P) with the additional constraint "−M1 ≤ x ≤ M1", we are guaranteed that x⋆ = x⋆_M as soon as the strict inequality ‖x⋆_M‖∞ < M holds. We thus compute the solution x⋆_M for a sequence of M of the form {η^i M_0}_{i∈N} with η = 1.1 and for some M_0 > 0, and stop as soon as ‖x⋆_M‖∞ < M. Fig. 1 presents the performance of the considered solving procedures. All results are averaged over 50 problem instances. Experiments were run on one Intel Xeon E5-2660 v3 CPU clocked at 2.60 GHz with 16 GB of RAM. The left column in Fig. 1 represents the average solving time of each procedure as a function of γ (top) and σ (bottom); the right column illustrates the gain allowed by the proposed method in terms of solving time (solid) and number of nodes explored (dashed) as compared to its best competitor, that is SBNB-N. C. Computational gains We note that SBNB-P leads to the smallest running time in all the considered setups. Since the latter corresponds to SBNB where peeling has been added, the spacing between the red and green curves reflects the gain provided by peeling. As far as our simulation setup is concerned, we see that the proposed method enables an acceleration of almost one order of magnitude with respect to SBNB. It is noticeable that this acceleration occurs even when γ = 1, that is, when the Big-M constraint is perfectly tuned to the problem at hand. This is due to the fact that peeling can individually refine each component of the initial bounds l and u at each node of the BnB decision tree to fit the local geometry of the problem. We also notice that SBNB-P improves over SBNB-N, which can be seen as another acceleration of SBNB. In particular, SBNB-P always performs at least as well as SBNB-N, as emphasized by the gains in the right-hand side of Fig. 1. We note in particular the gain provided by peeling in terms of the number of nodes processed by the BnB procedure: as expected, peeling allows for more aggressive pruning and thus reduces the number of nodes to be explored. VI. CONCLUSION In this paper, we presented a tractable strategy, named "peeling", to tighten the box constraints used in a BnB procedure tailored to ℓ0-regularized least-squares problems. Unlike the standard approach, which imposes one global constraint on the problem, our strategy aims to locally refine the box constraints at each node of the decision tree.
This refinement enables to strengthen the convex relaxations used in the pruning decisions made by the BnB procedure and can lead to significant improvements in terms of solving time, as emphasized by our simulation results. A. Proof of Lemma 1 We first have u] from (10b) and the left-hand side corresponds to the minimum value of a problem more constrained than the right-hand side. The reverse implication in (11) is thus always verified. The direct implication results from the following observations. If p ν (l, u) ≤p, then we have since (10a) ensures that the minimizers of P ν (x; l, u) do not belong to [l, u] \ [l ′ , u ′ ] and are therefore also minimizers of P ν (x; l ′ , u ′ ). We thus obtain the direct implication by contraposition. (15) We first note that B. Proof of and the last equality follows from the fact that the objective function is lower-semi-continuous and its domain is closed and bounded. The lower bound in (15) then stems from the Fenchel-Rockafellar dual problem of (21d). More specifically, we first notice that (21d) can be rewritten as and Second, we have from standard results of convex optimization [14,Ch. 12] that the Fenchel-Rockafellar dual problem of (23) reads where f ⋆ and g ⋆ denote the convex conjugates of f and g (see [14,Def. 4.1]). In particular, we have ∀x ∈ R n , w ∈ R m : as a consequence of weak duality applied to the pair (23)-(25). Our lower bound in (15) corresponds to a particularization of the right-hand side of (26) to (24a)-(24b). In particular, applying the definition of the convex conjugate to the function f , one easily obtains: Invoking Lem. 5 in Sec. B with (b, l, u) = (λ, α, u j ), we obtain Applying again Lem. 5 with where we have used the fact that for all i ∈ ν 1 , Similarly, using Lem. 6 in Sec. B with (a, l, u) = (λ, l i , u i ) we obtain: Finally, combining (27)-(30) leads to the desired result. APPENDIX B TECHNICAL LEMMAS In this appendix, we derive the expressions of two conjugate functions appearing in the derivation of (15). The results are encapsulated in Lem. 5 and 6 below. for b ∈ R and l ≤ u. Then, Proof. By definition of the convex conjugate (see [14,Def. 4.1]), one has ∀v ∈ R: This expression remains valid for l ≤ 0 ≤ u as long as we use the convention "0/0 = 0". Proof. Prior to proving the lemma, let us first show that the following result holds true: for all scalars β ∈ R, µ ≥ 0, we have max Note that the maximum in (35) is attained since the optimization problem amounts to maximizing a linear function over a (nonempty) compact set. Now, if µ = 0 we have max 0≤x≤µ βx − a µ x = 0 by using the convention "0/0 = 0". Moreover, if µ > 0: Overall, one can summarize the two cases as since a ≥ 0. We now turn to the proof of Lem. 6. By definition of the convex conjugate (see [14,Def. 4.1]), one has ∀v ∈ R: The above problem can be equivalently expressed as where On the one hand, g ⋆ 1 (v) can be rewritten as By applying (35) with (β, µ) = (v, u), we then obtain: On the other hand, we have Hence, using (35) with (β, µ) = (v, −l) leads to Finally, plugging (41)-(43) into (38) yields where the last equality follows from the fact that uv − a > 0 =⇒ lv − a ≤ 0. Indeed, if uv − a > 0 then necessarily u > 0 since a ≥ 0. We thus have v > a u ≥ 0. This implies that lv − a ≤ 0 since l ≤ 0 and a ≥ 0. APPENDIX C ADDITIONAL NUMERICAL RESULTS In Sec. V, we only vary the parameters that may directly influence the Big-M constraint. 
To give a broader overview of the performance allowed by our peeling methodology, we have selected three different types of instances of (1-P) with increasing hardness. We still fix m, n and SNR but we vary k and ρ to control the inherent complexity of the instances. On the one hand, larger k leads to problems with a more complex combinatorial nature. On the other hand, increasing ρ leads to more correlation between the columns of A, which brings the local minima of (1-P) closer to the global ones. Both of these effects make the problem harder to solve. Our three setups, referred to as EASY, MEDIUM and HARD below, are constructed with different combinations of these parameters. We note that the maximum solution time allowed for the solver is 30 hours, that is, just above 10^5 seconds. For particularly hard instances, some solvers hit this time limit. Since it only concerns a minority of the trials, we have decided not to remove them, but this may have slightly lowered their mean solution time. Fig. 2 shows the solution time of the different methods for the three different types of instances when varying γ and σ. We roughly observe the same behaviors as in Fig. 1. A notable remark is that the more difficult the instance, the larger the gains permitted by our peeling methodology. Our empirical analysis is that, on harder instances, the BnB tree has to be explored more extensively. Tightening the bounds at a given node will therefore allow a greater number of nodes to benefit from the strengthening of the relaxations. The solving time will therefore be further reduced. When varying γ, the gains of the method implementing our peeling strategy (SBNB-P) against its best competitor (SBNB-N) almost reach a factor of 2 on EASY instances. On MEDIUM instances, the gain reaches a factor of 4 and on HARD instances, a factor of 5.
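As a complement, the Big-M calibration procedure described in Sec. V (solving (1-P) under a growing box until the solution lies strictly inside it) can be sketched as follows. Here solve_l0ls is a hypothetical solver handle standing in for any exact solver of the box-constrained problem; it is not a function from the paper or from any specific package.

```python
import numpy as np

def calibrate_big_m(solve_l0ls, y, A, lam, m0=1.0, eta=1.1):
    """Big-M calibration by solving (1-P) for an increasing sequence of M.

    `solve_l0ls(y, A, lam, M)` is a placeholder for an exact solver of
    (1-P) under the box constraint -M*1 <= x <= M*1 (e.g., a BnB solver).
    The box-constrained solution equals the unconstrained one as soon as
    its infinity norm is strictly smaller than M.
    """
    M = m0
    while True:
        x_M = solve_l0ls(y, A, lam, M)
        if np.linalg.norm(x_M, ord=np.inf) < M:
            return x_M, M  # x_M is then the solution of (1-P) itself
        M *= eta  # enlarge the box by the factor eta = 1.1 and retry
```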
2023-03-01T06:42:45.779Z
2023-02-28T00:00:00.000
{ "year": 2023, "sha1": "7e2506c6175d530b5db69b46fd6f52e5e6b1aaae", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7e2506c6175d530b5db69b46fd6f52e5e6b1aaae", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
269748001
pes2o/s2orc
v3-fos-license
Exploring Tangible Explainable AI (TangXAI): A User Study of Two XAI Approaches Explainable AI (XAI) has garnered significant attention as a theoretical subject in the research community. However, the practical application of XAI, particularly in the realm of user interfaces, remains limited. Moreover, evaluations of these interfaces from the perspective of end-users are scarce. In this paper, we introduce and evaluate two innovative tangible XAI interface concepts. The tangible interfaces capitalize on the widely recognized advantages of data physicalization, offering users a more intuitive and hands-on experience. We implemented two distinct XAI approaches within this tangible framework: feature relevance and local explanations. These approaches were applied to real-world use cases: recommending recipes and selecting jogging routes, respectively. The findings of our Wizard of Oz study indicate that participants had some challenges in distinguishing between the primary objectives of the XAI interface and the typical interactions associated with an AI recommender system. However, tangibility seems to support users’ understanding of AI’s explanations and enables users to reflect on their trust in the AI model. INTRODUCTION As the penetration of AI into our daily lives grows, there is a clear need for AI to be human-centric, ensuring user trust and promoting transparency.Yang et al. highlighted the unique challenges of HCI design for AI systems [18] and concretely Amershi et al. introduced 18 guidelines for human-AI interaction.For instance, Guideline 10 suggests that when there is uncertainty, the AI system should offer three or four alternative suggestions.Additionally, Guideline 15 emphasizes the importance of allowing users to provide feedback during their interaction with the AI [2].Building on this, the field of Explainable AI (XAI) has emerged to specifically focus on explaining AI's decision-making processes to users [15,17].Within XAI, several approaches exist, e.g., highlighting to the user which of the input parameters have the most significant impact on the AI's decision (feature relevance) or highlighting how much a particular parameter would need to shift the AI's conclusion (local explanations) [4].Research on explainable AI (XAI) has primarily concentrated on graphical user interfaces, e.g., enabling users to explore "what-if" scenarios with AI models by using touchscreen sliders [1,5]. Conveying the explanations of XAI effectively to users may be challenging; one solution may be found through the use of data physicalization [3,13].Data physicalization translates digital data into tangible forms, e.g., a physical 3D printed model might represent climate change data, making abstract concepts more tangible and understandable.Tangible user interfaces (TUI) take this one step further, allowing users to interact with the data physically and leveraging the benefits of multiple human senses [12].The research topic of "Tangible XAI" has recently been opened [6][7][8][9] exploring the design space for users to physically interact with AI explanations, deepening understanding and possibilities for collaborative data exploration. Colley et al. 
[6] introduced a tangible XAI framework that includes four general approaches: simplified rule extraction, feature relevance, local explanations, and visual explanations. Expanding on their work, we conducted the first user study on Tangible XAI interfaces. Our study aimed to understand user interactions and perceptions of these tangible XAI interfaces. Specifically, we designed tangible interfaces based on two of the primary XAI approaches: feature relevance and local explanations [4]. These were then assessed through a Wizard of Oz user study [10]. STUDY DESIGN In this section, we first describe our general approach to tangible XAI interface design. We then detail the design of prototype XAI interfaces for two use cases: recipe recommendation and jogging route selection. To gather as broad a range of data as possible in this exploratory research phase, different XAI approaches are applied to each use case. Tangible XAI Interface Design We selected two of the main XAI approaches proposed by prior work, feature relevance and local explanations [4]. In feature relevance, the importance of each input parameter on the AI's decision is scored. Hence, the user can understand which parameters matter and which are largely irrelevant in the AI's decision process. One weakness of feature relevance is that it does not consider possible interaction effects between the parameters. Local explanations are an alternative XAI approach that focuses on the AI model's behavior near a given decision, i.e., they do not provide a full explanation of the AI model over its entire range of inputs. Hence, a 'counterfactual' local explanation aims to demonstrate the minimum change in input parameters needed to change the AI's decision. In an initial framework, Colley et al. speculated how the XAI approaches of feature relevance and local explanations could be presented through tangible user interfaces [6]. Following this direction, we created mock-up interfaces for two AI use cases, a cooking recipe recommendation and the selection of a jogging route. As the primary focus was on the user experience, a Wizard of Oz study approach was used [10], with the test moderator acting as the AI system, according to a set of predefined rules. Recipe Recommendation - XAI approach: Feature Relevance In this use case, a tangible bar chart was used as the interface to the XAI. Tangible bar charts have been a much-used TUI interface, e.g., [16]. They function as a data visualization and provide affordances as a tangible input mechanism, e.g., by pushing and pulling the bars to change their height. In our study implementation, the two-column bar chart was formed using Lego Duplo bricks, where red bricks represented the cost of the recipe (5€ per block) and yellow bricks the recipe's preparation time (5 minutes per block). Prior work has noted the benefits of using Lego as a prototyping tool [14], e.g., compared to sketching and cardboard models. Following our Wizard of Oz protocol, the test moderator configured the tangible Lego Duplo bar chart to present the input parameters that had caused the AI system's recipe recommendation (Figure 1). The study participants were then invited to interact with the Lego Duplo bars by adding and removing bricks. Based on the selected parameters, the test moderator changed the recipe recommendation according to a predefined mapping table that was not visible to the test participants.
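To illustrate how a recipe recommendation and its feature-relevance explanation can be mapped onto brick heights, here is a minimal sketch of a toy recipe scorer. The recipe data, scoring weights and function names are our own illustrative assumptions; they do not reproduce the hidden mapping table used by the moderator in the study.

```python
# Toy feature-relevance mapping for a recipe recommender.
# Each brick stands for 5 EUR (cost) or 5 minutes (preparation time).

RECIPES = [  # name, cost in EUR, preparation time in minutes (illustrative values)
    ("Vegetable stir fry", 10, 20),
    ("Mushroom risotto", 15, 45),
    ("Beef bourguignon", 30, 150),
]

def score(recipe, budget_eur, time_min, w_cost=1.0, w_time=1.0):
    """Lower is better: weighted overshoot of the user's budget and time."""
    _, cost, prep = recipe
    return w_cost * max(cost - budget_eur, 0) + w_time * max(prep - time_min, 0)

def recommend_with_relevance(budget_eur, time_min):
    """Return the best recipe and its cost/time expressed as brick counts."""
    best = min(RECIPES, key=lambda r: score(r, budget_eur, time_min))
    relevance = {"cost_bricks": best[1] // 5, "time_bricks": best[2] // 5}
    return best[0], relevance

def bricks_to_constraints(cost_bricks, time_bricks):
    """Reverse mapping: brick counts edited by the user back to parameters."""
    return cost_bricks * 5, time_bricks * 5

# A participant removing bricks maps to tighter budget/time constraints,
# which can then trigger a new recommendation.
print(recommend_with_relevance(budget_eur=20, time_min=50))
print(recommend_with_relevance(*bricks_to_constraints(cost_bricks=2, time_bricks=4)))
```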
Jogging Route Selection -XAI approach: Local Explanations In the jogging route selection case, participants were introduced to a scenario where an AI system, e.g., incorporated into their training app, had suggested a jogging route to them "Forest Path".A tangible XAI interface presented a local explanation around the decision point, highlighting two parameters: the time available to complete the jog and the preferred solitude of the route.All other parameters are fixed.In this condition, the AI system's decision boundary between the selected "Forest Path" and an alternative "City Loop" route was presented by a piece of string.Participants were invited to move a selection puck in the interface to understand how changes in the input parameters affected the AI's route recommendation -the puck crossing the decision boundary line equating to a change in the recommendation.The moderator then explained how changing another parameter, the weather conditions from sunshine to rain, would cause a change in the decision boundary.This was demonstrated by moving the decision boundary string on the tangible chart interface. User Study Process To evaluate the understanding and experience of using the tangible XAI interfaces, we arranged 5 user study sessions, each including 2 participants.Of the 10 participants, 6 identified themselves as women and 4 as men.To improve the validity of the test, large screens were used to visualize the background context of each use case.At the start of the session, the test moderator introduced the general concept of AI.After this, each of the cases was presented and explored in turn.The presentation order of the cases was counterbalanced, with 3 groups starting with the recipe scenario and 2 with the jogging route.The think-aloud process was used during the study, which was audio recorded for later analysis. In each case, the test moderator first introduced the use case, explaining that the AI system had made a recommendation and initiating the participants to question why the recommendation had been made.After this, the prototype XAI interface for each case was introduced, and participants were invited to interact with it.The test utilized the Wizard of Oz protocol [10], with the moderator performing the system's actions depending on the participants' interactions.After each case, participants were questioned on their level of trust in the AI system.At the end of the session, a final interview was conducted, probing participants' general perceptions towards the tangible XAI interfaces. For the recipe recommendation case, following the first evaluation session, it was noted that participants had difficulty separating the AI system's recommendation from the XAI interface.Hence, for subsequent tests, an additional visualization of an Amazon Alexa device was added as the output for the recommendation. FINDINGS Based on the participants' think-aloud feedback while exploring the interfaces and the end interviews, the themes of misunderstanding XAI, the benefits of tangibility, and trust in the AI model were identified. 
Misunderstanding XAI One fundamental observation was that some participants were initially confused about the general concept of XAI. In both use cases, participants felt they were primarily using an interface to select a recipe or jogging route rather than using the interface to understand and build trust in the AI model's operation. For example, one participant described using the XAI interface to select a recipe, "Because if you input just 'dinner', you might get 4000 recipes. After seeing that, you'd then refine your search, thinking 'I want to spend only 50 € and have just 45 minutes', and then you'd simply use the slider to adjust and find the most suitable results. But in this case, one really had to think a bit beforehand" (Participant 1). To address this, after the first test, an image of an Amazon Alexa voice assistant was added to the recipe recommendation case to highlight that the recommendation was not controlled by the XAI interface. Benefits of Tangibility The tangible recipe XAI interface received positive comments for its clear and concrete aspects. The Lego Duplo bricks received positive feedback for their usability, clarity, and aesthetic appeal. Participants commented that they understood that the height of the stack of bricks represented the input to the decision-making process, making the system's output easier to interpret. Some participants noted the added value in the tangible interface, "With a physical user interface, there's a lot of, well, added value for me at least. It makes things much more concrete, and I believe it has a lot of subconscious impact on it" (P7). Participant 1 highlighted the difference in interaction speed, and hence experience, compared to a touchscreen slider, "When using this [Lego Duplos], one needs to think more deliberately about the choices compared to a slider" (P1). However, one participant was uncomfortable assigning physical properties to intangible entities such as time, "It feels more appropriate to have a slider for money and time. Because [...] the money and our time blocks seem similar in size and appearance. But, are time and money truly equivalent units?" (P6). All participants understood the function of the jogging route interface. However, it failed to provide clear value for participants due to its abstract nature and lack of clear outcomes. One participant noted that the jogging interface conveyed a feeling of uncertainty in the AI system's decision, "Compared to the recipe, which was straightforward, the jogging outcome kind of felt like it was hovering. It gave off a vibe like not being entirely certain" (P4). Another participant commented on the poor experiential aspects of the jogging interface, "In the recipe suggestion tool, with these blocks, there's an element of gamification that makes the task feel more relaxed. It doesn't come off as too serious. In contrast, the jogging suggestion tool feels more scientific in its visualization" (P8).
Trust and Acceptance of the AI Model Many participants wished to know all the criteria that resulted in the AI system's recommendation, i.e., they expected the system to provide a fully transparent model. Further, they expressed the desire to be able to tweak the model's parameters to align with their preferences. This questioning approach suggests that participants' starting point is not to trust black-box AI models, even on fairly trivial topics such as the ones employed in the study. Rather than questioning the AI model itself, participants were generally disposed to question the accuracy of the data fed into the model, e.g., "Also, weather data, if exclusively sourced from stations, might lack localized information" (P9), and, referring to individuals listing items for sale, "I remain skeptical of the accuracy with which individuals label their listings..." (P5). In the recipe recommendation use case, the parameters presented in the XAI interface (preparation time and cost) were perceived as essential and matched the participants' default expectations. This match with expectations created trust in the AI recommendation. The situation was reversed in the jogging use case, where participants felt that the parameters presented by the XAI interface (time, degree of solitude, and weather conditions) were irrelevant to them when deciding on a jogging route. Due to this, most participants expressed mistrust in the decisions of the simulated AI system. This finding highlights that interacting with an XAI interface can also result in reduced trust in the AI system; this may in fact be the desired result, inspiring users to critically question the AI system's outputs and place the appropriate level of trust in the system. In both use cases, participants expressed the desire for the visibility of more parameters in the XAI interface, e.g., to include ones that they considered to be relevant. Others stated that their trust in the AI's recommendations would be increased through having visibility of more parameters, an understanding of the content of the recipe/jogging route databases, and information on how the reliability of the recommendation was ensured. This again suggests that the participants did not fully understand the intended purpose of the XAI and viewed the XAI interface as part of the recommender AI rather than a way to gain an understanding of the decisions made by the AI system. DISCUSSION AND CONCLUSIONS Our study's findings highlight the need for fundamental research on the placement and role of an XAI interface in the context of AI system usage. Additionally, they offer valuable methodological insights for future user studies on (tangible) XAI. Given that our research is among the initial explorations of the topic, it provides foundational information for subsequent researchers. Training Data and Trust. Interestingly, participants' discussion on trust focused on the accuracy of the AI model's input dataset (i.e., the training data) and the set of parameters exposed in the XAI interface. Taking an experimental approach to establishing trust in AI systems, some participants recounted comparing different route planning applications to build trust. However, the AI model's performance was not explicitly highlighted as a factor contributing to trust.
The primary goal of an XAI interface is to guide users to achieve the appropriate level of trust in an AI system. This means that after interacting with XAI, users might trust the AI system more or less than before. Trust may also be context-dependent, with positive trust for specific tasks and distrust for other tasks [11]. If a generally low level of trust is appropriate, then the target of the XAI interface is to convey this to users. In their work defining performance metrics for XAI systems, the authors of [11] highlight the development of curiosity in the user as a critical factor in XAI performance. Hence, it is beneficial if the XAI interface encourages users to critically evaluate the AI system's outputs, even beyond the specific details shown in the XAI itself. The Role of (Tangible) XAI. There was evident confusion among participants about the distinction between providing inputs to the AI model (i.e., essentially using the AI tool) and using the XAI system to comprehend and foster trust in the AI model. Of the two XAI approaches tested, feature relevance was at once the best understood and the most misconstrued by our test participants. Participants valued the simple data physicalization of time and cost as Lego blocks. However, the format of the interface led them to incorrectly perceive its role as being the user interface to a selection tool. The local explanations XAI interface was generally less well understood, and future work should consider alternative design approaches to this XAI method. We also note that the tangible nature of the presented XAI interfaces slowed the pace of interaction compared to, e.g., a touchscreen interface. As pointed out by study participants, this may result in users thinking more deeply about the interaction and forming a better understanding of the AI model. While tangible interfaces to XAI can enhance the user experience in scenarios where hands-on interaction and deep understanding are beneficial, as with tangible interfaces in general, they may not be suitable for applications requiring high interaction speed, scalability to cover multiple parameters, or portability. Methodological Findings. Employing use cases in XAI user evaluations that are deeply rooted in personal context, such as recipe suggestions or jogging route recommendations, poses challenges. Participants prioritize the AI's decision accuracy and often hold preconceived notions about the primary decision influencers. This focus detracts from the actual evaluation of the XAI interface. Therefore, we suggest that future XAI studies opt for more generic use cases or, for instance, employ personas to convey that the AI solution is targeted to a third party. We also suggest that in future XAI studies, the AI system and the full range of its possible outputs are first introduced to participants, enabling them to form a comprehensive understanding of its operation before any XAI interface is introduced. We suggest that this should also include incorrect suggestions. In this way, the full range of performance of the XAI interface can be explored; for example, enabling an understanding of why the AI made incorrect decisions is a critical function of XAI. A vital role of XAI is to guide users to question the outputs of AI systems and only place an appropriate level of trust in the system. Figure 2: Recipe Recommendation study. Left: The Wizard of Oz AI model used by the moderator that was hidden from the test participants. Right: An example state of the user exploring the model using Lego bricks.
2024-05-13T15:18:47.211Z
2023-12-02T00:00:00.000
{ "year": 2023, "sha1": "803fb686e175587d4ee9c10611e2a1b5cc446373", "oa_license": "CCBY", "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3638380.3638426", "oa_status": "HYBRID", "pdf_src": "ACM", "pdf_hash": "8a2b06b53e0e1da608a7c071f49366a0deb7d1e7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
236882203
pes2o/s2orc
v3-fos-license
Unmet need for family planning among reproductive-age women living with HIV in Ethiopia: A systematic review and meta-analysis Background Closing the gap of unmet for family planning is crucial to eliminate new pediatric HIV infections likewise to improve maternal and child health among reproductive-age women living with HIV. However, studies conducted on unmet need for family planning among reproductive-age women living with HIV showed inconsistent and non-conclusive findings on the magnitude of the problem. Moreover, there was no meta-analysis conducted in this area. So this systematic review and meta-analysis were conducted to estimate the pooled prevalence unmet need for family planning among reproductive-age women living with HIV in Ethiopia. Methods The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline was followed to review both published and unpublished studies in Ethiopia. All studies in PubMed, Cochrane Library, Hinari, Google Scholar, CINAHL, and Global Health databases were searched. Meta-analysis was performed using STATA 14 software. The heterogeneity and publication bias were assessed using the I2 statistics and Egger regression asymmetry test, respectively. Forest plots were used to present the pooled prevalence with a 95% confidence interval (CI). Results This review included 7 studies, and 3333 study participants. The pooled prevalence of unmet need for family planning among reproductive-age women living with HIV in Ethiopia was 25.13% (95%CI: 19.97, 30.29). The pooled prevalence of unmet need for spacing and limiting was 13.91% (95%CI: 10.11, 17.72) and 9.11% (95%CI: 6.43, 11.78), respectively. Conclusions One-fourths of reproductive-age women living with HIV had an unmet need for family planning. A variety of programmatic investments are needed to achieve more meaningful progress toward the reduction of unmet need for family planning among reproductive-age women living with HIV. Introduction Women of reproductive age are disproportionately affected by the HIV/AIDS pandemic. In Sub-Saharan Africa (SSA), the region highly affected by HIV, women and girls continue to be the foremost affected and accounted for 59% of all new HIV infections in the region in 2019 [1]. In Ethiopia, women cover more than 60% and 55% of adults living and newly infected with HIV/AIDS, respectively [2]. Even though new HIV infections among children showed a dramatic decline (52%) from 2010 to 2019, still far to reach the 2020 targets set by The Joint United Nations Programme on HIV/AIDS (UNAIDS) and its partners [3,4]. Family planning is one of the proven, cost-effective strategies for preventing vertical transmission of HIV. Studies have shown that even modest decreases in the number of pregnancies to HIV-positive women could prevent HIV-positive births at the same rates as the use of antiretroviral therapy (ART) for prevention of maternal to child transmission (PMTCT) [5][6][7]. In SSA, about 333,000 new infant infections could be averted annually, if all women in the region who did not wish to become pregnant could have access to contraceptive services [8,9]. Providing universal access to contraception can also reduce maternal, infant, and child deaths by 40%, 10%, and 21%, respectively [10][11][12]. Despite this importance, about 270 million reproductive-age women (15-49 years) have an unmet need for contraception worldwide [13]. 
SSA has the highest prevalence of unmet need for contraception, where one in five women have an unmet need for spacing or limiting pregnancies [14]. An analysis of Demographic and Health Survey (DHS) data from 12 African countries other than Ethiopia showed that 9-23% of HIV-positive women had an unmet need for family planning [15]. Reducing maternal death, ending HIV/AIDS epidemic, and ensuring universal access for family planning are key components of Sustainable Development Goals (SDGs), targets 3.1, 3.3, and 3.7 respectively [27]. Reducing the maternal mortality ratio (MMR) to 199 per 100,000 live births, HIV infection rate among infants less than 2%, and unmet need for family planning to 10% by 2020 are also some of the primary targets of the National Reproductive Health Strategy of Ethiopia [28]. Due to this, the family planning service has been given special attention by several governmental and non-governmental organizations. Determining the prevalence of unmet need for family planning among reproductive-age women with HIV is important in designing effective interventions to reduce the problem. There is no nationally representative primary data source that provides an estimate of unmet need for family planning among HIV-positive women in Ethiopia. The available studies which assessed unmet need for family planning among reproductive-age women living with HIV in Ethiopia also revealed inconsistent and non-conclusive findings. Unmet need for family planning among women living with HIV in Ethiopia varied from 15.5% in Nekemet [16] to 35.3% in the Amhara region [25]. Therefore, this review aimed to estimate the pooled prevalence of unmet need for family planning among reproductive-age women living with HIV in Ethiopia. Registration This systematic review has been registered in the International Prospective Registry of Systematic Review(PROSPERO) with a specific registration number CRD42020155896. Reporting Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guideline was strictly followed in this review [29] (S1 Table). Search strategy A systematic review and meta-analysis of published and unpublished studies were conducted to assess the pooled prevalence unmet need for family planning among reproductive-age women living with HIV in Ethiopia. Studies were searched through PubMed, Cochrane Library, Hinari, Google Scholar, CINAHL, and Global Health databases. Moreover, grey literatures were searched by tracing reference lists. The search was conducted from June 5-12, 2020. The search was made using the search term: "prevalence", "proportion", "magnitude", "incidence", "unmet need", "demand", "need", "family planning", "family planning utilization", "family planning use", "contraceptive use", "contraceptive utilization", "contraception", "factors", "determinants", "predictors", "factors associated", "associated factors", "risk factors", "women", "reproductive age women", "living with HIV/AIDS", "living with HIV", "HIV positive", "ART clinic, "ART care", "HIV/AIDS care", "Chronic HIV/AIDS care", "Ethiopia". All key terms were searched by a combination of Boolean operators "AND" or "OR" as appropriate and the search was done by two authors independently (BK and AA). Study selection and eligibility criteria All available studies conducted from January 1, 2000, to June 1, 2020, which fulfilled the eligibility criteria were included in this review (Table 1). Outcome measurements Unmet need for family planning. 
The women were considered as having unmet need for family planning if they had unmet need for limiting and/or unmet need for spacing [30]. Unmet need for limiting. Sexually active woman who was not using any method of contraception, and did not want to have more children, and/or whose last pregnancy was unwanted and/or did not know whether to have children or not was taken as having unmet need for limiting. Unmet need for spacing. Sexually active woman who was not using any method of contraception, and wanted to postpone their next birth for at least two years, and/or whose last pregnancy was mistimed and/or did not know when to have children was considered as having unmet need for spacing. All studies which used the above definition to measure the prevalence of unmet need for family planning among reproductive-age women living with HIV were included in this review. Study selection, quality appraisal, and data extraction Those articles searched from selected databases were exported to Endnote X8, and duplicate files were removed. The remaining articles and abstracts were independently screened by two groups (BA and MA) for inclusion in the full-text appraisal. The differences between reviewers were managed through discussion and disagreement was handled by the third party (MY). The quality of articles was assessed using the Joanna Briggs Institute (JBI) critical appraisal checklist [31]. Two reviewers independently assessed articles before inclusion for review. Three authors (BK, YD, and MY) independently extracted all the necessary data using a using Microsoft excel 2010 sheet. The data extraction tool contains information on the author's name, year of study, year of publication, study area, response rate, sample size, study quality score, and prevalence. Statistical methods and analysis The meta-analysis was conducted using STATA 14 software. Forest plot was used to show the magnitude of unmet need for family planning among reproductive-age women living with HIV in Ethiopia. Due to the substantial presence of heterogeneity among studies, the random effect model of analysis was used. The pooled prevalence of unmet need for family planning was presented with 95% CI. The heterogeneity test of included studies was assessed by using the I 2 statistics. It was declared using a p-value less than 0.05 for I 2 statistics [32]. Subgroup analysis was also conducted by different study characteristics such as sub-region of Ethiopia (North or other), study year (during Millennium Development Goal period(MDG) period or Post MDG), study quality score (low or high score). The publication bias was assessed using the Egger regression asymmetry test [33,34]. It was declared with a p-value of less than 0.05. Study selection This systematic review and meta-analysis included both published and unpublished studies conducted on unmet need for family planning among reproductive-age women living with HIV in Ethiopia. A total of 862 records were retrieved through electronic database searching. From these, 109 duplicated records were excluded, and the remaining 740 articles were excluded using their titles and abstracts. Thirteen full-text articles were assessed for eligibility. Characteristics of included studies Articles included in this review were both published and unpublished cross-sectional studies [16,[21][22][23][24][25]40]. The sample size of studies ranged from a minimum of 334, a study conducted in Addis Ababa [21] to a maximum of 658, a study conducted in Hawassa [23]. 
A total of 3333 study participants were included in this review. The studies were conducted from 2013 to 2018 in different regions of the country. A total of four administrative regional states (Tigray, Amhara, Oromia and Southern Nations, Nationalities and Peoples' Region) and one administrative city (Addis Ababa) were represented in this review ( Table 2). Prevalence of unmet need for family planning The pooled prevalence of unmet need for family planning among reproductive-age women living with HIV in Ethiopia was 25.13% (95%CI: 19.97, 30.29). The highest prevalence of unmet need for family planning was reported from a study done in Amhara Region. The study showed that 35.3% of reproductive-age women living with HIV had an unmet need for family planning [25]. The lowest prevalence of unmet need for family planning was 15.5% among reproductive-age women living with HIV in Nekemte [16]. Substantial heterogeneity was found among included studies in the meta-analysis, I 2 = 92.1%, and p < 0.001 (Fig 2). The funnel plot showed a symmetrical appearance. The Egger's regression asymmetry test also showed non-significant publication bias, p-value = 0.35. Sub-group analysis Sub-group analysis was conducted to deal with the source of heterogeneity. However, the heterogeneity still exists. Thus, the heterogeneity may be explained by other factors not included in this review. The prevalence of unmet need for family planning among studies done during and after 2015 was 25.96 (95% CI: 18.98, 32. 94). The prevalence of unmet need for family planning among studies done in the Northern part of Ethiopia was 29.12% (95%CI: 23.69, 34.56), which is higher than studies conducted in other parts of the country (Table 3). Discussion Unmet need for contraception is an important concept for designing family planning programs and has its own implications for maternal and child health, especially for reproductiveage women living with HIV. This systematic review and meta-analysis was conducted to estimate the prevalence of unmet need for family planning among reproductive-age women living with HIV in Ethiopia. This study found a higher prevalence of unmet need for family planning among HIV-positive women in Ethiopia compared to other countries in Africa. The pooled prevalence of unmet need for family planning among reproductive-age women living with HIV in Ethiopia was 25.13% (95%CI: 19.97, 30.29). The DHS data analysis from 12 African countries also showed that nine countries had a level of unmet need for family planning lower than the finding of this study i.e 9.3% in Togo to 19.5% in Malawi, and other three countries Kenya (20.2%), Côte d'Ivoire (20.4%), and Togo (23.2%) had a similar level of unmet need for family planning with this review [15]. Most DHS included in the analysis were conducted before 2013, but all studies included in this review were conducted during and after 2013. It is also too far to achieve the United Nations Population Fund (UNFPA) consultation to end unmet need for family planning by 2030 [41]. Poor access and quality of family planning services disintegrated HIV/AIDS treatment and care services, absence of strong monitoring and followup system, and lack of networking and coordination might contribute to the high level of unmet need for family planning among reproductive-age women living with HIV in Ethiopia. 
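The pooled estimate and heterogeneity statistics reported above follow the random-effects procedure outlined in the Methods. For readers who wish to reproduce the calculation, a minimal DerSimonian-Laird sketch is given below; the per-study case counts are hypothetical placeholders, not the data of the seven included studies.

```python
# Minimal sketch of a DerSimonian-Laird random-effects pooling of prevalences,
# as described in the Methods. The (cases, sample_size) pairs below are
# hypothetical and only stand in for the seven included studies.
from math import sqrt

studies = [(62, 400), (118, 334), (102, 658), (95, 380),
           (70, 450), (84, 511), (90, 600)]   # hypothetical (cases, n)

p = [x / n for x, n in studies]                             # per-study prevalence
v = [pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)]   # binomial variance
w = [1 / vi for vi in v]                                    # fixed-effect weights

p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))   # Cochran's Q
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                               # between-study variance
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0         # I-squared (%)

w_re = [1 / (vi + tau2) for vi in v]                        # random-effects weights
p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se = sqrt(1 / sum(w_re))

print(f"pooled prevalence {100*p_re:.2f}% "
      f"(95% CI {100*(p_re-1.96*se):.2f}, {100*(p_re+1.96*se):.2f}); I2 = {i2:.1f}%")
```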
Studies conducted in the Northern part of the country had a higher level of unmet need for family planning (29.1%) than studies conducted in the other part of Ethiopia (Addis Ababa, Oromia, and SNNPR) (19.7%). The possible reason for this might be studies in the northern part of Ethiopia involved women from rural area higher than studies conducted in other parts of the country. Rural women have low knowledge and access for family planning services. Thus, women who reside in rural areas have a higher unmet need for family planning than women who reside in urban areas [42]. Furthermore, most studies in the other part of the country were conducted at hospitals. However, the majority of studies in the Northern part of the country were conducted at both hospitals and health centers. Hospitals had well equipped with human resources and materials, better health care delivery systems, and integrated health services than health centers. Despite several efforts made by governmental and non-governmental organizations, the level of unmet need for family planning among HIV-positive women is still high. This review showed no reduction in the level of unmet need for family planning among HIV-positive women in Ethiopia in the years 2015 to 2018 (25.96%) [21,[23][24][25] compared with studies conducted before 2015 (24.06%) [16,22,40]. This calls for efforts to meet the need for family planning to achieve the SDG targets to end HIV/AIDS epidemics and to reduce maternal and child morbidity and mortality [27]. Family planning service is a promising strategy to reduce maternal and child morbidity and mortality by preventing high-risk and unwanted pregnancies. However, complications during pregnancy or childbirth are still one of the leading causes of death and disability among women of the reproductive age group in developing countries [43]. Therefore, emphasis should be given to reduce unmet need for family planning through the improvement of family planning access and choice, integration of family planning service with HIV treatment and care services, provision of HIV/AIDS patient-friendly health services. This review has certain strengths and limitations. The PRISMA guideline was strictly followed in the systematic review and meta-analysis. Only four administrative regional states and one administrative city were included in the review. Thus, this may affect the representativeness of the review. Moreover, there were limited studies that presented factors associated with unmet need for family planning. The factors considered in these studies also vary across studies. For this reason, this review was unable to identify factors affecting the unmet need for family planning among reproductive-age women living with HIV. Conclusions The prevalence of unmet need for family planning among reproductive-age women with HIV is high. One in four reproductive-age women living with HIV has an unmet need for family planning. The government and other concerned bodies should urgently stride to reduce unmet need for family planning through strengthening family planning programs and better integrate family planning services in HIV service delivery settings. Prong 2 of PMTCT (prevention of unintended pregnancy among HIV-positive women) must become more visible and given programmatic priority. It is also needed to transform services in the way that will help HIV-infected women and couples achieve their desired spacing, timing, and number of children. 
Moreover, further large-scale studies are also needed to investigate factors associated with unmet need for family planning among reproductive age living with HIV. Supporting information S1
2021-08-04T05:31:25.857Z
2021-08-02T00:00:00.000
{ "year": 2021, "sha1": "6c86b9903521ab5c0967d57036c938a632c05fdc", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0255566&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6c86b9903521ab5c0967d57036c938a632c05fdc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
44146636
pes2o/s2orc
v3-fos-license
Salmon cartilage proteoglycan promotes the healing process of Staphylococcus aureus-infected wound Wound healing is the critical event for maintaining skin function and barrier. Inflammatory state in which a variety of cells are activated and accumulated is important for wound healing. Bacterial infection in cutaneous wound is a common problem and causes delay of wound healing. Our previous study demonstrated that the salmon nasal cartilage proteoglycan (PG) has an immunomodulatory effect in various mouse models of inflammatory disease. In this study, we investigated the effect of PG on healing process of Staphylococcus aureus-infected wound. PG accelerated wound closure in the initial phase of both infected and non-infected wound healing. In addition, the bacterial number in wounds of the PG-treated mice was significantly lower than that in the vehicle group. Neutrophil and macrophage infiltration was intensively observed in the PG-treated mice on day 2 after S. aureus inoculation, whereas neutrophil and macrophage influx was highly detected on day 6 in the vehicle control. Moreover, the production of TGF-β and IL-6 in the wound tissue was significantly promoted compared to the vehicle control on day 1. In contrast, the production of IL-1β and TNF-α in PG-treated mice was significantly decreased compared to the vehicle control on day 5. These data suggested that PG modulates the inflammatory state in infected wounds leading to promote wound healing. Introduction Wound healing is an important process for retaining tissue homeostasis leading to restoration of barrier function. Inflammatory response is critically involved in the initial phase of wound healing. The inflammatory response requires initial recruitment of neutrophils and macrophages for defense against invasive microbes [1,2]. The inflammatory response is followed by migration and proliferation of dermal and epidermal cells. Extracellular matrix, then, fills the wound space and remodels the skin function [3,4]. In the wound healing process, several cytokines play an important role in wound healing. Tumor necrosis factor (TNF)-a and interleukin (IL)-6 produced by neutrophils and macrophages induce inflammation to removal of foreign matter [5]. Transforming growth factor (TGF)-b promotes reepithelialization [6]. Bacterial infection in cutaneous wound is a common problem and it is one of the key reasons of wound healing impairment [7,8]. Staphylococcus aureus is one of the most common bacteria causing skin and soft tissue infections, including impetigo, folliculitis and cellulitis [9,10]. In addition, S. aureus has high potential for tissue invasion and causes life-threatening infections such as endocarditis and sepsis [11]. Proinflammatory cytokines such as IL-1b and IL-6 responses are upregulated throughout the S. aureus infection [12]. Proteoglycan (PG) is a constituent of extracellular matrix and is widely distributed in connective tissues such as skin, bone, cartilage and vascular wall by forming a complex with collagen, fibronectin, laminin, hyaluronic acid and other glycoproteins. PG consists of core protein and one or more covalently attached glycosaminoglycan chain(s). We have previously demonstrated that PG extracted from salmon nasal cartilage has a potent effect on suppression of inflammatory responses induced by heat-killed Escherichia coli in mouse macrophages [13]. 
Daily oral administration of PG attenuates the severity of experimental inflammatory colitis [14], autoimmune encephalomyelitis [15] and collagen-induced arthritis [16]. In addition, PG has been shown to be involved in cellular proliferation and adhesion, and to be effective for wound healing in vitro [17,18,19]. However, the healing effect of PG on infected wounds is not clear. In this study, we investigated the effect of PG on the initial healing of S. aureus-infected skin wounds because the anti-inflammatory response is critical for wound healing. To clarify the effect of PG, we assessed histology, recruitment of phagocytes and cytokine responses in the skin wound of PG-treated mice. Mice Male BALB/c mice, 8-week-old, were purchased from Clea Japan, Tokyo, Japan, and maintained in a temperature-controlled room. Preparation of PG Salmon nasal cartilage PG was purchased from Kakuhiro Co., Ltd., Aomori, Japan. Lyophilized PG powder was dissolved in sterile distilled water (DW) at concentrations ranging from 0.4 to 10.0 mg/mL. DW was used as control. Mouse model of excisional wound and PG administration Skin wounds were made on the back of mice as previously described [20]. In brief, mice were anesthetized with an intraperitoneal administration of an anesthetic mixture [0.075 mg/mL medetomidine (Zenoaq, Tokyo, Japan), 0.4 mg/mL midazolam (Sandoz, Tokyo, Japan) and 0.5 mg/mL butorphanol (Meiji Seika Pharma Co., Ltd., Tokyo, Japan)] at 100 µL/10 g body weight. Hair on the back skin was removed using a mechanical shaver. After cleaning the skin with 70% ethanol, a 6-mm-diameter circular full-thickness wound was made using a skin biopsy punch (disposable biopsy punch, Igarashi Ika Kogyo, Tokyo, Japan). Ten µL of PG (0.4, 2.0 and 10.0 mg/mL) or DW as the vehicle control was applied on the wound region once a day for 14 days. Pictures of the wound were taken daily to monitor wound healing. The wound area in photographs was measured using image analysis software (Photoshop CS6; Adobe Systems, San Jose, CA). The data were expressed as a percentage of wound area relative to the initial wound size. Bacterial strain and culture condition S. aureus strain 834, a clinical isolate [21], was used for infection of mice in this study. The bacterial cells were grown in tryptic soy broth (BD Diagnosis Systems, Sparks, MD) at 37 °C for 15 h, harvested by centrifugation and washed with phosphate-buffered saline (PBS). The washed bacterial cells were diluted with PBS to 2.5 × 10⁹ colony-forming units (CFU)/mL, adjusted by spectrophotometric measurement at 550 nm. Inoculation of S. aureus into skin wounds Mice were inoculated with 20 µL of 2.5 × 10⁹ CFU/mL of S. aureus at the sites of the skin wound immediately after wounding. Ten µL of PG (10 mg/mL) or DW was applied on the wound region once a day for 6 days. Skin lesion tissues were collected for further analysis at the indicated time after S. aureus inoculation. Histological analysis Skin lesion tissues were excised and fixed in 4% (w/v) paraformaldehyde buffer at 4 °C overnight. Tissues were then embedded in paraffin and cut into 5-µm-thick sections. Deparaffinized sections were stained with hematoxylin and eosin (H&E). To observe the localization of bacterial cells, the sections were stained with crystal violet, Lugol's iodine solution and picric acid. The stained sections were observed under a BZ-X700 microscope (Keyence, Tokyo, Japan) and wound length was measured.
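As an illustration of the planimetric quantification and inoculum dose described above, a minimal sketch follows; the pixel areas are hypothetical, while the 20 µL of a 2.5 × 10⁹ CFU/mL suspension corresponds to the inoculation protocol.

```python
# Minimal sketch of the wound-size quantification: areas are measured in
# pixels from the daily photographs and expressed as a percentage of the
# initial wound size. The pixel values below are hypothetical.
initial_area_px = 18_500                      # day 0 wound area (pixels)
daily_area_px = {0: 18_500, 2: 14_200, 4: 9_800, 7: 4_100}

for day, area in daily_area_px.items():
    pct = 100.0 * area / initial_area_px      # % of initial wound area
    print(f"day {day}: {pct:.1f}% of initial wound area")

# Inoculum per wound: 20 uL of a 2.5 x 10^9 CFU/mL suspension.
inoculum_cfu = 2.5e9 * (20 / 1000)            # CFU/mL x mL = 5 x 10^7 CFU
print(f"inoculated dose ~{inoculum_cfu:.1e} CFU per wound")
```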
Quantitation of viable bacterial cells in wound tissues Wound tissues were collected to determine the number of viable bacterial cells in the infected wounds on day 2 after S. aureus inoculation. Individual tissues were homogenized in sterile Dulbecco's Modified Eagle medium (DMEM, Nissui Pharmaceutical Co., Tokyo, Japan). Each sample was plated in triplicate on tryptic soy agar plates. Plates were incubated for 16 h at 37 °C. The number of viable bacterial cells in the wound was determined by counting the colonies on the plates. The data are described as the log10 number of CFU/wound. Immunohistochemistry Immunofluorescent analysis was conducted to determine the localization of neutrophils and macrophages in the infected wound tissue. Wound tissues were excised and fixed as mentioned above. The tissues were then soaked in PBS containing 30% sucrose and frozen in optimal cutting temperature medium (Sakura Finetek Japan, Tokyo, Japan) at −80 °C. The frozen tissues were cut at 10-µm thickness and the sections were incubated with PBS containing 2% normal goat serum. The sections were stained with rat anti-mouse Ly6G IgG (diluted 1:500; Abcam, United Kingdom). Measurement of cytokines Wound tissues were homogenized in DMEM containing complete protease inhibitor cocktail (Roche Diagnostics, Mannheim, Germany) and then were centrifuged for 10 min at 1,000 × g. Supernatants were used to determine TGF-β, IL-6, IL-1β and TNF-α levels using commercial ELISA kits according to the manufacturer's recommendations. Statistical analysis Data were expressed as means ± standard deviations, and p < 0.05 from the unpaired Student's t-test in Figs. 1, 2, and 3 or Dunnett's test in Figs. 4 and 5 was used to determine the significance of the differences. PG administration accelerated initial healing of skin wound To investigate the effect of PG on wound healing, digital images of the wounds were captured at a fixed distance and angle for planimetric measurement. PG significantly accelerated wound closure compared with the vehicle control in the initial phase until day 7 after wounding (Fig. 4). The effect of PG on the initial wound closure was shown in a dose-dependent manner (Fig. 4). Although wound contraction in PG-treated mice occurred earlier than that in the vehicle control, the period of time for complete repair was comparable (Fig. 6). Mice were inoculated with 20 µL of 2.5 × 10⁹ CFU/mL of S. aureus at the sites of the skin wound immediately after wounding, and then treated with 10 µL of 10 mg/mL PG daily. On days 2, 4 and 6 after infection, the skin tissues were collected and frozen sections were prepared. Immunofluorescent staining was performed using anti-Ly6G antibody for neutrophils (A) and anti-F4/80 antibody for macrophages (C). Ly6G+ cells (B) and F4/80+ cells (D) were randomly counted from eight histological sections. The data are representative of 2 independent experiments (4 mice per group per experiment) (B, D). An asterisk (p < 0.05) and double asterisks (p < 0.01) indicate that the value is significantly different from the control group, respectively. PG administration induced accumulation of neutrophils and macrophages in the early healing process of skin wound To determine the accumulation of immune cells in the wound area, we carried out immunofluorescent staining using anti-Ly6G antibody or anti-F4/80 antibody for neutrophils and macrophages, respectively. Neutrophil infiltration was intensively observed in the PG-treated mice on day 2 after wounding (Fig. 7A).
In contrast, neutrophil influx was highly detected on day 6 in the vehicle control (Fig. 7A). There was no difference in macrophage infiltration between the PG-treated mice and the control animals (Fig. 7B). PG administration accelerated initial wound healing of S. aureus-infected skin To determine whether PG promotes the initial healing of infected wound, skin wound was infected with S. aureus and then treated with 10 mL of 10 mg/mL PG once a day. On days 2, 4 and 6 after infection, the skin tissues were collected and the length of infected wound was measured. The length of PG-treated wound was significantly shorter than that of the vehicle control on days 2 and 4 after infection (Fig. 1A, B). PG administration reduced bacterial number on day 2 after infection To determine the effect of PG on bacterial invasion into wound tissue, we observed the localization of S. aureus and enumerated the bacterial number in the wound tissues. Most of all bacterial cells localized on the surface of crust in the PG-treated mice and the vehicle control ( Fig. 2A). The bacterial number in PG-treated mice was significantly lower than that in the vehicle group (Fig. 2B). PG administration induced accumulation of neutrophils and macrophages in the early healing process of infected wound To determine accumulation of immune cells in wound area, we carried out immunofluorescent staining using anti-Ly6G antibody or anti-F4/80 antibody for neutrophils and macrophages. Neutrophil infiltration was intensively observed in the PG-treated mice on days 2 and 4 after S. aureus inoculation (Fig. 3A). Neutrophil number of the PG-treated mice was 160% increase compared with the control group (Fig. 3B). In contrast, neutrophil influx was highly detected on day 6 in the vehicle control (Fig. 3A). The number of neutrophils in PG-administered mice was 40% decrease compared with the vehicle control (Fig. 3B). The similar pattern of macrophage infiltration was also found. Macrophage infiltration was intensively observed on day 2 after S. aureus inoculation in the PG-treated mice (Fig. 3C). The number of macrophages in the PG-treated mice was 670% increase compared with the control group (Fig. 3D). On day 4, nearly the same macrophage infiltration was found in both PGtreated and control mice (Fig. 3C, D). On day 6, the number of macrophages in the vesicle control was significantly higher than that in the PG-treated mice (Fig. 3D). The macrophage influx in the PG-treated mice was 70% decrease compared with that in vehicle control (Fig. 3D). 3.6. Cytokine production was promoted in the early phase and suppressed in middle phase of wound healing by PG To determine whether PG affects cytokine production, we measured TGF-b, IL-6, IL-1b and TNF-a levels in the wound tissues. In the PG-treated mice, the production of TGF-b (35%) and IL-6 (56%) in the wound tissues was significantly promoted compared with the vehicle control on day 1 (Fig. 5A, B). However, the production of IL-1b (25%) and TNF-a (64%) in the PG-treated mice was significantly decreased compared with the vehicle control on day 5 (Fig. 5C, D). Discussion It has been reported that closing and moist environment promotes wound healing and reduces the pain [22]. In this study, PG administration accelerated initial healing of non-infected skin wound in a dose-dependent manner (Fig. 4). Similarly, PG administration accelerated initial healing of skin wound infected with S. aureus (Fig. 1). In addition, PG administration reduced bacterial number in the early phase of healing (Fig. 
2). PG shows no antibiotic activity against S. aureus (unpublished data). Therefore, the reduction of bacterial number is possibly mediated by host immune response. Neutrophil and macrophage influx is an early inflammatory response of wound healing. This event is essential for the clearance of bacteria and cellular debris in infected wound [23]. Bacterial clearance by neutrophils is a key event requiring for wound healing [24]. Alternatively, wound healing is delayed by continuous influx of neutrophils [25]. PG administration enhanced the accumulation of neutrophils and macrophages in the early healing process of infected wound, whereas fewer neutrophils were observed in the wound tissue in the middle phase (Fig. 3). The comparable results were observed in non-infected wound (Fig. 7). These results suggest that PG administration enhanced the accumulation of neutrophils and macrophages only in the early phase. PG administration significantly augmented IL-6 and TGF-b production in the wound tissue in the early phase of healing (Fig. 5A, B). IL-6 has the mitogenic and proliferative effects on keratinocytes and is a neutrophil chemoattractant [26,27]. TGF-b has been shown to play a critical role in inflammation, angiogenesis, re-epithelialization, and connective tissue regeneration [28]. In the middle phase of healing, PG administration reduced proinflammatory cytokine production such as TNF-a and IL-1b ( Fig. 5C, D). TNF-a and IL-1b are important proinflammatory cytokines that raise inflammation for bacterial clearance and healing of infected wound, while prolonged inflammation by these cytokines causes the development of chronic wounds [29]. Our results suggested that PG administration accelerates the inflammatory response in the initial step of healing in the infected wounds leading to promote infected wound healing. This finding implied that early resolution of wounds may contribute to faster healing. The mechanism of enhancement of inflammatory response by PG in the initial step of wound healing is still unclear. TNF-stimulated gene 6 has been shown to be an important receptor involving in glycosaminoglycan-induced cell-mediated inflammation [30]. Moreover, the binding of salmon nasal PG to CD44 on mouse fibroblasts is the primary mechanism for the effect on in vitro wound closure [19]. Therefore, these receptors may be involved in inflammatory response and wound healing promoted by PG administration. We have reported that PG has an antiinflammatory effect on mouse models of various inflammatory diseases. The antiinflammatory effect of PG depends on induction of Foxp3 þ regulatory T cells [14,15]. Therefore, regulatory T cells might be induced in infected wounds in the middle phase and contribute to suppression of inflammatory response. Regarding high molecular weight of PG, a component(s) of PG which is responsible for wound healing action remains to be clarified. Finally, this finding implied that PG may therefore be a useful substance in the drug delivery of wound healing active agent. In conclusion, our present results demonstrated that PG has a prophylactic effect by modulation of inflammatory responses in infected wounds. These results suggest the existence of novel interaction of extracellular matrix components in inflammatory responses during bacterial infections on wound healing. Author contribution statement Shouhei Hirose: Performed the experiments; Wrote the paper. Kouji Narita: Performed the experiments. Krisana Asano: Analyzed and interpreted the data. 
Akio Nakane: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data. Funding statement This work was partly supported by Regional Innovation Strategy Support Program, MEXT (AN).
2018-06-05T03:24:26.830Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "01a7bfb00fcc2f9b9d746668bae440d565e836fe", "oa_license": "CCBYNCND", "oa_url": "https://www.cell.com/heliyon/pdf/S2405-8440(17)33042-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01a7bfb00fcc2f9b9d746668bae440d565e836fe", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
13659473
pes2o/s2orc
v3-fos-license
Isolation of native plasma membrane H+‐ATPase (Pma1p) in both the active and basal activation states The yeast plasma membrane H+‐ATPase Pma1p is a P‐type ATPase that energizes the yeast plasma membrane. Pma1p exists in two activation states: an autoinhibited basal state and an activated state. Here we show that functional and stable Pma1p can be purified in native form and reconstituted in artificial liposomes without altering its activation state. Acetylated tubulin has previously been reported to maintain Pma1p in the basal state but, as this protein was absent from the purified preparations, it cannot be an essential component of the autoinhibitory mechanism. Purification of and reconstitution of native Pma1p in both activation states opens up for a direct comparison of the transport properties of these states, which allowed us to confirm that the basal state has a low coupling ratio between ATP hydrolysis and protons pumped, whereas the activated state has a high coupling ratio. The ability to prepare native Pma1p in both activation states will facilitate further structural and biochemical studies examining the mechanism by which plasma membrane H+‐ATPases are autoinhibited. The yeast plasma membrane H + -ATPase Pma1p is a P-type ATPase that energizes the yeast plasma membrane. Pma1p exists in two activation states: an autoinhibited basal state and an activated state. Here we show that functional and stable Pma1p can be purified in native form and reconstituted in artificial liposomes without altering its activation state. Acetylated tubulin has previously been reported to maintain Pma1p in the basal state but, as this protein was absent from the purified preparations, it cannot be an essential component of the autoinhibitory mechanism. Purification of and reconstitution of native Pma1p in both activation states opens up for a direct comparison of the transport properties of these states, which allowed us to confirm that the basal state has a low coupling ratio between ATP hydrolysis and protons pumped, whereas the activated state has a high coupling ratio. The ability to prepare native Pma1p in both activation states will facilitate further structural and biochemical studies examining the mechanism by which plasma membrane H + -ATPases are autoinhibited. The ̴ 100 kDa plasma membrane (PM) H + -ATPase, which generates an electrochemical gradient that drives the transport of other solutes across the PM, is a major protein in fungal and plant PMs [1][2][3]. The PM H + -ATPase belongs to the P-type ATPase family; all members of this family studied to date share the same overall fold and form a phosphorylated reaction cycle intermediate during transport (reviewed in [4,5]). Many P-type ATPases contain autoinhibitory domains that regulate pumping activity, and such domains have been identified in the C termini of both plant and fungal PM H + -ATPases [6,7]. The autoinhibitory C-terminal domain maintains the pump in a basal state, which is characterized by a low affinity for ATP (K m % 2.5 mM) and an apparent low coupling ratio (if any) between ATP hydrolysis and protons pumped. Within minutes of sensing glucose (in the case of fungi; refs. [8][9][10] or blue light (in plants; ref. [11][12][13], residue(s) in the C terminus are phosphorylated and potential regulatory proteins are attracted. 
This forces the C-terminal domain to release its constraint on the pump, allowing it to enter the activated state, with a high affinity for ATP (K m % 0.5 mM) and presumably tight coupling [ [14][15][16]. The genome of the yeast Saccharomyces cerevisiae encodes two PM H + -ATPase isoforms, the essential and highly expressed Pma1p and the nonessential and weakly expressed Pma2p [17]. Pma1p is the most studied fungal PM H + -ATPase, and knowledge of the autoinhibitory regulation of fungal PM H + -ATPases originates primarily from this pump. When a yeast cell senses glucose, a number of events trigger the full activation of the autoinhibited Pma1p. Glucose sensing induces phosphorylation of at least two residues in the C-terminal region of Pma1p (Ser-911 and Thr-912) [9,10] and changes the distribution of this protein in the PM, causing Pma1p oligomers to cluster in small areas [18]. How the C-terminal domain inhibits the catalytic function of the pump is unknown. Mutational studies have identified residues in the cytosolic core domains that, when altered, change the activation state of the protein. The C terminus is thought to interact with the cytosolic part of the core protein [19,20], which might lead to an unstable phosphorylated intermediate that results in uncoupling of ATP hydrolysis from H + transport [21]. It has been reported that Pma1p interacts with acetylated tubulin, which may stabilize the pump in its basal state [22]. A mechanism has also been proposed according to which glucose sensing leads to activation of a serine protease, that in turn causes hydrolysis of acetylated tubulin and its dissociation from the pump, thereby forcing Pma1p into the activated state [23]. A high-resolution structure of the pump protein in the basal state would provide important clues into the mechanics of autoinhibition; however, such a structure is lacking. Solubilizing a membrane protein from its native environment without altering its conformational equilibrium is challenging, and both terminal truncations and the addition of affinity tags may affect the function of the protein. For example, it has previously been reported that it is impossible to solubilize the related plant PM H + -ATPase without influencing the activation state of the protein [24,25]. The only available crystal structures of a PM H + -ATPase are the 3.6 A structure of a C-terminally truncated and a 5.5 A full-length PM H + -ATPase (AHA2) from the plant Arabidopsis thaliana [26]. Both structures are in the activated state and do not reveal the localization of either of the terminal domains. Even though purification protocols for fungal PM H + -ATPases were published more than 30 years ago [27][28][29], the only reported crystal structure of a fungal PM H + -ATPase is a cryo-electron microscopy structure of a Neurospora crassa PM H + -ATPase, and the resolution of this structure (8 A) was too low to locate the C-terminal regulatory domain [30,31]. In this study, we present a purification protocol for native S. cerevisiae Pma1p that does not employ affinity tags. We further show that the basal state of Pma1p can be isolated with high yield and purity without disrupting its autoinhibition using the detergent 7-cyclohexyl-1-heptyl-b-D-maltoside (cymal-7) and the reactive dye Reactive Red 120. Furthermore, Pma1p can be stabilized with P-type ATPase inhibitors in both the E1P and E2P conformation. 
This purification protocol lays the foundation for obtaining a high-resolution structure of Pma1p in both the basal and activated state, and for advancing studies of single molecule transport by P-type H + pumps, which recently has become possible [32]. Materials and methods Yeast strain and growth conditions Strain YAK2 of S. cerevisiae (MAT, ade2-101, leu2D1, his3D200, ura3-52, trp1D63, lys2-801 pma1D::HIS3, pma2D::TRP1) was used, with yeast PMA1 placed under its own promoter in a centromeric LEU2 plasmid [33]. The yeast was grown at 30°C in minimal medium containing 2% glucose. The optical density at 600 nm (OD 600 ) was determined at 1-h intervals for 28 h, and cells used for protein purification were harvested after 24 h. Cells producing Pma1p in the activated and basal states were prepared as described previously [10]. concentration of detergent (wash buffer). Solubilized Pma1p was incubated on a column for 1 h at 4°C with slow agitation. Five milligrams of protein was used per milliliter of Reactive Red 120. The column was washed with 5 9 CV wash buffer, and bound proteins were eluted first with 5 9 CV wash buffer supplemented with 1 M KCl and then with 5 9 CV wash buffer supplemented with 5 mM ADP. All fractions were collected and concentrated in an Amicon Pro Ò affinity concentrator with 30-kDa cutoff filters. Reconstitution into lecithin liposomes The Pma1p in the activated and basal state was reconstituted into lecithin liposomes as described earlier [21] with the exception that only the detergents used in this study were employed. Pma1p stability The Pma1p was incubated at 4°C in a buffer containing 50 mM MES/KOH (pH 6.5), 20% glycerol, 50 mM KCl, 1 mM DTT, and 1 mM MgCl 2 . The buffer was supplemented with either 1 mM AlF x (1 mM AlCl 2 and 4 mM NaF), 1 mM ADP-AlF x (1 mM ADP, 1 mM AlCl 2 , and 4 mM NaF), or nothing. The protein was incubated for between 4 h and 3 weeks, and protein degradation was analyzed using immunoblot detection with a Pma1p-specific antibody (sc-33735 antibody from Santa Cruz Biotechnology, Dallas, TX, USA). ATPase activity measurements The ATPase activity was determined using the Baginski assay [34]. The assay was carried out at 30°C in buffer containing 20 mM MES/KOH at pH 5.9, 10 mM MgSO 4 , 0-6 mM ATP, 50 mM KNO 3 , 5 mM NaN 3 , 0.44 mgÁmL À1 phosphoenolpyruvate, 4 lgÁlL À1 pyruvate kinase, and 3.5 mM Na 2 MoO 4 . The assay buffers were equilibrated to 30°C, and the assay was started by adding 150 ng protein to 60 lL ATPase buffer in microtiter plates as described [25]. All experiments were performed in triplicate with AE SE. Proton transport and coupling ratio Proton transport into vesicles was measured using fluorescence quenching of 9-amino-6-chloro-2-methoxyacridine (ACMA). Only the neutral form of ACMA can pass membranes freely, and therefore, the dye cannot leave a vesicle again following its protonation in the vesicle lumen. It is not known how ACMA mediates fluorescence quenching; however, as ACMA is an acridine derivative, the mechanism is likely similar to that described for acridine orange [35]. Acridine orange dimerizes when its concentration increases locally and, as the dimer has a different absorbance and fluorescence spectrum compared to the monomer, the initial quenching of fluorescence is linear relative to the amount of H + accumulating in vesicles [35]. 
Assuming that a similar mechanism operates for ACMA, fluorescence changes during the H+ pumping reaction reflect the formation of protonated dimeric dye complexes inside vesicles, but do not directly report either the H+ concentration or ΔpH. H+ pumping and ATP hydrolysis assays were carried out simultaneously in the same sample [36]. The H+-ATPase assay was conducted in 96-well microtiter plates in a buffer containing 10 mM MES/KOH (pH 6.5), 50 mM K2SO4, 5 mM ATP, 2 mM phosphoenolpyruvate, 30 µg·mL⁻¹ pyruvate kinase, 25 µg·mL⁻¹ lactate dehydrogenase, 0.5 µg·mL⁻¹ valinomycin, 0.25 mM NADH, 2 µM ACMA, and various protein concentrations in a final volume of 150 µL. In this assay, ATP hydrolysis is coupled to NADH oxidation. Pma1p in both activation states was reconstituted using the detergent OG as described above. The assay was started by adding MgSO4 to a final concentration of 8 mM, and fluorescence quenching was monitored at excitation/emission wavelengths of 412/480 nm (ACMA) or 350/440 nm (NADH) to minimize spectral overlap between the probes [36]. The pH gradient was collapsed by adding 3 µg nigericin. Linear regression was used to compare ATP hydrolysis and proton uptake rates as follows: the linear regression was calculated in the linear area of both the drop in NADH and ACMA fluorescence. The concentration of purified Pma1p in the basal state was increased until the velocity constant of NADH fluorescence equaled the constant for 2.5 µg purified Pma1p in the activated state. The coupling ratio for Pma1p in the basal state was estimated using the difference between the velocity constant of ACMA fluorescence of the two activation states, assuming that Pma1p in the activated state transports one proton per ATP hydrolyzed (the theoretical maximum) [14]. All experiments were carried out in triplicate (means ± SE).

Mass spectrometric analysis
Fractions enriched in Pma1p were analyzed using tandem LC-MS/MS analysis. The proteins were converted to peptides by applying the endopeptidases Lys-C (Wako Pure Chemical Industries, Wako, Japan) and trypsin (Promega, Madison, WI, USA). The peptides were purified using C18 stage-tips (3M) according to Rappsilber et al. [37] and separated on an in-house-packed analytical reverse-phase column (0.075 mm × 200 mm, 3 µm Reprosil C18, Dr. Maisch GmbH) using a 4–76% acetonitrile gradient over 1 h on a Proxeon Easy-nLC system (Proxeon Biosystems, Roskilde, Denmark). The samples were measured on a Q-Exactive mass spectrometer (Thermo Scientific, Waltham, MA, USA). MS acquisition was performed at a resolution of 70 000 in the scan range from 300 to 1700 m/z. Dynamic exclusion was set to 20 s and the normalized collision energy to 26 eV. The mass window for precursor ion selection was set to 2.0 m/z. The recorded spectra were analyzed using the MAXQUANT software package version 1.2.2.5 [38] by matching the data to the UniProt S. cerevisiae database (version of 06 May, 2012) with a false discovery rate of 1% for proteins and peptides, allowing a maximum of two missed cleavages. Variable modifications were set to 'oxidation of methionines' and 'acetylation of N termini', whereas fixed modifications were set to 'carbamidomethylation of cysteines'. All other parameters were set to the default values of the software.

Protein concentration determination
The protein concentrations were determined using Bradford reagent and bovine serum albumin as a standard [39].
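As an illustration of the rate comparison described above (not part of the original protocol), the sketch below fits the linear portion of the NADH and ACMA traces and converts the slope ratios into an apparent coupling ratio, taking the activated state as one H+ per ATP. The function names, the fitting window, and the example numbers are assumptions for the sketch; only the 3.7-fold and 17.3-fold differences quoted later in the Results are taken from the paper.

```python
import numpy as np

def initial_slope(time_s, fluorescence, window=(10, 60)):
    """Fit a line to the (approximately) linear part of a fluorescence trace
    (numpy arrays) and return its slope in fluorescence units per second."""
    t0, t1 = window
    mask = (time_s >= t0) & (time_s <= t1)
    slope, _intercept = np.polyfit(time_s[mask], fluorescence[mask], 1)
    return slope

def apparent_coupling_ratio(nadh_basal, acma_basal, nadh_act, acma_act,
                            activated_ratio=1.0):
    """Estimate the H+/ATP coupling ratio of the basal state relative to the
    activated state, assuming the activated pump moves `activated_ratio`
    H+ per ATP hydrolysed (1.0 = the theoretical maximum used in the paper).

    The NADH slope is taken as proportional to ATP hydrolysis and the ACMA
    slope as proportional to H+ uptake."""
    hydrolysis_rel = nadh_basal / nadh_act   # basal ATPase rate relative to activated
    transport_rel = acma_basal / acma_act    # basal H+ pumping rate relative to activated
    return activated_ratio * transport_rel / hydrolysis_rel

# Illustrative numbers only: a 3.7-fold difference in ATP hydrolysis and a
# 17.3-fold difference in H+ transport give ~0.21 H+ per ATP for the basal state.
print(apparent_coupling_ratio(nadh_basal=1.0, acma_basal=1.0,
                              nadh_act=3.7, acma_act=17.3))
```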
Expression of Pma1p in Saccharomyces cerevisiae
The PM H+-ATPase proteins encoded by the two genes in S. cerevisiae (PMA1 and PMA2) show 89% identity at the amino acid sequence level; however, PMA1 is expressed at a much higher level than PMA2 [41]. To ensure a homogeneous preparation with only one PM H+-ATPase isoform, we used a yeast strain in which PMA2 has been deleted (YAK2) [33]. To provoke starvation and thereby keep Pma1p in the basal state, the yeast was harvested after 24 h in the stationary phase (Fig. S1). An average of 16 g of cells was harvested per liter of medium.

Purification of Pma1p
It was previously reported that detergent solubilization partially activates the plant counterpart of Pma1p [24,25]. To test whether this also applies to Pma1p, we tested the ability of 20 different detergents to solubilize Pma1p in the basal state from the isolated PMs. For the detergents that could solubilize Pma1p, we determined the apparent activity after relipidation to evaluate the activation state. Pma1p in the activated state solubilized with octyl glucoside (OG) was used as a control, as OG has previously been shown to solubilize Pma1p in a functional state [14]. Four detergents were able to solubilize Pma1p from the PM, and after relipidation, ATPase activity could be restored in Pma1p solubilized with OG, cymal-7, and LDAO, but not C12E8 (Table 1). Using an increase in apparent affinity for ATP (a decrease in Km) as a measure of protein activation, neither OG, cymal-7, nor LDAO activated Pma1p when solubilized and relipidated, indicating that the overall conformation is not altered by the process. The optimal protein-to-detergent ratio was found to be 1:3 for OG and 1:4 for LDAO and cymal-7. Arabidopsis thaliana AHA2 was previously purified without loss in activity using a combination of affinity chromatography facilitated by a hexahistidine tag and anion-exchange chromatography [26,42]. However, reports of functional purification tags and successful column chromatography purification of fungal PM H+-ATPases are sparse, and only size exclusion chromatography has been shown to be somewhat useful for purifying the solubilized protein [43,44].

Table 1. Kinetic parameters of Pma1p in the basal and activated state and the effect of detergent treatment. The apparent activity was determined as specific activity (µmol Pi/mg protein/min) (n = 3 biological replicates; ± SE). n.a., not analyzed.

The P-type sarco/endoplasmic reticulum (SERCA) Ca2+-ATPase was previously shown to bind to the reactive dye Reactive Red 120, and this binding is readily disrupted after applying adenosine di- or triphosphate [45]. We therefore tested whether Reactive Red 120 had similar binding properties for Pma1p. Proteins bound to the column could be eluted with a high concentration of KCl, whereas neither ADP nor ATP eluted any protein. However, contrary to our expectation, only a small proportion of cymal-7-solubilized Pma1p was retained by the column, in contrast to most of the contaminating proteins, which were retained (Fig. 1). The initial flow-through enriched in Pma1p was concentrated and relipidated, and the apparent activity was tested. Pma1p in the basal state showed the same kinetic properties as the autoinhibited pump in membranes (Table 1). The drop in Vmax in both solubilized and enriched protein compared to Pma1p in the PM is likely caused by a fraction of Pma1p facing inward in the lipid vesicles and therefore being shielded from ATP.
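Because a decrease in apparent Km for ATP is used above as the criterion for activation, a straightforward way to extract Km and Vmax from the 0–6 mM ATP series of the Baginski assay is a Michaelis-Menten fit. The sketch below uses SciPy with made-up activity values; it is not the fitting procedure reported by the authors, merely one standard way to obtain the parameters compared in Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Substrate (ATP, mM) and specific activity (µmol Pi/mg protein/min);
# the numbers below are illustrative, not data from the paper.
atp = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
activity = np.array([0.5, 1.0, 1.6, 2.3, 2.9, 3.3, 3.5])

(vmax, km), _cov = curve_fit(michaelis_menten, atp, activity,
                             p0=[activity.max(), 1.0])
print(f"Vmax ~ {vmax:.2f} µmol Pi/mg/min, Km ~ {km:.2f} mM ATP")
```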
Absence of acetylated tubulin in purified Pma1p preparation Even though the purity of Pma1p has been improved with this protocol, some contaminating proteins were still present (Fig. 1A). As it has been reported that acetylated tubulin is essential for keeping Pma1p in the basal state [22,23], we tested whether one of the contaminating proteins was acetylated tubulin. The anti-acetylated tubulin antibody only decorated a band in the PM fractions and in the nonsolubilized material, but was absent from the purified fraction (Fig. 1C). We further analyzed the protein composition of the purified Pma1p preparation using quantitative liquid chromatography-tandem mass spectrometry (LC-MS/ MS). The 10 major contaminants were primarily cell wall proteins (Table 2, Fig. 1A); interestingly, we did not identify any tubulins (for full list of proteins, see Table S1). Transport coupling efficiencies of Pma1p in its basal and activated states We previously showed that ATP hydrolysis is partly uncoupled from H + transport in Pma1p, when in its basal state [21]. However, it is not known whether this is a stable intrinsic property of the polypeptide. To test whether the purification procedure affected the coupling ratio of Pma1p, we measured proton pumping into Pma1p-embedded liposomes using the fluorescent probe ACMA. Simultaneously with measuring proton transport, we detected the ATPase activities in the same well by coupling ATP hydrolysis to NADH fluorescence using phosphoenolpyruvate, pyruvate kinase, and lactate dehydrogenase, which convert the produced pyruvate into lactate by oxidizing NADH to NAD + . The ATP hydrolytic activity was 3.7 times higher in the activated state ( Fig. 2A,C), whereas H + transport was 17.3 times higher in this state than in the basal state (Fig. 2B,C). The purity of Pma1p was equal in both activation states as determined by Coomassie Blue staining of SDS/PAGE gels and immunoblotting using an anti-Pma1p antibody (Fig. 2D,E). Assuming that one H + is transported per ATP hydrolyzed in the activated state [14], the apparent coupling ratio of the basal state was around 0.2 H + pumped per ATP hydrolyzed. When protein concentrations were adjusted between the two states to give the same ATP hydrolytic activity, similar results were obtained ( Fig. 2A-C). The exact coupling ratio cannot be determined with certainty as we do not know whether all Pma1p molecules become activated following the addition of glucose [10]. Furthermore, it is possible that part of the H + pumping activity observed for Pma1p in the basal state is due to a fraction being in the activated state [14,21]. Pma1p stability To test the stability of the Pma1p preparation, we incubated the purified protein for up to 3 weeks at 4°C and analyzed the degradation using immunoblot with anti-Pma1p antibody. As phosphate analogues such as aluminum fluoride (AlF x ) and beryllium fluoride (BeF x ) inhibit Pma1p [21], we incubated the Pma1p in the basal state with either AlF x , which locks the protein in the E2P conformation, or with ADP-AlF x , which locks Pma1p in the E1P conformation. The protein was stable for at least 3 weeks both with and without inhibitors, and no degradation products were identified (Fig. 3). Discussion Stabilization of Pma1p in the basal state does not require accessory proteins Stabilization of the autoinhibited state of PM H + -ATPase may require association with other proteins. 
Indeed, acetylated tubulin has been shown to interact with Pma1p in the basal state only and the addition of acetylated tubulin can even inactivate Pma1p in vitro [22,23]. We therefore expected that acetylated tubulin would co-purify with Pma1p in its basal state. The absence of acetylated tubulin from the purified autoinhibited Pma1p protein demonstrates that acetylated tubulin is not essential for keeping Pma1p in the basal state. Alternatively, acetylated tubulin may be involved in rearranging or mediating the oligomerization of Pma1p in the membrane. A number of contaminating proteins were still present in the purified Pma1p preparation but in very low amounts to compared to Pma1p. Interaction with an external protein partner can therefore not be a requirement for keeping Pma1p in the basal state. Regulation of transport coupling ratios is an intrinsic property of the PM H + -ATPase In this study, we have shown that the yeast H + -ATPase can be purified without the use of affinity tags and that both activation states of the protein can be stably maintained using this protocol. This allows, for the first time, for the kinetic properties of both regulatory states of the purified PM H + -ATPase to be compared directly. In a previous study employing reconstituted plasma membrane vesicles and not purified protein, it was suggested that the basal state of Pma1p has a low coupling ratio between ATP hydrolysis and protons pumped, whereas the glucoseactivated state has a high coupling ratio [14]. Using the purified native preparations reported here, we were able to confirm this finding and demonstrate that activation of the PM H + -ATPase causes an approximately fivefold increase in its H + pumping efficiency. Recently, the kinetics of the plant PM H + -ATPase AHA2 at single molecule level were analyzed, and the transport properties of the wild-type and a C-terminally truncated pump were compared [32]. However, as both the wild-type and mutant proteins were tagged, and even carried different tags attached to different parts of the molecule, which may interfere with its catalytic properties [32], the results are hard to interpret. The method presented in this work allows for direct comparison of the properties of native PM H + -ATPase in both activation states and could be useful for future single molecule measurements. Perspectives for future structural studies Progress in our understanding of the structure of fungal PM H + -ATPases has been sparse since 1998, when a 2D structure at 8 A resolution was published [30]. Although several factors may have contributed to this slow rate of progress, difficulties in obtaining defined conformational states are likely to have been among these. Several other members of the P-type ATPase family have been crystallized [4], and at present, more than 100 X-ray crystal structures are available in the Protein Data Bank. All crystallized P-type ATPases have been stabilized with inhibitors shown to lock the P-type ATPases in a defined conformation of the catalytic cycle. The metal fluorides magnesium fluoride, beryllium fluoride, and aluminum fluoride function as phosphate analogues and have been useful in crystal studies, especially of the SERCA Ca 2+ -ATPase [46,47]. As these fluorides also inhibit Pma1p in both activation states [21], they may prove useful for the crystallization of the pump protein purified in a defined regulatory state as described here. 
The fungal PM H + -ATPase has been heralded as a novel target for fungicides; however, only a few potent inhibitors have been reported to date and all of these inhibit other P-type ATPases too [e.g., refs. [48][49][50]. A high-resolution crystal structure will not only provide more detailed knowledge of the structure and autoinhibitory function of fungal PM H + -ATPase, but will also be useful in the search for potent and specific inhibitors of this essential fungal enzyme. The Pma1p purification protocol described here brings us a step closer to determining the high-resolution structure of the fungal PM H + -ATPase in both activation states. Supporting information Additional Supporting Information may be found online in the supporting information tab for this article: Fig. S1. Cells were harvested during stationary growth of YAK2 cells. Table S1. Full list of LC-MS/MS identified proteins in samples with isolated Pma1p.
The Contribution of Transposable Elements to Expressed Coding Sequence in Arabidopsis thaliana The goal of this study was to assess the extent to which transposable elements (TEs) have contributed to protein-coding regions in Arabidopsis thaliana. To do this, we first characterized the extent of chimeric TE-gene constructs. We compared a genome-wide TE database to genomic sequences, annotated coding regions, and EST data. The comparison revealed that 7.8% of expressed genes contained a region with close similarity to a known TE sequence. Some groups of TEs, such as helitrons, were underrepresented in exons relative to their genome-wide distribution; in contrast, Copia-like and En/Spm-like sequences were overrepresented in exons. These 7.8% percent of genes were enriched for some GO-based functions, particularly kinase activity, and lacking in other functions, notably structural molecule activity. We also examined gene family evolution for these genes. Gene family information helped clarify whether the sequence similarity between TE and gene was due to a TE contributing to the gene or, instead, the TE co-opting a portion of the gene. Most (66%) of these genes were not easily assigned to a gene family, and for these we could not infer the direction of the relationship between TE and gene. For the remainder, where appropriate, we built phylogenetic trees to infer the direction of the TE-gene relationship by parsimony. By this method, we verified examples where TEs contributed to expressed proteins. Our results are undoubtedly conservative but suggest that TEs may have contributed small protein segments to as many as 1.2% of all expressed, annotated A. thaliana genes. Introduction Transposable elements (TEs) are a ubiquitous feature of plant genomes. In maize, for example, TEs comprise 60-80% of the genome (SanMiguel et al. 1996;Messing et al. 2004). The proportion is lower, but still substantial, in compact genomes like those of rice and Arabidopsis thaliana. TEs represent 29% of the rice genome (Messing et al. 2004) and 10% of the 125-Mb Arabidopsis genome (Arabidopsis Genome Initiative 2000). TEs are traditionally categorized into two groups based on their mode of transposition. Class I elements, or retrotransposons, copy and paste to a new location via an RNA intermediate, which then reintegrates into the genome at a new location after reverse transcription. Class II elements are DNA transposons. DNA transposons excise out of their chromosomal location as DNA and reinsert elsewhere in the genome. Maize and other grasses contain predominantly class I elements. In contrast, class I TE activity is apparently suppressed in Arabidopsis (Wright and Voytas 1998), with DNA transposons approximately equaling retrotransposons in copy number (Wright and Voytas 1998;Arabidopsis Genome Initiative 2000). The roles of TEs in genome evolution are varied (Le Rouzic et al. 2007) but many are harmful to genome function. Common examples include insertional inactivation of genes (Greene et al. 1994) and DNA rearrangement via ectopic recombination (Kazazian 2004;Bennetzen 2005). Nonetheless, a subset of TE-mediated events is adaptive: in Drosophila, for example, TE insertions have contributed to enhanced insecticide resistance, either by affecting gene expression or by changing gene structure (Schlenke and Begun 2004;Aminetzach et al. 2005). 
Similarly, a ''domesticated'' TE-derived transposase domain contributed directly to two vertebrate proteins, RAG1 and RAG2, that are central to the immune system of jawed vertebrates (Kapitonov and Jurka 2005). The insertion of TE sequence fragments into open reading frames (ORFs) of vertebrate genes may be a general phenomenon (Nekrutenko and Li 2001). Consistent with this conjecture, TEs share sequence similarity with thousands of human protein-coding sequences (Britten 2006), many of which remain functional (Wu et al. 2007). The contribution of TEs to plant genes is not yet clear, but some TE-based phenomena have been well documented. For example, reverse transcription of mRNA transcripts by class I transposons has generated more than 1000 retroposed genes in rice, many of which have recruited exons from flanking regions to produce functional genes (Wang et al. 2006). TEs also capture and shuffle gene fragments (Jiang et al. 2004;Brunner et al. 2005;Lai et al. 2005). Maize helitrons, for example, capture and move gene fragments to the extent that *20% of genes (or gene fragments) differ in location between two maize lines (Lai et al. 2005;Morgante et al. 2005). Additionally, in A. thaliana, helitrons proliferated after the acquisition of exon fragments (Hollister and Gaut 2007). Many of the gene fragments captured by TEs are expressed (Jiang et al. 2004;Brunner et al. 2005;Lai et al. 2005), fueling speculation that TE-mediated gene shuffling can lead to novel genes. While it is clear that TEs can capture gene fragments, there are few direct examples that TE sequences have contributed to functional plant genes. One exception is the domestication of a hAT-like transposase by the DAYSLEEPER gene in Arabidopsis (Bundock and Hooykaas 2005), but the genome-wide extent of TE incorporation into functional genes remains unknown. Plant genomes possess not only TEs but also an abundance of gene duplications. Duplicated genes provide functional redundancy, a potential template for evolutionary innovation and a comparative context to infer the incorporation of TE-like sequence in individual genes (Gotea and Makalowski 2006). Plants are a particularly rich system in this respect. All plant genomes studied to date exhibit evidence of ancient whole-genome duplication events in their evolutionary past, including relatively small genomes like that of A. thaliana (Adams and Wendel 2005). In Arabidopsis, duplicated chromosomal regions retain *25% of their genes as duplicates (Blanc et al. 2003), and a similar proportion of Arabidopsis genes (*16%) have been duplicated as a result of local, tandem duplication events (Zhang and Gaut 2003). An important consequence of this extensive duplication is large gene families. Plants possess more gene families-with more members per gene family, on average-than other eukaryotes (Lockton and Gaut 2005). In this study we exploit gene family data from Arabidopsis to assess the possibility that TEs have contributed to expressed peptides. To achieve this, we search for TErelated sequences in expressed sequence tags (ESTs) of Arabidopsis and verify that the TE-related sequence had a genomic counterpart in an annotated protein-coding region. However, there is an inherent difficulty with this approach: when there is clear evidence that a TE is homologous to a portion of a coding sequence, it is difficult to discriminate whether the TE contributed to the coding region or acquired the coding fragment, as commonly occurs with helitrons and other TEs. 
To address this uncertainty, we examine gene family data. In the comparative and phylogenetic context of gene families, one can use parsimony arguments to infer whether a subset of gene family members contains a unique insertion consistent with contribution from a TE. We find evidence for TE homology to expressed regions in more than 2000 genes and demonstrate that TE insertion events have led to the formation of TE-gene chimeras. Genomic, EST, and TE Sequences We downloaded three types of Arabidopsis sequences from TAIR (The Arabidopsis Information Resource; http:// www.arabidopsis.org). All three sequence types were based on Arabidopsis thaliana genome release 8. The first type was genomic ''Seq'' gene sequences, which consist of 5 0 and 3 0 untranslated regions (UTRs), introns, and exons; the second was coding sequence (CDS, or exon-only sequences); and the third was peptide sequences. The Seq data contained 30,271 annotated sequences; the CDS and protein data each contained 29,161 sequences ( Fig. 1). In addition, we downloaded A. thaliana UniGenes. UniGene Build No. 49 was downloaded via NCBI's Entrez Web site. Our database consisted of 25,693 UniGene sequences. Our TE database was comprised of sequences derived from a BLASTn query (1e-20 cutoff, no repeat filtering) against the A. thaliana release 5 genome. TE queries were tabulated from three sources: (i) TEs described in a previous survey of 17 Mb of the A. thaliana genome (Le et al. 2000); (ii) A. thaliana TEs found in TIGR's repeat database; and (iii) all GenBank ORFs annotated as transposase-related in the Arabidopsis genome. Our final TE database consisted of 3079 nonredundant TE sequences in the Arabidopsis thaliana genome. The TE sequences ranged in length from a 65-base mariner-like TE fragment to a 15.8-kb MULE. The mean length of our TE sequences was 1134 bases. Candidate Identification To identify genes that consist in part of TE-like sequence, we implemented a decision tree based on a BLAST search among TE, UniGene, CDS, and genomic sequences (Fig. 1). In this initial BLAST, the TE, CDS, and UniGene FASTA sequences were combined into a single database. The Seq file was used as the query in a tBLASTx (Altschul et al. 1997), with repeat filtering off, against the TE, CDS, and UniGene subject database. BLAST results were parsed to find Seq sequences that hit, at an e-value \1e-10, all three types of sequences (TE, CDS, and UniGene) in the subject database. These data were further parsed to find BLAST alignments in which all three types of subject sequences aligned to a common region of the Seq sequence, so that a TE was found to overlap expressed (UniGene), exonic (CDS) sequence. Seq sequences that did not meet this criterion were not studied further. Each comparison among sequence types provided some information. For example, the BLAST alignment between Seq and Unigene confirmed that the UniGene was not an EST cloning artifact and confirmed gene expression; the alignment among Seq, UniGene, and CDS confirmed exon/intron boundaries; the alignment of TE with Fig. 1 Flowchart giving an overview of the methods used in this study: 30,271 Seq sequences were queried against a BLAST subject database of 57,875 TE, UniGene, and CDS sequences. 
Subsequently, the number of individual BLAST alignments was whittled down based on different criteria: gray numbers represent the number of BLAST alignments rejected from further analysis at each step UniGenes confirmed expression of a TE-like sequence; and the alignment of TE with annotated Seq data confirmed that the TE-like region is found in genomic sequence. Gene Ontology Analysis For each genomic Seq query that successfully hit a TE, CDS, and UniGene sequence, we assessed its classification according to Gene Ontology (GO) (Ashburner et al. 2000) to determine if any biases in molecular function existed. The A. thaliana GO Slim database (Berardini et al. 2004) was downloaded from TAIR, and only entries that corresponded to the function of our genes of interest were parsed. These data were compared to the distribution of GO Slim functional categories for the whole genome using 2 9 2 chi-square contingency tables. Gene Family Identification and Evolution When there is homology between a TE and a coding region, one cannot infer the direction of the TE event. Did the TE contribute to the gene or, conversely, did the TE acquire a copy of the gene fragment? To address this question, we used gene family data. The phylogenetic distribution of the TE on a gene family should allow one to distinguish TE insertion from acquisition, using parsimony arguments. For these analyses, we relied on the Arabidopsis high-stringency gene family data set of Rizzon et al. (2006). These gene families were defined by a homology criterion of pairwise identities C50% over C90% of the peptide sequence; paralogues were grouped using the single-linkage criterion, resulting in 10,542 genes clustered into 3544 gene families. For each gene family, we took the following steps. We first assembled each gene family as a FASTA file of Seq genomic sequences and then identified the location of the TE-like region in the originally identified ''TE gene'' from the initial BLAST. Each Seq sequence represents an unspliced genic region (including introns, exons, and UTRs), as found on the chromosome. Because TE activity takes place at the chromosomal level, we aimed to identify TE-like regions in Seq sequences. We used tBLASTx to compare the region of strong TE homology to the Seq sequence of all other paralogues in an attempt to identify further TE-like regions. Each resulting e-value was recorded. As sequence divergence among paralogues is a confounding factor in the identification of TE insertions in gene families, we devised a BLAST resampling procedure to determine if TE-like regions were atypical relative to the other genic regions. To do this, 100 coding sequence fragments were randomly chosen from the gene family member that was originally identified as containing a TElike region. These random fragments were the same length as the TE-like region but did not overlap with it. Each fragment was used as a tBLASTx query against the entire gene family, using identical criteria as in the initial BLAST. Coding sequence fragments that hit a paralogue at an e-value less than the previous TE-region tBLASTx value for that same paralogue were considered a successful hit and recorded. Paralogues with more successful hits had higher BLAST resampling scores. In summary, high blast resampling values indicated that the non-TE regions were more similar among paralogues than the TE region, suggesting that the putative TE insertion into protein-coding regions represented a region of aberrant sequence evolution. 
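A minimal sketch of the BLAST resampling logic described above: random fragments with the length of the TE-like region are drawn from the rest of the gene, each is scored against every paralogue, and the fraction of fragments that beat the TE-region e-value gives the resampling score. The sampler and tally shown here are illustrative; `evalue_lookup` is a hypothetical placeholder that a real pipeline would implement as a tBLASTx search against the gene family, and the seed and fragment count are assumptions.

```python
import random

def sample_fragments(gene_seq, te_start, te_end, n=100, seed=0):
    """Draw up to n random fragments from gene_seq with the same length as the
    TE-like region [te_start, te_end), avoiding overlap with that region."""
    rng = random.Random(seed)
    length = te_end - te_start
    candidates = [i for i in range(len(gene_seq) - length + 1)
                  if i + length <= te_start or i >= te_end]
    starts = rng.sample(candidates, min(n, len(candidates)))
    return [gene_seq[i:i + length] for i in starts]

def resampling_scores(fragments, te_evalues, evalue_lookup):
    """For each paralogue, count fragments whose best e-value against that
    paralogue is lower (better) than the TE-region's e-value.

    `te_evalues` maps paralogue id -> e-value of the TE-like region;
    `evalue_lookup(fragment)` must return {paralogue_id: best_evalue} and, in
    the actual pipeline, would wrap a tBLASTx search against the gene family."""
    hits = {pid: 0 for pid in te_evalues}
    for frag in fragments:
        best = evalue_lookup(frag)
        for pid, te_e in te_evalues.items():
            if best.get(pid, float("inf")) < te_e:
                hits[pid] += 1
    # Report each paralogue's score as a percentage of sampled fragments,
    # analogous to the resampling score used in the study.
    return {pid: 100.0 * count / len(fragments) for pid, count in hits.items()}
```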
We subsequently aligned all peptides within gene families using T-Coffee (Notredame et al. 2000) with default parameters. The TE-like regions of the genes were excluded from the aligned peptides, as these could bias both phylogenetic analyses and our inferences. Alignments were visually inspected and hand-adjusted, then employed to construct Poisson-corrected neighbor-joining trees with 1000 bootstrap replications, using MEGA v3.1 (Kumar et al. 2004). Exon Sequences Containing TE-Related Fragments We queried a database of 57,875 TE, CDS, and UniGene sequences with 30,271 genomic Seq sequences. Of the 30,271 BLAST queries, 5738 hit to all three types of sequence in the subject database (Fig. 1). These 5738 results were further parsed to find BLAST alignments in which TE, CDS and UniGene sequences overlap, aligning to the same region of the Seq sequence-2373 alignments passed this criterion, leaving 3365 to be rejected. None of the 2373 alignments involved genes functionally annotated as a TE or a pseudogene. Of the rejected alignments, in 2472 cases only the TE hit the Seq query, with no evidence of exon overlap or gene expression, suggesting that the TElike sequence was found in an intron. In 835 rejected alignments, the TE sequence hit the Seq query and overlapped with CDS, but not its corresponding UniGene, perhaps indicating either a match to a pseudogene or an erroneous structural gene call. For a further 58 rejected results, the TE aligned to the Seq query and a UniGene, but not its CDS, likely indicating a match to a UTR. The remaining 2373 BLAST hits possessed the expected gene structure and a TE in at least one expressed exon (Fig. 1, Table 1); they comprised our dataset for further analysis (see Supplementary Table S1 for a comprehensive list). For the 2373 alignments, the Seq genomic sequence matched the TE sequence, with a mean alignment length of 833 bases and a mean BLAST e-value score of 2.65e-12. A subset of 201 alignments showed strong TE sequence homology across 90% or more of the length of the gene. This suggests that, rather than contributing a small segment to the gene as with the majority of alignments, these 201 TEs may have been involved in TE domestication events. Of the 2373 genes, 162 had stop codon overlapping the TE-related sequence, suggesting that TEs may have either contributed additional 3 0 exon sequence or truncated the gene product by contributing a stop codon. These 162 genes remain expressed, and at least several are functionally well characterized: for example, DET3 (At1g12840) (Schumacher et al. 1999), PGP4 (At2g47000) (Terasaka et al. 2005), and HYD1 (At1g20050) (Topping et al. 1997). Of the 2373 ''TE genes,'' 125 were annotated as alternatively spliced. In 43 cases, the TE was found to overlap the gene's splice junction, raising the possibility that, of 2373 putative TE contributions to genes, 43 contributed both protein-coding sequence and an alternative splice site. We also examined the chromosomal locations of the 2373 genes and compared their distribution across chromosomes to that of our TE database. We found no bias toward any chromosome for these putative TE-gene chimaeras (data not shown). Figure 2 shows the proportions of TEs involved in the 2373 putative TE-gene chimeras compared to all TEs in our TE database. Perhaps the most striking observation is the statistically significant bias against helitron-like sequences within exons; helitrons represent 18.9% of the TEs in our database and only 2.4% of exon hits. 
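The methods describe 2 × 2 chi-square contingency tables for testing over- or underrepresentation; a comparable test for the TE-family proportions just quoted might look like the sketch below. The helitron counts are approximations back-calculated from the reported percentages (2.4% of 2,373 exon hits versus 18.9% of the 3,079-element TE database), not the exact counts underlying Figure 2.

```python
from scipy.stats import chi2_contingency

def representation_test(hits_in_category, hits_total,
                        genome_in_category, genome_total):
    """2x2 chi-square test of whether a TE family (or GO category) is over- or
    underrepresented among exon hits relative to the genome-wide TE set."""
    table = [
        [hits_in_category, hits_total - hits_in_category],
        [genome_in_category, genome_total - genome_in_category],
    ]
    chi2, p, _dof, _expected = chi2_contingency(table)
    return chi2, p

# Approximate helitron counts back-calculated from the reported percentages.
chi2, p = representation_test(57, 2373, 582, 3079)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```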
Helitrons have been shown to capture gene fragments (Lai et al. 2005;Hollister and Gaut 2007). If exon capture commonly leads to novel gene formation, the signal of remnant helitrons is not discernible in our data. However, helitron TE sequence is similar only at the 5 0 and 3 0 ends, varying considerably internally (Kapitonov and Jurka 2001). Thus, an intrinsic bias against finding helitrons may exist in our BLAST analysis. Mariner-like class II transposon sequences are also significantly underrepresented in exons. Members of the mariner TE family have a 5 0 -TA-3 0 target site, so it may not be surprising that these TEs are not often found in GC-rich, gene-rich regions of the genome. En/Spm elements are significantly overrepresented in exons. TEs in the En/Spm superfamily are known to preferentially insert into hypomethylated gene-rich regions of plant genomes (Kunze and Weil 2002), to the extent that they are used as plant mutagens (T-DNA) (Wisman et al. 1998;Krysan et al. 1999). Another surprising result is the significant overrepresentation of copia-like LTR retrotransposons in the putative chimeric gene dataset compared to our whole TE database. In contrast, there is a significant bias against gypsy-like LTR retrotransposons. While Wright et al. (2003) estimated roughly equal numbers of copia-and gypsy-like retroelements in the A. thaliana genome, our TE database contains considerably more gypsy-like than copia-like TEs (468 and 116, respectively). Both of these observations may suggest that the bias toward copia-like and against gypsy-like elements in coding regions may not be a biological phenomenon, but the result of a deficiency of copia sequences in our original TE database and an excess of gypsy sequences. Lending support to the veracity of this result, however, copia-like elements have been identified previously as having an insertion preference near genes in maize, while gypsy-like elements have been observed to preferentially insert into other repetitive elements (Bennetzen 1996). This discussion of copia and gypsy make the important point that these comparisons could be sensitive to the method of genome-wide TE identification that was used to compile the original TE database. Our initial compilation of a genome-wide TE query database was conservative with respect to method (using BLASTn as opposed to tBLASTn or other repeat-finding criteria) and stringency (using BLASTn hit e-values\1e-20). As a result, our TE database used in the Seq-Unigene-CDS-TE blast comparison was smaller, in terms of the number of identified TE sequences, than previous estimates of the genome-wide complement of TEs in A. thaliana (Arabidopsis Genome Initiative 2000; Wright et al. 2003). However, our use of this database also ensures that our results are conservative, with respect both to the number of genes found to have TE homologies and to the believability of results. Even so, our trends are comparable. Qualitatively, the trends in Fig. 2 remained unaffected using the genome-wide percentage TE estimates based on Wright et al. (2003), except for the aforementioned copia and gypsy result. For example, Wright et al. (2003) estimated that helitrons and SINE elements comprise *23% and *3% of genomic TEs, respectively, whereas we estimate *20% and *7%, respectively. Functional Biases of Genes with Homology to TEs The ORFs of functional TEs encode a narrow range of functions. 
For example, in order to transpose successfully, a class II DNA transposon only needs to bind and cut both its terminal inverted repeats and its target site using a single transposase enzyme. One might expect, therefore, that chimeras between TEs and genes would also encompass limited function. An example is the human SETMAR protein, which is a chimera between a mariner class II TE and a previously existing protein (Cordaux et al. 2006). The function of SETMAR is unknown, but it appears that the TE contributed a transposase domain and, consequently, a new DNA-binding function to SETMAR. Following this example, one could predict that exons with TE-like sequences may be enriched for binding functions. Accordingly, we assessed GO functions for genes containing TE-like sequences (Fig. 3). Of all 15 GO Slim functional categories, 10 categories were significantly over-or underrepresented for genes with TE-like sequences compared to all genes in the Arabidopsis genome. Meeting our expectations, the ''transcription factor activity'' GO Slim category was significantly overrepresented for putative TE-gene chimeras. Contrary to our prediction, however, neither ''nucleic acid binding'' nor ''DNA or RNA binding'' functions were significantly over-or underrepresented. Most significantly overrepresented were both the ''kinase activity'' and the ''transferase activity'' functions. Both kinase and transferase genes are known to form large gene families in Arabidopsis (Meyers et al. 1998;Frova 2003). Perhaps this functional redundancy permits the acquisition of TE sequence with few detrimental consequences. Also, ''TE genes'' were significantly overrepresented in the ''nucleotide binding'' and ''receptor binding/activity'' functional categories. ''TE genes'' were significantly underrepresented in the ''transporter activity'' GO Slim functional class. This group of functions encompasses proteins which facilitate transmembrane transport-a function not commonly associated with TEs. Also underrepresented were the three ''catch-all'' categories of ''molecular function unknown,'' ''other enzyme activity,'' and ''other molecular functions.'' Putative TE-gene chimeras were also poorly represented in the ''structural molecule activity'' GO Slim category, with only 1 of the 907 genes in this category demonstrating homology to TE-related sequences. If the ''structural molecule activity'' category consists primarily of conserved housekeeping genes, these genes could be more sensitive to perturbation by TE insertion than genes in other functional groups. Gene Family Data Help Discriminate Between TE Insertion and Co-option Thus far we have described 2373 examples of homology between TEs and expressed exonic sequence. But with homology data alone, we cannot infer the direction of the relationship. That is, did TEs contribute sequence to exons, thus providing potentially adaptive material, as has been widely argued (Britten 2006;Cordaux et al. 2006), or did TEs co-opt genic sequence, as has been demonstrated previously (Jiang et al. 2004;Lai et al. 2005)? Although the direction of sequence relationship can be difficult to decipher, gene family phylogenies can provide insight (Gotea and Makalowski 2006). With gene family data and a phylogenetic context, there is the possibility to infer directionality using parsimony arguments. We compared each of the 2373 Seq to TE-UniGene-CDS homologues to determine if they belonged to gene families. Of the 2373 genes, 1928 were single-copy (Fig. 
1), which provided no information as to directionality, as they are not present in a gene family, and were thus not considered further. For each of the remaining genes, the gene families in which they belonged were assembled into 391 unique gene families and aligned. For each of these 391 multigene families, we characterized the distribution of the TE-like region and, after determining the length of the TE-like region, performed our BLAST resampling test. This resampling test compares randomly chosen, non-TE fragments of the same length as the TE-like region. In 155 cases, the BLAST resampling test could not be performed because the TE region was longer than the flanking gene regions. BLAST-based searches for regions of TE homology in the remaining 236 genes revealed that every paralogue of 191 gene families contained the same TE-like sequence; these gene families were discarded from further consideration as no phylogenetic inference regarding TE insertion or cooption could be made, leaving 46 gene families under consideration. In 39 cases low BLAST resampling scores suggested that the TE-like region was not out of the ordinary with regard to divergence among gene family members. Thus, in these cases we could not clearly identify the TE region as a unique insertion. This led us to conclude that sequence evolution among the gene family members was responsible for the result, and not a TE insertion event (see Fig. 4a for an example). Seven gene families remained on which to perform further analyses. Given the seven gene families that met our strict requirements, we employed two additional criteria to discriminate between TE insertion or gene sequence acquisition. The first employed parsimony arguments: a clade of ''TE-gene'' chimeras within a gene family with many paralogues that lack the TE-like sequence argues strongly for a TE insertion event. Second, we examined each alignment by eye for either an insertion or an unusually divergent region at the location of the TE-like sequence (as identified by the original BLAST). For example, the large, 22-paralogue cytochrome P450 gene family contained only a single paralogue (At3g20950) in which 363 base pairs (bp) of a Copia-like element perfectly matched the start of the gene, contributing an intron, and extending into the second exon (Supplemental Fig. 1). This TE-like region exists as an insertion only in paralogue At3g20950. The other 21 paralogues in this gene family do not possess this same sequence. Moreover, the BLAST resampling results also indicated that the TE-like region is atypical with regard to sequence divergence. Thus one can infer that At3g20950 is an example of a single TE insertion into a coding region of an expressed gene (Fig. 4b). We applied our additional criteria to all seven of the remaining gene families. A second example of TE insertion was found in one paralogue (At1g74290) of an ''esterase/ lipase/thioesterase'' gene family (Fig. 4c), where a single paralogue in the nine-member gene family was found to contain a high sequence similarity to an Ac-like TE. For the remaining five gene families, we were unable to conclude that either ''TE contribution'' or ''TE co-option'' was the cause of the pattern of TE-like regions on the phylogenies. Most (four or five) of these gene families were twomember gene families, in which parmisony arguments are impossible to apply ( Table 1). One of these five additional gene families was four paralogues in size, in which two putative TE-gene chimeras formed a single clade. 
Two other paralogues formed their own clade, thus making it unclear whether an ancient TE insertion or co-option was responsible for the pattern of TE-like sequence on the tree (Fig. 4d). Although our parsimony arguments cannot differentiate between TE acquisition and TE insertion in these five examples, these may still represent true TE contributions to coding sequence. In these cases, the availability of an outgroup sequence, such as A. thaliana's sister species Arabidopsis lyrata, would facilitate this distinction. Additionally, although parsimony arguments based solely on distributions of TE-regions on phylogenetic trees cannot be made in these five examples, one may argue that, since exon capture by TEs is a relatively rare event in comparison with TE insertion via transposition, TE insertion may be the most parsimonious conclusion. Implications of the Methods and Results Overall, we found 2373 genes where coding sequence, ESTs, and TEs showed strong BLAST identity to the same genomic region. At the level of sequence homology, then, we provide evidence that TE-like sequences are present in expressed protein-coding sequences in 7.8% of Arabidopsis annotated genes. We caution that some of these could be expressed transcripts that do not contribute to the proteome, but at present the number of confirmed proteins is not sufficient to provide an unbiased genome-wide analysis at the protein level (Gotea and Makalowski 2006). We then addressed the question of directionality and found only a handful of gene families with compelling evidence for a TE insertion event. These few cases likely do not represent the full extent of TE contribution to exons in A. thaliana and, as such, likely underestimate the true picture. What factors led us to believe that we underestimated the number of TE-gene chimeras? First, as mentioned above, we began with a TE query database that was compiled using conservative methods. Second, our phylogenetic analyses are biased against older gene families, as only young gene families tended to pass our BLAST resampling test, which rejected overly divergent paralogues. Third, our phylogenetic methods were amenable only to Arabidopsis genes within gene families. Further inferences about the direction of TE-gene homologies for singleton genes may be possible from multispecies analysis (e.g., Gotea and Makalowski, 2006); to this end, ongoing genome sequencing of additional Brassica taxa will provide a valuable resource for deciphering the contributions of TEs to annotated protein-coding regions. Finally, we used gene family data based on stringent parameters (C50% BLASTp identity over C90% of the sequence), and only 34.8% of Arabidopsis genes were included in gene families under this definition (Rizzon Conclusions are based on parsimonious events, assuming equal probability of excision and insertion, as well as consideration of the mode of TE replication. The underlined gene was originally identified in the initial BLAST as possessing a putative expressed TE in CDS. To the right of each locus tag are two numbers: first, the TE vs. gene family tBLASTx e-value; second, the result of the BLAST resampling (as a percentage). The text below each figure describes the inference drawn. Gene family names and biological functions are given in Table 1 J Mol Evol (2009) 68: 80-89 87 et al. 2006). TE-gene chimeras are likely to be assigned more often to the *65% of genes that are not in a gene family. 
The reason is that a TE insertion changes the sequence of its peptide, making it less similar to its homologues and resulting in its exclusion from a gene family. To examine the effect of gene family definitions on our study, we repeated the phylogenetic analyses using lowerstringency gene family definitions (paralogues with C30% identity over C70% of the peptide) (Rizzon et al. 2006). BLAST resampling of the originally-identified TE genes, however, often showed very low match proportions, making it difficult to interpret whether the TE-like region was unique. After repeating the phylogenetic analysis with lowstringency gene families, it became clear that our use of high-stringency paralogues led to fewer inferences of TE insertion events, but limited false positives. There is no doubt that TEs are major contributors to the evolution of plant genomes (Jiang et al. 2004;Bennetzen 2005;Brunner et al. 2005;Lai et al. 2005). It is also clear that chimeric constructs are relatively common in plants, particularly when TEs acquire portions of coding regions (Jiang et al. 2004;Wang et al. 2006). Thus far, however, the extent of TE contribution to expressed and putatively functional proteins has not been assessed. Despite the conservative nature of our analysis, we found compelling evidence for TE insertion into expressed protein-coding sequences. Our results reside between two extremes, represented by human studies, which claim that more than 1000 proteins contain TE sequence (Nekrutenko and Li 2001;Britten 2006), and Drosophila melanogaster, which seems to possess very few expressed TE-gene chimeras (Lipatov et al. 2005). With very few exceptions (e.g., Gotea and Makalowski, 2006;Bundock and Hooykaas 2005), the directionality of these relationships (contribution or co-option of genic regions by TEs) has not been determined. We have found only a handful of cases for which the evidence of TE contribution to a coding region is strong but expect that larger plant genomes, with correspondingly larger TE complements, contain more evidence for TE contributions to coding regions. Even so, the contribution of TEs to TE-gene chimeras may not be small in Arabidopsis. We found that 15% (7 of 46) of the examined multigene families provided compelling evidence for incorporation of TE sequence into coding regions. If this proportion is representative, then *361 of our initial set of 2373 ''TE genes'' represent TE contributions to coding regions, representing *1.2% of all annotated A. thaliana proteins.
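The closing extrapolation can be reproduced directly from the numbers given above; the short calculation below simply restates that arithmetic (7 of 46 examined families, scaled to the 2,373 candidate genes and the 29,161 annotated proteins) and adds nothing beyond the figures already stated.

```python
families_with_insertion = 7
families_examined = 46
candidate_genes = 2373
annotated_proteins = 29161

# Scale the fraction of families with a supported TE insertion to all candidates.
estimated_chimeras = candidate_genes * families_with_insertion / families_examined
print(round(estimated_chimeras))                                 # ~361 genes
print(f"{100 * estimated_chimeras / annotated_proteins:.1f}%")   # ~1.2% of annotated proteins
```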
Inhibitors of Lipoxygenase and Cyclooxygenase-2 Attenuate Trimethyltin-Induced Neurotoxicity through Regulating Oxidative Stress and Pro-Inflammatory Cytokines in Human Neuroblastoma SH-SY5Y Cells Trimethyltin (TMT) is an environmental neurotoxin that mediates dopaminergic neuronal injury in the brain. In this study, we characterized the toxic mechanism and possible protective compounds against TMT-induced neurotoxicity in human dopaminergic neuroblastoma SH-SY5Y cells. Antioxidants such as melatonin, N-acetylcysteine (NAC), α-tocopherol, and allopurinol alleviated TMT toxicity. Apoptosis induced by TMT was identified by altered expression of cleaved caspase-3, Bax, Bcl-2, and Bcl-xL through Western blot analysis. The iron chelator deferoxamine ameliorated the alteration of apoptosis-related proteins through TMT exposure. TMT also induced delayed ultrastructural necrotic features such as mitochondrial swelling and cytoplasmic membrane rupture; NAC reduced these necrotic injuries. Esculetin, meloxicam, celecoxib, and phenidone decreased TMT toxicity. Elevation of the pro-inflammatory cytokines IL-1β, TNF-α, and NF-ĸB and reduction of the antioxidant enzymes catalase and glutathione peroxidase-1 (GPx-1) were induced by TMT and ameliorated by inhibitors of LOX and COX-2 enzymes. Both NMDA and non-NMDA antagonists attenuated TMT toxicity. The free calcium ion modulators nimodipine and BAPTA/AM contributed to neuronal survival against TMT toxicity. Inhibitors of the phosphoinositide 3-kinase/protein kinase B/mammalian target of rapamycin pathway, an autophagy regulator, decreased TMT toxicity. These results imply that TMT neurotoxicity is the chief participant in LOX- and COX-2-mediated apoptosis, partly via necrosis and autophagy in SH-SY5Y cells. Introduction Trimethyltin (TMT) represents a group of organotin compounds that are commonly used as biocides or stabilizers in agricultural and industrial fields [1,2]. Accordingly, TMT is considered a risk factor for food safety and may cause toxicological issues in humans and ecosystems [3]. The accumulation of TMT in the brain has been known as correlated with delayed excitability in the central nervous system, and the behavioral abnormalities derived from TMT intoxication mainly exhibit tremor and convulsion [3]. However, despite the execution of various challenges on the mechanistic investigation of TMT, the exact etiological causes and preventive strategies are not established until now. Although TMT induces neuronal damage in the various brain regions such as the hippocampus, piriform/entorhinal cortex, amygdala, olfactory bulb, and pyramidal cells of the neocortex in animals, only a few studies have explored the mechanism of TMT-induced toxicity or survival strategies of dopaminergic neurons in these regions. Mignini et al. [4] reported that TMT decreased dopamine (DA) receptors and DA transporters in the hippocampus, followed by cognitive dysfunction. TMT has also been reported to decrease DA turnover in the caudate nucleus, a portion of the basal ganglia [5], as well as DA concentration in the nucleus accumbens [6]. Notably, dopaminergic neurons exhibit selective vulnerability via a range of oxidative stress factors [7]; therefore, neurotoxicity studies of TMT using the human dopaminergic neuroblastoma SH-SY5Y cell line are needed to establish suitable preventive strategies for neurodegenerative diseases. 
The neuropathological symptoms evoked by TMT are also associated with neurobehavioral abnormalities, such as seizures, hyperactivity, deficits in learning and memory [8], and aggression. TMT can increase intracellular calcium levels via excitotoxicity in spiral ganglion cells; nifedipine, an L-type calcium channel antagonist, decreases intracellular calcium accumulation [9]. TMT leads to the breakdown of homeostasis of intracellular calcium concentration via internal stores, such as the mitochondria and endoplasmic reticulum (ER) in human neuroblastoma SH-SY5Y cells [10]. Excitatory glutamate receptors such as N-methyl-D-aspartate (NMDA) and αamino-3-hydroxy-5-methyl-4-isoxazolepropionic acid/kainate (AMPA/KA) receptors are distributed in SH-SY5Y cells [11][12][13]. TMT treatment has been shown to result in neuronal necrosis; the anti-inflammatory agent dexamethasone failed to inhibit neuronal damage in the mouse hippocampus [14]. However, autophagy and caspase-dependent apoptosis occur in hippocampal regions as a result of TMT-induced neuronal injury [15]. Autophagic vacuoles were present in neurons after TMT exposure [16], against which the autophagy activators rapamycin and lithium protected neuronal cells. Glutamate receptor-mediated excitotoxicity may be an inducing factor for necrosis through intracellular calcium overload in SH-SY5Y neuroblastoma cells [12]. Gunasekar et al. [17] reported that TMT induced apoptosis or necrosis in a dose-dependent manner in cerebellar granule cells. Concurrently, they suggested that various oxidative stresses including protein kinase C (PKC) activation, overproduction of nitric oxide (NO) and hydrogen peroxide, and overstimulation of metabotropic glutamate receptors may be involved in necrotic death induced by TMT exposure. Previously, we reported that TMT selectively increased protein kinase C delta (PKCδ) expression through various oxidative stresses in the hippocampus [18]. TUNEL-positive apoptosis and various oxidative injuries, such as increases in malodialdehyde (MDA), protein carbonyl, and reactive oxygen species (ROS), were found in the hippocampal area. Apoptosis induced by TMT depends on the balance between NF-kB and MAP kinases in SH-SY5Y cells [19]. Accordingly, we explored the types of cell death after TMT exposure in human dopaminergic neuroblastoma SH-SY5Y cells. In this study, we characterized the dose-toxicity effects of TMT using lactate dehydrogenase (LDH), MTT, and MDA assays; the alteration of protein expression on pro-apoptosis and antiapoptosis factors such as cleaved caspase-3, Bax, Bcl-2, and Bcl-xL; ultrastructural cell changes; changes in pro-inflammatory cytokines such as IL-1β, TNF-α, and NF-kB and antioxidant enzymes such as catalase and GPx-1; potential neuroprotectants such as antioxidants and inhibitors of lipoxygenase (LOX) and/or cyclooxygenase-2 (COX-2), glutamate receptor blockers, free calcium ion modulators, iron chelators, pan-caspase inhibitors, and PI3K/Akt/mTOR autophagy signaling pathway inhibitors; ultrastructural changes in transmission electron microscopy (TEM) morphology induced by TMT in human dopaminergic neuroblastoma SH-SY5Y cells. Cell Culture and TMT Treatment The human neuroblastoma SH-SY5Y cell line was purchased from the Korean Cell Line Bank (KCLB) of Seoul National University (Seoul, Korea) and cultured in DMEM containing 10% FBS, as well as an antibiotic-antimycotic solution containing 100 µg/mL streptomycin, 100 U/mL penicillin, and 0.25 µg/mL Fungizone. 
The cells were incubated in a humidified atmosphere with 5% CO 2 at 37 • C. The cells were harvested using 0.25% trypsin EDTA and were sub-cultured into 100 mm culture dishes. Grown cells were seeded into 100 mm dishes and 24-and 96-well plates. TMT was dissolved in dimethyl sulfoxide (DMSO). Each control was added to the same volume of DMSO as a vehicle-treated group. Cells were exposed to TMT for 12 h for TEM observation and for 24 h for other analyses. Esculetin, meloxicam, celecoxib, and phenidone were pre-incubated for 2 h prior to TMT treatment and maintained in co-exposure with TMT and test chemicals for 24 h. Other test chemicals were co-exposed with TMT for 24 h. Toxicity was evaluated at the highest concentration of each test chemical. Photomicrography of Cultured Cells Representative photomicrographs in vehicle control treated or 1, 20, or 30 µM TMTtreated cells were taken using an inverted microscope (Olympus, Tokyo, Japan). Lactate Dehydrogenase (LDH) Activity Measurement Cells were seeded in 24-well plates at a cell density of 2.5 × 10 5 cells/well. After 1 day, the growth media were changed to DMEM without FBS. To determine the moderate toxic concentration of TMT, cells were treated with 0.3, 1, 3, 5, 10, 20, 30, 50, 100, or 150 µM of TMT for 24 h. Cell injury was quantitatively estimated based on LDH release from damaged cells into the culture medium. The toxicity of each chemical was evaluated using LDH assays. LDH activity was measured at a wavelength of 340 nm using a kinetic program on a VersaMax Eliza Reader (Molecular Devices, San Jose, CA, USA). MTT Assay Cells were seeded in a 96-well culture plate at a density of 1 × 10 4 cells/well and cultured in DMEM with 10% FBS and antimycin at 37 • C in a 5% CO 2 incubator. After 2 days, cells were transferred to DMEM without FBS and exposed to TMT and/or rescuing chemicals in a 200 µL volume and then incubated for 20 h. Next, the MTT solution was added and the cells were incubated at 37 • C for 3 h. After discarding the culture medium, 100 µL of dimethyl sulfoxide (DMSO) was added and the cells were incubated for 30 min to dissolve formazan crystals. The viability of the cells was estimated at 570 nm using a VersaMax microplate reader (Molecular Devices). Western Blot Analysis Treated cells at a density of 3 × 10 6 cells/100 mm dish containing TMT alone or co-exposed to TMT and deferoxamine were harvested, lysed in a lysis buffer, and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The blots were incubated with the designated primary antibodies. Horseradish peroxidase-conjugated species-specific IgGs were used as secondary antibodies. The blots were incubated with an enhanced chemiluminescence substrate (Thermo Fisher Scientific) and exposed to film. Enzyme-Linked Immunoassay (ELISA) To measure the activities of IL-1β, TNF-α, NF-kB SOD, and GPx-1, a human enzymelinked immunoassay (ELISA) assay kit (CUSABIO, Hubei Province, China) was used according to the manufacturer's instructions. Briefly, IL-1β and TNF-α activities were assessed by collecting 100 µL of media from SH-SY5Y cells. To assess the activities of NF-kB, SOD, and GPx-1, SH-SY5Y cells were treated with Pierce TM RIPA Buffer (Thermo Fisher Scientific, Waltham, MA, USA) and Halt TM Protease Inhibitor Cocktail (Thermo Fisher Scientific, Waltham, MA, USA), and sonicated for 3 min ON and 30-s OFF cycles. The cycle was repeated three times. 
After sonication, the cells were centrifuged at 12,000 rpm for 20 min at 4 • C, and 100 µL of supernatant was transferred to a new tube. Next, 100 µL of media (IL-1β, TNF-α) or supernatant (NF-kB, SOD, GPx-1) and standard were added to specific-antibody-coated 96-well microplates and incubated at 37 • C in a 5% CO 2 incubator for 2 h. All supernatant was removed, 100 µL of 1× biotin antibody was added to each well, and the plates were incubated at 37 • C in a 5% CO 2 incubator for 1 h. Each well was then washed three times with 1× wash buffer, 100 µL 1× horseradish peroxidase-avidin was added, and the plates were incubated at 37 • C in a 5% CO 2 incubator for 1 h. Each well was washed five times, 90 µL of TMB (3,3 ,5,5 -tetramethylbenzidine) substrate was added, and the plates were incubated at 37 • C in a 5% CO 2 incubator for 25 min in the dark. Following the addition of 50 µL of stop solution, the activities of IL-1β, TNF-α, NF-kB, SOD, and GPx-1 were estimated at 450 nm using a VersaMax microplate reader (Molecular Devices). Protein Assay To measure the amount of protein from SH-SY5Y cells, a bicinchoninic acid (BCA) Protein Assay Kit TAKARA BIO, Inc., Nojihigashi, Japan) was used according to the manufacturer's instructions. The supernatant was obtained from sonicated and centrifuged cells, and 100 µL of supernatant, standard, and working solution were added to a 96-well microplate and incubated at 60 • C for 1 h. After 1 h, the amount of protein was measured at 562 nm using a VersaMax microplate reader (Molecular Devices). TEM Observations Prior to TEM observations, the cultured cells were washed with 0.1 M phosphatebuffered saline (PBS) and fixed with a mixture of 1% paraformaldehyde and 4% glutaraldehyde overnight at 4 • C. Cells were post-fixed in 1% osmium tetroxide in the same buffer and dehydrated with ethanol and propylene oxide. Subsequently, the samples were embedded in Epon-812 resin and ultra-thin sections were obtained using an ultra-cut microtome (Leica Co., Greenwood Village, CO, USA). Finally, sections were stained with uranyl acetate and lead citrate and subjected to TEM visualization (LEO912AB, Carl Zeiss, Oberkochen, Germany). Statistical Analyses All statistical analyses were conducted using the SAS 9.4 program. Statistical analyses consisted of one-way analysis of variance (ANOVA) tests and Tukey's multiple comparison test, at a significance level of p < 0.05. All experimental data are expressed as means ± standard error of the mean (SEM). All experiments were performed at least three times with similar results. Dose-Toxicity of Trimethyltin (TMT) TMT cytotoxicity assayed using MTT ( Figure 1A) and LDH ( Figure 1B) displayed dosedependent decreases and increases, respectively, at concentrations between 0.3 and 150 µM. Malondialdehyde (MDA) showed significant dose-dependent increases at concentrations of TMT greater than 3 µM ( Figure 1C). Significant suppression of quantitative viability was observed at TMT concentrations of 1-3 µM of TMT. We selected 10 µM of TMT as an experimental control for rescue studies with test chemicals based on moderate LDH release following TMT treatment. Photomicrographs of Representative Cell Morphologies Representative cellular images induced by TMT treatment are shown in Figure 2A-D. There were no morphological changes at 1 µM TMT ( Figure 2B), compared to the control ( Figure 2A). Morphologically, delayed loss or injury to neurites and cell body swelling were distinct at 10 µM TMT ( Figure 2C). 
Most cell bodies were shrunken and lysed at 30 µM TMT (Figure 2D). The median toxic dose (TD50) was quantified based on LDH activity at a concentration of 10 µM. Therefore, we selected 10 µM as the experimental control concentration for all neuroprotective studies. Melatonin, a pineal gland hormone in the brain, had protective effects at concentrations between 100 and 200 µM (Figure 3A). NAC (a glutathione precursor) and α-tocopherol attenuated TMT toxicity at concentrations of 0.3-2 mM and 50-300 µM, respectively (Figure 3B,C). Allopurinol, a hydroxyl radical scavenger, also ameliorated TMT-induced neurotoxicity at 300 µM (Figure 3D). Deferoxamine Reversed Altered Expression of Cleaved Caspase-3, Bax, Bcl-2, and Bcl-xL Induced by TMT Western blot analysis revealed that the expression of cleaved caspase-3, Bcl-2, Bcl-xL, and Bax proteins in SH-SY5Y cells was altered by TMT treatment with or without deferoxamine. The exposure of cells to TMT significantly upregulated the pro-apoptotic proteins cleaved caspase-3 and Bax and downregulated the antiapoptotic proteins Bcl-2 and Bcl-xL (Figure 4A,B). Among these proteins, cleaved caspase-3 showed the greatest alteration. The iron chelator deferoxamine significantly restored the TMT-altered expression of these apoptosis-related proteins toward the control state. NAC Improved TMT-Exposed SH-SY5Y Cells Observed via TEM Ultrastructural observations of SH-SY5Y cells in the control group showed generally healthy organelles with intact mitochondria and cytosolic membranes (Figure 5A,B). TMT induced typical necrotic damage with mitochondrial swelling (Figure 5D) and cytosolic membrane rupture (Figure 5C) at 12 h after exposure. Co-exposure of cells to TMT and NAC showed undamaged mitochondria and the appearance of autophagolysosomes (Figure 5F). Dextrorphan and CNQX Attenuated TMT-Induced Neurotoxicity in SH-SY5Y Cells CNQX, a competitive AMPA/KA receptor antagonist, showed a significant neuroprotective effect against TMT toxicity at concentrations of 30-100 µM (Figure 9A). Dextrorphan, a non-competitive NMDA receptor antagonist, also had a protective effect against TMT-induced toxicity at a concentration of 50 µM (Figure 9B). Discussion In this study, we discovered that ROS production and subsequent neuroinflammation induced by TMT may be correlated with the eicosanoid pathways via both LOX and COX-2 enzymes. The inhibition of LOX by esculetin, of COX-2 by meloxicam, and of both LOX and COX-2 by phenidone may represent an important strategy for maintaining neuronal viability in TMT-induced brain injury. TMT decreased the expression of dopamine receptors D1 and D2, as well as dopamine transporters, inducing impairment of spatial reference memory in rat hippocampal areas [4]. Thus, most dopaminergic neurons in the hippocampus may be target sites vulnerable to exposure to TMT derived from various industrial organotin compounds.
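The TD50 above was quantified from LDH activity. One common way to estimate such a median toxic dose from dose-response data of the kind shown in Figure 1B is a four-parameter logistic fit; the following Python sketch is a minimal illustration in which the concentrations, responses, initial guesses, and the choice of model are assumptions, not data or analysis from this study.

# Minimal sketch (not from the paper): estimating a median toxic dose (TD50)
# from LDH-release dose-response data with a four-parameter logistic model.
# Concentrations and responses below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.3, 1, 3, 5, 10, 20, 30, 50, 100, 150])   # uM TMT (hypothetical)
ldh = np.array([5, 8, 15, 25, 48, 70, 82, 90, 95, 97])       # % LDH release (hypothetical)

def four_pl(x, bottom, top, td50, hill):
    # Four-parameter logistic: response rises from 'bottom' to 'top' around td50
    return bottom + (top - bottom) / (1.0 + (td50 / x) ** hill)

# Initial guesses: baseline, plateau, mid-point concentration, slope
p0 = [ldh.min(), ldh.max(), 10.0, 1.0]
params, _ = curve_fit(four_pl, conc, ldh, p0=p0, maxfev=10000)
bottom, top, td50, hill = params
print(f"Estimated TD50 = {td50:.1f} uM (Hill slope {hill:.2f})")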
In apoptotic injury, the cleaved caspase-3 protein exhibited eightfold overexpression due to TMT exposure, the highest overexpression among all apoptosis regulators including Bax, Bcl-2, and Bcl-xL proteins. Therefore, cleaved caspase-3 apoptosis protein is the most sensitive factor in TMT toxicity among SH-SY5Y cells. These results suggest that caspase may be a more excellent parameter than LDH. In antiapoptosis proteins, decreased expression of Bcl-2 and Bcl-xL, which play important roles in inhibiting mitochondria-dependent apoptosis, was observed following TMT treatment. The iron chelator deferoxamine inhibited these alterations of apoptosis and antiapoptosis proteins evoked by TMT, which suggests that the production of hydroxyl free radicals and MDA via Fenton reactions may be closely involved in TMT-induced apoptosis. In these results, there is necessary further study to clarify the exact profiles of the molecular mechanism of TMT neurotoxicity using quantitative real-time polymerase chain reaction (qPCR) assay. In a previous study, melatonin was found to ameliorate TMT-mediated neuroinflammation in vivo [20]. The antioxidant ascorbate improved TMTinduced seizures by regulating glutathione homeostasis and various oxidative stresses including MDA [21]. Similarly, we confirmed that melatonin, a singlet oxygen radical scavenger, and NAC, a glutathione precursor, as well as the radical scavengers α-tocopherol and allopurinol, inhibited TMT-induced neuronal injury. In contrast to these results, TMT has been reported to trigger necrosis and autophagy based on ultrastructural observations in vivo [22]. This necrosis was chiefly localized to neurons but not glial or endothelial cells. In microstructural TEM studies using SH-SY5Y cells, we demonstrated that neuronal cell injury morphology induced by TMT was characterized by delayed typical necrosis, indicated by mitochondrial swelling and cytoplasmic membrane rupture. These microstructural necrotic findings induced by TMT improved with exogenous glutathione supplements such as NAC. Notably, autophagolysosomes were found in the group co-treated with TMT and NAC. These results suggest that the pattern of TMTinduced neuronal cell death mainly shows apoptosis, with partial necrosis and autophagy components in human dopaminergic neuroblastoma SH-SY5Y cells. TMT-induced swelling of SH-SY5Y cells was observed in phase-contrast images. In other in vivo studies, TMT was found to induce extensive necrosis, such as extensive neuronal edema, lysosome accumulation, and myelinoid membranous bodies at pyramidal neurons in the neonatal rat hippocampus [23]. Some studies have reported increases in COX-2 expression in the CA1 region of the rat hippocampus [24,25]. Houng et al. [26] also reported that indomethacin, a COX inhibitor, alleviated TMT-induced neuronal injury in the dentate gyrus of mice. In the present study, we demonstrated, for the first time, that the LOX inhibitor esculetin, as well as COX-2 inhibitors and an inhibitor of both LOX and COX-2-namely, meloxicam, celecoxib, and phenidone-protected SH-SY5Y cells from TMT-induced neurotoxicity. TMT increases the expression of pro-inflammatory cytokines such as interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α), and nuclear factor-kB (NF-kB) in neurons, glia, and microglia after TMT-induced hippocampal injury [19,[27][28][29][30][31][32]. In these studies, IL-1β, TNF-α, and NF-kB dose-dependently increased after TMT exposure. 
Pre-treatment with meloxicam or esculetin significantly reduced the increase in pro-inflammatory cytokines after TMT exposure. Notably, after TMT exposure, meloxicam and esculetin each potently reversed the elevation of IL-1β and TNF-α to the control state. In epilepsy models, meloxicam diminished IL-1β and TNF-α levels [33]. Esculetin and meloxicam showed similar inhibitory tendencies toward the pro-inflammatory transcription factor, NF-kB. TMT reduced the antioxidant enzyme activities of SOD, CAT, and GPx, including GSH levels, in the rat brain [34,35]. CAT and GPx-1 activity induced by TMT exposure decreased dose-dependently in our culture system. Esculetin and meloxicam showed significant reversal effects. Glutamate receptors, including NMDA and non-NMDA receptors, participate in excitotoxic injury in SH-SY5Y cells [12]. Neuronal injury induced by TMT treatment was ameliorated by antagonists of non-NMDA and NMDA receptors. These results suggest that excitotoxic neuronal injury may partly contribute to TMT-induced cell death. However, we observed that 0.5 mM NMDA, 0.3 mM KA, and 5 mM glutamate did not induce neuronal injury in SH-SY5Y cells ( Figure S1 in Supplementary Materials). These results suggest that the distribution and function of glutamate receptors in SH-SY5Y cells are insufficient to induce excitotoxic injury and are consistent with those of other studies that have reported excitotoxicity at glutamate concentrations of > 20 mM [12]. TMT also stimulated calcium-mediated glutamate release in brain-slice cultures, while nifedipine, an L-type calcium channel blocker, and MK-801, a non-competitive NMDA channel blocker, did not relieve the glutamate efflux [36]. However, we observed that another L-type calcium channel blocker, nimodipine, and an intracellular calcium chelator, BAPTA-AM, significantly ameliorated TMT-induced neurotoxicity. This discrepancy may partly be due to the difference between the high concentrations (0.1-1 mM) of TMT used by Dawson et al. [36] and the 10 µM concentration used in the present study. Recently, Rakshit et al. [37] reported that deferoxamine, an iron-chelator, protects neuronal cells from 6-hydroxydopamine-induced apoptosis and autophagy in SH-SY5Y cells. In the present study, deferoxamine inhibited TMT-induced neurotoxicity and apoptosis, whereas the pan-caspase inhibitor z-VAD-fmk suppressed TMT-induced neuronal injury. Signaling of the PI3K/Akt/mTOR pathway plays an important role in neuronal survival or neurodegeneration via autophagy or apoptosis processes [38,39]. In human neuroblastoma cells, relatively high activation of the PI3K/Akt/mTOR signaling pathway has been reported [40]. In this study, we showed that rapamycin, an mTOR inhibitor and autophagy activator, contributes to neuronal survival against TMT-induced neurotoxicity. By contrast, we observed that autophagic LC3 intensity did not change within 24 h after 10 µM TMT treatment, which differed from the pharmacological results (data not shown). Autophagy inhibitors severely exacerbated TMT-induced neurotoxicity in neuronal cell cultures, in contrast to our results [16]. In another study, a potent autophagy inducer, rapamycin, protected PC12 cells through mTOR inhibition [41], indicating that autophagy contributes to the increased viability of cells during neuronal damage. These experiments have been executed by only one cell line. Accordingly, further comparative studies through other cell lines, such as animal or human neuronal cell lines will be necessary in the near future. 
Conclusively, these results demonstrate that TMT-induced neurotoxicity is involved in LOX-and COX-2-mediated apoptosis, and may participate in necrosis or autophagy via calcium-mediated oxidative stress and pro-inflammatory cytokines. Conclusions In this study, TMT showed caspase-dependent apoptosis and calcium-dependent necrosis. Some antioxidants, LOX and COX-2 inhibitors, chelators of iron and calcium, calcium channel blockers, NMDA and AMPA/KA receptor blockers, and PI3K/Akt/mTOR pathway modulators could be potential candidate compounds for therapy of TMT intoxication. In particular, inhibitors of LOX and COX-2 exhibited neuroprotective effects via regulating oxidative stress and pro-inflammatory cytokines in human dopaminergic neuroblastoma SH-SY5Y cells.
2021-09-25T16:03:36.019Z
2021-08-24T00:00:00.000
{ "year": 2021, "sha1": "adb85b7421ed995afa2fd383a95ce255d7cf08f8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3425/11/9/1116/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83ef77f532ddf9e5174367e2fd7603cceb775258", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
15185954
pes2o/s2orc
v3-fos-license
Evidence and Value: Impact on DEcisionMaking – the EVIDEM framework and potential applications Background Healthcare decisionmaking is a complex process relying on disparate types of evidence and value judgments. Our objectives for this study were to develop a practical framework to facilitate decisionmaking in terms of supporting the deliberative process, providing access to evidence, and enhancing the communication of decisions. Methods Extensive analyses of the literature and of documented decisionmaking processes around the globe were performed to explore what steps are currently used to make decisions with respect to context (from evidence generation to communication of decision) and thought process (conceptual components of decisions). Needs and methodologies available to support decisionmaking were identified to lay the groundwork for the EVIDEM framework. Results A framework was developed consisting of seven modules that can evolve over the life cycle of a healthcare intervention. Components of decision that could be quantified, i.e., intrinsic value of a healthcare intervention and quality of evidence available, were organized into matrices. A multicriteria decision analysis (MCDA) Value Matrix (VM) was developed to include the 15 quantifiable components that are currently considered in decisionmaking. A methodology to synthesize the evidence needed for each component of the VM was developed including electronic access to full text source documents. A Quality Matrix was designed to quantify three criteria of quality for the 12 types of evidence usually required by decisionmakers. An integrated system was developed to optimize data analysis, synthesis and validation by experts, compatible with a collaborative structure. Conclusion The EVIDEM framework promotes transparent and efficient healthcare decisionmaking through systematic assessment and dissemination of the evidence and values on which decisions are based. It provides a collaborative framework that could connect all stakeholders and serve the healthcare community at local, national and international levels by allowing sharing of data, resources and values. Validation and further development is needed to explore the full potential of this approach. Background The objective of any healthcare intervention is to improve health; preventive measures, non-pharmacological and pharmacological treatments, and medical procedures are among the numerous available options. Decisionmaking in healthcare is a complex process taking place along a continuum that moves from evidence generation to deliberation on each particular intervention and communication of the resultant decision. Evidence-based medicine and evidence-informed health policymaking rely on evidence generated by developers of healthcare interventions, at least in the initial stages of the life cycle of an intervention. Evidence quantity, quality, usability and accessibility have been identified as hindrances to informed policymaking, [1] highlighting the disconnect between those who need evidence to make a decision and those who generate this evidence. Beyond evidence, decisionmaking requires value judgment. [2,3] Tunis argues that controversy around decisions may stem from the absence of shared views about the role of evidence versus judgment in evidence-based healthcare policies. 
[3] Frequent controversy surrounding drug coverage variation across jurisdictions with similar levels of economic development, values and political systems [4][5][6][7] highlights a need for rational and transparent approaches to decisionmaking. Surveys have recognized a need for fair and explicit healthcare decisionmaking processes that are more defensible. [8,9] Such processes should fulfill two main functions. Firstly, they should support the complex deliberative process that requires simultaneous consideration of multiple factors such as clinical benefit, [10] level of innovativeness, [6,10] quality of clinical evidence, [4,10] quality of dossier [i.e., organization, accuracy of information presented], [10] cost-effectiveness, [10,11] price and budget impact, [6,10] value judgments, [10] and colloquial evidence [anything that establishes a fact or gives reason for believing something]. [12] Without an explicit process to structure such complex deliberation, decisionmakers are likely to resort to intuitive and subjective approaches, potentially missing important information. [13] Secondly, such processes should help legitimize the decision by ensuring that conditions for 'accountability for reasonableness' (A4R) are met by structuring the deliberative process to make rationale and principles on which decisions are based explicit and ultimately publicly available. Within the A4R framework, availability to public scrutiny is a necessary prerequisite to legitimizing decisions. [14] As suggested by Dhalla and Laupacis, transparency in all areas of healthcare policymaking, including availability of data and decisionmaking rationales, is likely to raise public confidence in the process and may ultimately lead to better decisions. [15] Several approaches have been published for making healthcare coverage decisionmaking more consistent, rational and transparent. [16][17][18][19] For example, the Cancer Care Ontario Policy Advisory Committee developed a tool that supports the deliberative process by presenting structured synthesized information on various aspects of the drugs considered. [17] A number of UK Health Authorities have developed explicit multicriteria models to facilitate prioritization decisions. [18] These attempts highlight growing awareness in those at the forefront of decisionmaking, and others in the field, of the need for a more holistic approach that goes beyond reliance on costeffectiveness criteria. [20][21][22][23] In this context, we hypothesized that healthcare decisionmaking could be facilitated by structuring access, consideration and communication of the evidence and the value judgments on which it is based. The objective of this study was to develop a practical framework to facilitate decisionmaking by supporting the deliberative process, permitting access to relevant evidence, and enhancing effective communication of decisions. Methods Extensive analyses of the literature and of current decisionmaking processes were performed to identify steps leading to decisions, as well as the components of the thought processes underlying decisions. Needs and methodologies available to support such processes were identified to lay the groundwork on which to build the EVIDEM framework. Review of decisionmaking processes in jurisdictions worldwide [10, was performed to explore the continuum from evidence generation to decision, to communication of decision. 
Processes for drug coverage decisions were used as a model since they are often the most structured and explicit in healthcare decisionmaking; however, all analyses were performed from the perspective of facilitating decisions for any type of healthcare intervention. Based on this review, the current steps flow as follows: (1) manufacturers/innovators generate data with experts and submit evidence to the decisionmaking body following specific requirements; (2) assessors & reviewers collect and appraise evidence (quality assessment), prepare a report for a decision committee (synthesized evidence) and may incorporate stakeholder opinion; (3) a committee makes a decision based on that report and stakeholder opinion; (4) the decision is made public with rationale for decision; an appeal process may be in place. Drawing from this analysis, the following needs were identified: ‫ؠ‬ systematic and explicit consideration of all key elements of decision during the deliberative process; [17,18] ‫ؠ‬ each committee member's perspective needs to be captured and values shared in the committee; [16,45] ‫ؠ‬ relevant evidence in a digested, unbiased and systematic format; [16,17] ‫ؠ‬ data on quality of evidence in a structured system for all types of evidence considered; [18] and ‫ؠ‬ transparent, understandable, and acceptable communication of decision. [15,41] These needs were all considered in developing the framework. Analysis of the literature revealed that decisionmaking can broadly be subdivided into scientific judgment and value judgment. [2,3] Scientific judgment relies on globally accepted standards defining the quality of evidence. Such technical judgment can be applied using a system in which the elements of quality are explicitly identified and quantified (scored). Scientific technical judgments are not highly dependent of the evaluator (compared to value judgments) and can be standardized. A number of quality standards, country specific guidelines, checklists and instruments are available to assess the quality of various types of evidence (e.g., CONSORT, [46] CHEC, [47] STROBE [48], QUOROM [49], MOOSE [50],. GRADE [51,52], QHES [53,54] and others [24][25][26][27][29][30][31][32][33][34][35][36][37][38][39][40][55][56][57][58][59][60][61][62][63][64][65][66][67]). While these provide a rigorous scientific basis for quality assessment of evidence, additional elements were identified that could integrate scientific judgment into a practical approach to healthcare decisionmaking. These include: ‫ؠ‬ streamlining quality assessment for all types of evidence; ‫ؠ‬ distinguishing between quality of reporting, and relevance and validity of evidence; ‫ؠ‬ providing the rationale behind scoring for full transparency; and ‫ؠ‬ using systematic deliberative processes to collaboratively evaluate the quality of evidence. Analysis of the literature on quantifiable tools for value judgments considered in decisionmaking pointed to multicriteria decision analysis (MCDA). MCDA structures the deliberative process by breaking down a problem into the components expected to impact the value of an option, and by quantifying them using a scale with defined anchors. [13,68] MCDA explores value judgment from two standpoints: the value system of the evaluator with regard to the importance of each value components (weights) and the actual performance of an intervention (scores). A value estimate is obtained by combining weights and scores using simple or complex mathematical models. 
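To illustrate the simplest form of such a model, the following Python sketch combines one evaluator's weights (1-5) with intervention scores (0-3) into a weighted-sum value estimate expressed as a percentage of the maximum; the component names and numbers are invented for illustration and are not values from the framework described here.

# Minimal sketch (illustrative only): a linear additive MCDA value estimate,
# combining an evaluator's weights (1-5) with intervention scores (0-3).
components = {
    # component: (weight W_x, score S_x) - hypothetical values
    "Disease severity":            (5, 3),
    "Size of affected population": (3, 1),
    "Improvement of efficacy":     (5, 2),
    "Improvement of safety":       (4, 2),
    "Budget impact":               (3, 1),
}

value = sum(w * s for w, s in components.values())
max_value = sum(w * 3 for w, _ in components.values())   # 3 = top of the scoring scale
print(f"Value estimate V = {value} ({100 * value / max_value:.0f}% of maximum)")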
MCDA is widely used to support decisions in environmental engineering, agriculture, and marketing [13] and is a promising approach to healthcare decisionmaking [69][70][71][72][73][74]. Review of decisionmaking processes revealed that not all value components usually considered in decisionmaking are readily quantifiable. [10,[61][62][63][64][65][66][67] A commonly shared direction of scoring is needed to define low and high ends of a scale to make quantification meaningful. In general, components defining the intrinsic value of an intervention are quantifiable from a universal standpoint, while extrinsic or system-related components are not readily quantifiable or quantification scales depend on specific local considerations. For example, when considering the intrinsic value component "improvement of efficacy", it is generally agreed that, all else being equal, an intervention that brings major efficacy improvement has a higher value than one with minor improvement. However, components such as historical context, stakeholder pressure, population priorities and access, and ability of a healthcare system to make appropriate use of intervention, factors often critical in healthcare decisions, [41,75,76] do not have consistent impact on how an intervention is valued. For these components, what constitutes increase or decrease in value requires definition during deliberation at the jurisdictional level and on a case-by-case basis. Consideration of extrinsic components is easier once intrinsic value components have been defined. To facilitate value judgments related to a healthcare intervention, following needs were identified: ‫ؠ‬ disentangle intrinsic and extrinsic value components; [75,76] ‫ؠ‬ develop a simple and rigorous system that applies MCDA from a pragmatic standpoint based on actual thought processes; ‫ؠ‬ provide practical access to the evidence on which value judgments are based; and ‫ؠ‬ provide a practical method for decisionmakers to provide feedback to data producers and all other stakeholders. Thus identified, these needs were used to develop the EVI-DEM framework, processes and tools. Framework A practical framework was developed structuring and making more shareable what is currently being done around the world. It was based on three main principles: ‫ؠ‬ Support deliberative process by disentangling and quantifying when possible scientific judgment (quality of evidence) and value judgment (intrinsic and extrinsic value of intervention); ‫ؠ‬ Facilitate access to relevant evidence over the life cycle of a healthcare intervention using a collaborative structure; and ‫ؠ‬ Enhance communication of decisions using transparent tools. The framework structures the context of decisions for a healthcare intervention in a given setting into seven modules ( Figure 1). The centerpiece of the framework is the MCDA Value Matrix (module 5) which is both a quantifi-cation tool for the intrinsic value of an intervention and a portal to evidence (synthesized [module 3] with electronic links to full text source information [module 2]) and to data on quality of evidence (module 4 -Quality Matrix). Extrinsic value is considered in module 6 and communication of the decision is module 7. Applying the full framework from the early stage of development of a healthcare intervention requires a collaborative approach (module 1) in which all stakeholders are involved, i.e., decisionmakers, experts, data assessors and data producers. 
The result of the process is an EVIDEM record, modules of which can be shared in a web-based collaborative database for transparency and application by other decisionmaking groups or individuals. The modular aspect facilitates access to evidence and decisions, updates, and database development. Value of intervention -Value Matrix A MCDA Value Matrix (VM) was developed to include the value components usually considered in policy decisionmaking. MCDA was selected as a methodological model for the VM for its versatility, transparency and ease of Conceptual framework for healthcare decisionmaking in a given setting Figure 1 Conceptual framework for healthcare decisionmaking in a given setting. -EVIDEM Team Healthcare intervention in a given setting application by a wide range of stakeholders. Value components that can not be readily incorporated into a matrix were not included and were listed as extrinsic components for consideration at the jurisdictional level or on a caseby-case basis (e.g., equity, historical context, stakeholder pressure, population priorities and access, ability of healthcare system to make appropriate use of intervention) (module 6). -Quality of evidence The VM (module 5) was designed to address the key question: What is the value of a healthcare intervention with respect to its intrinsic characteristics? In other words, what does it bring to the health of society (for the jurisdiction being considered)? Such a question involves probing the value system of decisionmakers (weights) and assessing the healthcare intervention based on evidence available using defined scales (scores). The value estimate of an intervention is the combination of weights and scores. Components of decisionmaking identified in the analysis of current decisionmaking processes were specifically defined and structured to fulfill MCDA methodological requirements. [68] These are: • Completeness: all currently-understood components defining the intrinsic value of an intervention are included; • Non-redundancy: all components are necessary, important and there are no duplicates; • Mutual independence: scoring of each component is independent from scoring of all other components (i.e., scores for each component can be assigned without considering scores for other components); and • Operationality: each component is defined unambiguously; data on which to base the evaluation is available; the numerical scale follows a shared sense of direction. Fifteen components were thus defined and grouped into four clusters; scoring directions were defined from a societal perspective ( Figure 2) [68]. The first cluster assesses the impact of the quality of evidence on the value of an intervention (e.g., how the relevance and validity of evidence impacts the value of an intervention). This is not to be confused with the assessment of the quality of evidence, which is performed separately using the Quality Matrix (QM, see below). One key principle of EVIDEM is that reasoning is facilitated and made more objective by disentangling these distinct concepts (quality of evidence based on scientific standards versus the value assigned to the quality of evidence). The first cluster was broken down into three components corresponding to the three criteria of the QM. The disease impact cluster was broken down into two components: disease severity (D1) and size of affected population (D2). 
It was assumed that an intervention for a very severe disease has more value than an intervention for a mild disease (D1) and that an intervention that benefits a large number of patients has more value than one that benefits a small number of patients (D2). The intervention cluster was broken down into seven components. The first (I1) explores the impact of clinical guidelines. Clinical guidelines serve multiple functions for numerous groups and have become ubiquitous. [77] They can have considerable impact on practice and perceived value of an intervention. [78] It was assumed that guidelines represent current consensus and that strong (e.g., Class I) [79] recommendations for the intervention under consideration or for a similar intervention (e.g., a product structurally related [80]) would result in a high value score. The second component assesses the impact of limitations of current interventions (I2) on the value of a new intervention. The concept of improvement of medical service, used by the Commission de la Transparence in France, [81] was used to define three key components of the value of an intervention: efficacy and effectiveness (I3); safety and tolerability (I4); and patient reported outcomes, convenience and adherence (I5). Assessing these components required clearly defining which existing medical services and medical practices the new treatment is meant to replace or complement. Data for these existing services provides the evaluator with an evidence-based frame of reference for components I3 to I5. Components I6 "Public health interest" and I7 "Type of medical service" capture the nature of the health benefit of the intervention respectively at the population level and at the individual level. The economics cluster was broken down into three components to explore the impact of covering a new intervention on health plan budgets (E1), on other spending (E3), and its cost-effectiveness (E2). To ensure non-redundancy and to be in line with standard budget impact modeling practices, components E1 and E3 respectively were defined to cover financial impact of intervention only (limited to the cost of intervention and potential savings in replacement of existing interventions) and all other economic impacts (such as those resulting from changes in hospitalization, adverse events, disability, equipment maintenance cost). The latter is usually explored in economic evaluations, which are made more useful to decisionmakers by reporting disaggregated cost-consequence information. [82] The economic evaluation component (E2) assesses the value of an intervention based on costeffectiveness ratios obtained from the analytic perspectives (e.g., healthcare system, societal). Although this Value Matrix -definitions of components and scoring scales The VM was then designed to be self-contained, with an emphasis on practicality (Figure 3). 
It contains: • A weighting scale (1 to 5) to capture the value system of each evaluator independent of the healthcare intervention under scrutiny; the standard deviation of weights for each VM component (Wx) among a group of evaluators can be used to support discussion among evaluators; • Synthesized evidence for the healthcare intervention under scrutiny, prepared using a standardized methodology to minimize bias (see below); • A scoring scale (0 to 3) with defined anchors and scoring guidelines; it includes four scoring options to stimulate thought processes and avoid loss of information with a middle score, and zero to allow for exclusion of a component that does not bring any value (e.g., safety less than current practice); the standard deviation of scores for each VM component (Sx) can further stimulate the deliberative process among evaluators; • A comments section for decisionmakers to provide feedback to the producers of evidence; it includes a prompt to indicate whether a low score is due to data limitations, providing a way to capture and communicate data needs; • A simple MCDA linear model to capture a value estimate (V) of the intervention for each evaluator: V = Σx (Wx × Sx), the weighted sum across the 15 VM components, where Wx is the weight and Sx is the score for each VM component x. Figure 3. Value Matrix - assessment of the intrinsic value of a healthcare intervention. Access to evidence - synthesized and full text Access to high-level synthesized evidence is necessary to focus the thought process on key elements of decision but should be complemented by easy access to full text sources for those who want more details. To ensure minimally biased evidence is available to stakeholders, a methodology was developed to synthesize this evidence for each component of the VM (module 3). The principal objective was to provide the information necessary and sufficient to score each component, with access as needed to full text sources. A template with instructions was developed for each component of the VM indicating where and how to find evidence (search algorithms, biomedical and economic databases, registries, manufacturer, health technology assessment reports, Cochrane reviews, etc.), what to report and how (i.e., standard format). For full traceability, electronic links to full text sources were integrated into module 2. For example, to assess "disease severity", data to be identified and reported included disease acuteness, morbidity (disability, quality of life) and mortality, as well as disease stages or subtypes that differentiate therapies and target populations. Besides extracting study results, key elements used to define their validity are also reported, such as the number of patients included in pivotal trials, follow-up duration for safety data, key model features for economic evaluations, and sources used for budget impact projections. For the quality of evidence cluster, quality scores for each type of evidence are provided by criterion of quality assessment (quality scores are obtained via an explicit process described below), providing decisionmakers with structured access to the results and rationale of quality assessment for each type of evidence. Quality of evidence - Quality Matrix The QM was designed to quantify the quality of evidence generated for a healthcare intervention; it is grounded in current evidentiary requirements of healthcare decisionmaking bodies and derives from numerous existing tools and instruments to assess quality of evidence.
The QM streamlines quality assessment of all types of evidence, disentangles criteria of quality, and provides access to a rationale for each score attributed via a deliberative process. The QM was designed with an emphasis on practicality and includes (Figure 6): • Three criteria of quality assessment (columns); • 12 types of evidence currently required (rows); • For each cell of the QM: questions or instruments based on global standards, a prompt for the evaluator to provide a rationale for the score, and scoring scales. Five elements defining quality were identified and clustered into three criteria (Figure 6): • Q1: Adherence to the requirements established by the decisionmaking body to which evidence is submitted; • Q2: Completeness of reporting, as prescribed by reporting guidelines, and consistency with cited sources and throughout the document; this criterion can be applied to individual studies or to a high-level document (e.g., dossier) that includes several studies; • Q3: Relevance of evidence to the decisionmaking body and validity of evidence, with respect to scientific standards and methodological guidelines in applicable fields of research. Evidence concerning the disease and its management was broken down into three types: disease description (#1), current treatment patterns including practices and guidelines (#2), and impact of the new intervention on therapy (#3). Epidemiology data included standard metrics and risk factors (#4). Information on the new intervention was broken down into four types: characteristics of intervention (#5), efficacy and safety data obtained from clinical trials (#6), patient reported outcomes (PRO) data (#7), and effectiveness data from trials and registries (#8). For the last type of evidence, identification of effectiveness used the criteria defined by the US Agency for Healthcare Research and Quality (AHRQ). [83] Data on current interventions that the new intervention is projected to replace or complement were captured in a separate component (Comparator intervention data, #9), including efficacy, safety, PRO and effectiveness data, and characteristics. Economic data were broken down into three types of evidence: price and price justification (#10); economic evaluation, including the impact of the new intervention on healthcare utilization and costs, and on society (#11); and the impact of reimbursing the new intervention on the health plan budget (#12). Figure 5. Value Matrix comparative scale - interpretation of the VM scale (% maximum value estimate). For each type of evidence contained in the QM, instructions, questions, and, for the most complex types of evidence, specific instruments were developed. They were derived from current tools (e.g., GRADE, CHEC, etc.) to streamline scoring processes across types of evidence, distinguishing criteria of quality (e.g., reporting versus validity) while keeping the whole system practical. For example, for the type of evidence "Economic evaluation", two 11-dimension instruments were developed: 1) an instrument to assess the completeness and consistency of reporting of the study; and 2) an instrument to assess the relevance and validity of study design and results (Figure 7). A scoring scale with defined anchors was developed, and full transparency requires that each score be justified by the investigator.
Rationale and scores are reviewed by another investigator and validated by experts through deliberative process until consensus is reached. Comments, rationale and scores are all integrated into the QM for full traceability. Aggregated quality scores are esti-mated as a percentage of maximum score by criterion, by type of evidence or for the whole QM. Discussion The EVIDEM framework was tailored to reflect the thought process underlying decisionmaking and to fit the continuum from data generation to decision to communication of decision. It supports decisionmaking and deliberative processes by structuring, segregating and providing transparent access to evidence (incorporating quality assessment), while facilitating communication about value judgments and data needs among stakeholders. The instruments developed to operationalize the EVIDEM framework are rooted in existing processes and instruments; however, they integrate the essential components of decisionmaking into a comprehensive and cohesive structure. The VM draws on the flexibility and comprehensiveness of MCDA while disentangling extrinsic from intrinsic value components, and providing structured access to the evidence on which those value judgments are Quality Matrix -assessment of quality of evidence for a healthcare intervention Figure 6 Quality Matrix -assessment of quality of evidence for a healthcare intervention. Figure 7 Scoring see Figure 7 12 Budget impact Budget impact analyses if intervention reimbursed (impact on health plan/drug plan) *Requirements established by the decisionmaking body g Definition of effectiveness trial based on the AHRQ criteria based. Unlike some earlier applications of MCDA, [69,74] the VM does not require complicated mathematical models or computation, but rather serves as a communication tool among and between stakeholders. Specific instruments developed for the QM draw on existing instruments in each respective field of research. These often combine in one instrument dimensions pertaining to quality of reporting and to relevance and validity of a study (e.g., for economic evaluations [47,54,56]). QM instruments disentangle quality of reporting (Q2 completeness and consistency) from relevance and validity (Q3), requiring the reviewer to focus specifically on each aspect of quality, bearing in mind that relevance and validity require good reporting practices to be fully evaluated. Because results of quality assessment are highly dependent on the assessor, rather then on the instrument, it was suggested by Gerkens et al, [84] that assessors should reach a consensus on scores, which is required when applying the QM instruments. The EVIDEM framework needs to be tested in context, validated and further developed through iterative collaborative processes. In a proof of concept approach, the system was pilot-tested using historical cases in the Canadian context with the objective of assessing feasibility, practicality and value to end users. The Canadian Value Panel convened for the pilot study indicated that the VM with embedded synthesized data would be highly useful as a support for healthcare decisionmaking, to guide discussion and share values among decisionmakers, at both the policy and clinical levels, by systematically assessing strengths and weaknesses of healthcare interventions in a comprehensive and structured fashion. Quality Matrix -assessing quality of economic evaluations Figure 7 Quality Matrix -assessing quality of economic evaluations. 
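As an illustration of the score aggregation described above (percentage of maximum by criterion, by type of evidence, or for the whole QM), the following Python sketch uses hypothetical scores and assumes a 0-3 scoring scale; the evidence types, criteria labels, and values are placeholders, not data from the framework.

# Minimal sketch (illustrative only): aggregating Quality Matrix scores as a
# percentage of the maximum, by criterion, by type of evidence, and overall.
MAX_SCORE = 3  # assumed top of the scoring scale for each QM cell

qm = {  # type of evidence -> {criterion: score}  (hypothetical values)
    "Clinical trial data": {"Q1": 3, "Q2": 2, "Q3": 2},
    "Economic evaluation": {"Q1": 2, "Q2": 3, "Q3": 1},
    "Budget impact":       {"Q1": 3, "Q2": 2, "Q3": 2},
}

def pct(scores):
    # Aggregated score expressed as a percentage of the maximum attainable
    return 100.0 * sum(scores) / (MAX_SCORE * len(scores))

for criterion in ("Q1", "Q2", "Q3"):
    column = [row[criterion] for row in qm.values()]
    print(f"{criterion}: {pct(column):.0f}% of maximum")

for evidence, row in qm.items():
    print(f"{evidence}: {pct(list(row.values())):.0f}% of maximum")

overall = [score for row in qm.values() for score in row.values()]
print(f"Whole QM: {pct(overall):.0f}% of maximum")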
Practical use of this approach faces significant challenges. Among these are uptake by decisionmaking bodies; this will only happen if the new process is perceived as facilitating and simplifying their task, rather than adding complexity. The EVIDEM framework was designed to create a simple and practicable series of freely accessible tools that could be easily integrated into existing processes, while providing a common ground. In addition, integration of EVIDEM records into a web-based collaborative database is intended to provide a platform to all stakeholders for easy access to high level data on evidence available for healthcare interventions, as well as to value estimates. Another major challenge will be the bringing together of data producers and those who make decisions. There are issues of trust and bias that need to be surmounted to provide the collaborative environment that this process would need. This would permit the 360 degree transparency as envisioned by Dhalla & Laupacis. [15] The framework was designed to be of use to a variety of healthcare decisionmakers. Several applications are envisioned ( Figure 8). Retrospectively, the approach can be used to explore the context of past decisions, assess the quality of evidence available for a healthcare intervention at a point in time, and validate the process in a given jurisdiction (Figure 8 -Application axis). Prospectively, it can be used to evaluate new interventions and to maintain a transparent record of evidence and decisions over its entire the life cycle. Several studies assessing healthcare decisionmaking processes in various regions of the world have highlighted the importance of transparency and fairness. [8,85,86] A number of initiatives have been implemented globally to increase transparency in access to both evidence and rationale for policy decisions. In Canada, the Common Drug Review recently implemented a transparency initiative. [87] while in the UK, the National Institute for Excellence is now providing full access to manufacturer dossiers on their web site. [88] However, current processes for coverage decisions are generally organized in such a way that decision rationales cannot easily be shared among members of the decision committee, let alone members of the public. Using an approach such as EVIDEM to make and communicate decisions could represent a significant step towards a more accountable and transparent process. Better understanding of the rationale behind decisions by all stakeholders could in turn enhance the legitimacy and acceptability of decisions. [5,14,15] Similar reasoning could apply to decisionmaking at the individual level; patients and their healthcare team could use such an explicit framework to assist consideration of all the components of complex decisions. Another aspect of healthcare decisionmaking, which requires further development, is extrinsic or systemrelated value judgments (Figure 8 -application axis). These may be critical in decisions and require focused discussion and elicitation of preferences or consensus building at the jurisdictional level. One study applying an MCDA approach to healthcare priority-setting in Ghana identified extrinsic factors such as 'age of target group' and 'poverty reduction' as critical factors through discussion with stakeholders and local policymakers. 
[72] Research in this area is essential to identify and structure systemrelated factors in decisions, which will be easier if predicated on transparent assessment of the intrinsic value of interventions. Several features are integrated in the EVIDEM framework to facilitate communication between those who generate data and those who need data to make decisions. Through iterative processes, the framework can help define evidentiary needs of decisionmakers and be used as a planning tool for researchers and developers of new interventions, to ensure that the data that is generated addresses the needs explicitly defined (Figure 8 -Collaborative axis). Knowledge transfer and exchange (KTE), an interactive process between research users and research producers, aims to increase the likelihood that evidence will be used in practice and policy decisions. [89] A recent review suggests that this field of research, still in its infancy, has yet to identify KTE strategies that best support health policy decisionmaking. [89] Finally, the EVIDEM framework can also be used for educational purposes to explore the thought processes underlying healthcare decisionmaking and the concepts that define quality of evidence. Conclusion Healthcare decisions have to be made in the context of a plethora of information, without easy access to all the necessary information and without an explicit decisionmaking framework. This often results in poor transparency and controversial decisions. The EVIDEM framework provides a comprehensive transparent structure grounded in global standards and local needs. The proposed framework is a step to organizing evidence and streamlining processes on a collaborative approach. This framework should not be viewed as a formula but rather as an aid to ensuring that all important data is considered and that rationales and values underlying a decision may be shared. It supports deliberative processes [12,90] allowing decisionmakers to combine all types of evidence and values, and increases the likelihood of making solid decisions. Validation and further development through collaborative and synergistic efforts is necessary to explore the value of this framework in practice. This type of systematized and shareable approach for data access and Potential applications of the EVIDEM framework Figure 8 Potential applications of the EVIDEM framework.
2014-10-01T00:00:00.000Z
2008-12-22T00:00:00.000
{ "year": 2008, "sha1": "c98b2094403c6f95e9a3e2457687afa2ac362c63", "oa_license": "CCBY", "oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/1472-6963-8-270", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71ede2db9a45beaf7ecba1cad180b401b3be5785", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257653622
pes2o/s2orc
v3-fos-license
An Ultrasonic Target Detection System Based on Piezoelectric Micromachined Ultrasonic Transducers In this paper, an ultrasonic target detection system based on Piezoelectric Micromachined Ultrasonic Transducers (PMUTs) is proposed, which consists of the PMUTs based ultrasonic sensor and the sensor system. Two pieces of 3 × 3 PMUTs arrays with the resonant frequency of 115 kHz are used as transmitter and receiver of the PMUTs-based ultrasonic sensor. Then, the sensor system can calculate the target’s position through the signal received by the above receiver. The static and dynamic performance of the proposed prototype system are characterized on black, white, and transparent targets. The experiment results demonstrated that the proposed system can detect targets of different colors, transparencies, and motion states. In the static experiments, the static location errors of the proposed system in the range of 200 mm to 320 mm are 0.51 mm, 0.50 mm and 0.53 mm, whereas the errors of a commercial laser sensor are 2.89 mm, 0.62 mm, and N\A. In the dynamic experiments, the experimental materials are the targets with thicknesses of 1 mm, 1.5 mm, 2 mm and 2.5 mm, respectively. The proposed system can detect the above targets with a maximum detection error of 4.00%. Meanwhile, the minimum resolution of the proposed system is about 0.5 mm. Finally, in the comprehensive experiments, the proposed system successfully guides a robotic manipulator to realize the detecting, grasping, and moving of a transparent target with 1 mm. This ultrasonic target detection system has demonstrated a cost-effective method to detect targets, especially transparent targets, which can be widely used in the detection and transfer of glass substrates in automated production lines. Introduction Target detection is widely used in the fields of autonomous driving [1,2], target tracking [3,4], obstacle avoidance [5,6], and so on. Among various target detection methods, the method of ultrasonic sensors based on Piezoelectric Micromachined Ultrasonic Transducers (PMUTs) has attracted much attention due to its small size, low cost, low power consumption, compatible with Complementary Metal Oxide Semiconductor (CMOS) process and simple system structure, which is an economical method to install on the robotic manipulator for detecting glass substrates in the automated production line. One of the presence detection solutions for glass substrates is to use a set of throughbeam photoelectric sensors. This method has the disadvantages of high cost, high installation accuracy requirements, and large interference from environment light. In order to reduce the interference of the above factors on the results, other sensors with better performance could be used as an alternative. Currently, the sensors used for target detection mainly include electromagnetic sensors, optical sensors, and ultrasonic sensors. Electromagnetic sensors are mostly used for dynamic detection because they are not affected by weather conditions and can obtain the relative position information of the detected targets through the Doppler property [7]. That said, detection results are prone to interference from metal objects such as conveyor belts and robotic manipulators. This means that the electromagnetic sensors are not suitable for industrial production lines [8]. Optical sensors include visual sensors [9][10][11], laser sensors [12][13][14], and infrared sensors [15,16]. Chen et al. 
proposed a stereo vision algorithm for the object pose estimation using point cloud data from multiple stereo vision systems [10]. Based on the visual sensor, this algorithm can accurately calculate the pose of targets, which can guide the subsequent movement and grasping of the robot. Han et al. proposed a control system of robotic manipulator grasping based on binocular visual sensors [17]. The Canny edge detection algorithm and inverse kinematics were used to solve the motion of the robotic manipulator, which improved the success rate of the robotic manipulator for grasping different targets. Visual sensors are too sensitive to environmental lighting. In addition, complex detection algorithms affect the real-time performance of these systems. In order to avoid the effect of environmental lighting, visual sensors can be replaced by laser sensors. Compared with visual sensors, laser sensors have longer detection distance and are more resistant to interference from to environmental lighting. Andry et al. used a 2D laser sensor fixed on a robotic manipulator to detect different targets, which proved the reliability of laser sensors under low contrast condition [18]. Although the laser sensors have excellent performance, higher accuracy comes at a significant expense. In addition, laser sensors cannot detect transparent targets. On the other hand, unlike environmental lighting, Kutila et al. pointed out that the performance of LiDAR (A device that uses laser and works like radar) decreased by 25% in harsh environments [12]. Compared to other optical sensors, infrared sensors play a more auxiliary role in the detection process. Lee et al. pointed out that infrared sensors can be used together with laser sensors in target tracking because they can provide effective nighttime visibility [15]. Compared with the above sensors, ultrasonic sensors are cheap and robust in a variety of environmental conditions. In autonomous driving, Jiménez et al. used ultrasonic sensors to replace laser sensors, which effectively reduced the cost [1]. Li et al. used a linear ultrasonic sensor array with an improved Kalman filter algorithm to achieve dynamic tracking of targets with different materials and shapes [3]. The above ultrasonic sensors are usually based on the bulk piezoelectric transducers. Although they have high output power, they are complex to manufacture and incompatible with the CMOS fabrication process [19]. PMUTs are generating sustained interest as a method to overcome the limitations of conventional ultrasonic sensors [20]. The PMUTs-based ultrasonic sensors are fabricated by Micro Electro Mechanical System (MEMS) technology, which makes the sensors low cost, physically small, and compatible with the CMOS fabrication process. These features make PMUTs-based ultrasonic sensors easy to integrate with other systems and widely used in gesture recognition [21], robot safety control [22], flow meters [23], and ranging [24]. In our previous work, an ultrasonic proximity-sensing skin based on PMUTs is proposed [25]. This skin can detect targets within 300 mm and 60 degrees in front, which contains two PMUTs-based receivers and a PMUTs-based transmitter. In the experiment, the skin is installed on the front end of the robotic manipulator to detect targets. As the robot manipulator moves toward the target point, if the target is within the dangerous distance, the robot manipulator will stop in an emergency to avoid collision. In this work, only a yellow transparent plate is used. 
The effects of color, transparency, and motion state of the targets were not investigated further.

In this paper, an ultrasonic target detection system based on PMUTs is proposed. Compared with our previous work, not only the effects of color, transparency, and motion state of the targets on the detection results are investigated, but also the minimum resolution of the proposed system. The proposed system consists of the PMUTs-based ultrasonic sensor and the sensor system. The PMUTs-based ultrasonic sensor consists of a transmitter and a receiver, both of which are PMUTs arrays with a resonant frequency of 115 kHz. The received ultrasonic signals are used to calculate the position of the targets through the sensor system. The static and dynamic performance of the proposed system is investigated, together with a comparison with a commercial laser sensor. In the static experiments, the effects of targets with different colors and transparencies are investigated in the range of 200 mm to 320 mm. Black and white targets are used to verify the effect of color on the sensor, while white and transparent targets are used to verify the effect of transparency. In addition, the detection accuracy of the proposed system is investigated. Compared with the laser sensor, the detection results of the PMUTs-based ultrasonic sensor are more stable, with a maximum error of 0.572 mm. In the dynamic experiments, the influence of targets in motion is investigated to verify the effectiveness of the proposed system when targets move with a conveyor belt. Additionally, the minimum resolution of the system is measured to be about 0.5 mm. Finally, in order to investigate the communication function, the proposed system is combined with a robotic manipulator for a comprehensive experiment. In the comprehensive experiment, the UR3 robotic manipulator completes the detection, grasping, moving, and resetting of a 1 mm thick target through information interaction under the control of the proposed ultrasonic target detection system. Based on the modularization and miniaturization of the proposed system, a glass substrate detection method with low cost, small size, and proper performance is realized.

The Method of Time of Flight

In the pulse-echo method used by ultrasonic sensors and laser sensors, Time of Flight (ToF) is the most widely used target detection principle [26]. The transmitter emits a set of pulses, which are reflected back to the receiver after encountering a target. Pulses can be absorbed, reflected, and transmitted at surfaces of different materials. Ignoring the influence of absorption, whether the target will be detected is related to the reflectivity and transmittance of the material. When the reflected echoes are received, there is a time delay between them and the emitted pulses. Therefore, the distance between the target and the sensor is determined by the difference between the arrival time of the echoes and the emission time of the pulses, which is called the ToF. The distance L can be calculated by the following formula:

L = vT/2, (1)

where v is the acoustic velocity in air and T is the ToF. It is worth noting that major sources of data fluctuation can be found in additive noise affecting the acquired ultrasonic signal, shape distortion of the received echo, and the dependence of the propagation velocity on temperature [27].
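A minimal numerical sketch of Equation (1) is given below. The temperature model for the sound speed and the example ToF value are illustrative assumptions, not measurements reported in this work.

```python
# Minimal sketch of pulse-echo ranging with Equation (1): L = v*T/2.
# The temperature model for v and the example ToF below are assumptions
# for illustration, not values reported in the paper.

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at a given temperature (deg C)."""
    return 331.4 + 0.6 * temp_c

def pulse_echo_distance(tof_s: float, temp_c: float = 20.0) -> float:
    """Distance to the target (m) from a round-trip time of flight (s)."""
    return speed_of_sound(temp_c) * tof_s / 2.0

if __name__ == "__main__":
    tof = 1.5e-3  # example round-trip ToF of 1.5 ms
    print(f"L = {pulse_echo_distance(tof) * 1000:.1f} mm")  # ~257 mm at 20 deg C
```

The explicit temperature term reflects the remark above that the propagation velocity depends on temperature, which is one of the main sources of ranging fluctuation.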
Reflectivity and Transmittance

The reflectivity and transmittance of ultrasonic waves are mainly related to the acoustic impedance. The acoustic impedance Z is expressed as:

Z = ρc, (2)

where ρ is the density of the material and c is the acoustic velocity in the material. The reflectivity r_u of ultrasonic waves is expressed as:

r_u = (Z_2 − Z_1)/(Z_2 + Z_1), (3)

where Z_1 is the acoustic impedance of material 1 and Z_2 is the acoustic impedance of material 2. The transmittance t_u of ultrasonic waves is expressed as:

t_u = 2Z_2/(Z_1 + Z_2). (4)

The above formulas show that the reflectivity and transmittance of ultrasonic waves are related only to the acoustic impedance. Since the acoustic impedance is an inherent constant of a material and is independent of color and transparency, ultrasonic sensors are suitable for detecting targets with different colors and transparencies. Similarly, the reflectance r_l and transmittance t_l of light waves are expressed as:

r_l = (n_1 − n_2)/(n_1 + n_2), (5)

t_l = 2n_1/(n_1 + n_2), (6)

where n_1 is the refractive index of material 1 and n_2 is the refractive index of material 2. The reflectivity and transmittance of light waves are related to the optical properties of the materials. These optical properties are affected by many factors, such as color and transparency. Therefore, even nominally identical materials can show different results.
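To illustrate why an acrylic target is easy to detect acoustically but hard to detect optically, the sketch below evaluates Equations (2)-(6) at an air/acrylic interface. The density, sound-speed, and refractive-index values are typical textbook figures assumed here for illustration; they are not measurements from this work.

```python
# Sketch comparing acoustic and optical interface coefficients (Eqs. (2)-(6)).
# Material constants are typical textbook values (assumed, not from the paper).

def acoustic_impedance(density: float, sound_speed: float) -> float:
    """Z = rho * c, in Rayl (kg m^-2 s^-1)."""
    return density * sound_speed

# Medium 1: air, medium 2: acrylic (PMMA)
Z1 = acoustic_impedance(1.2, 343.0)        # ~4.1e2 Rayl
Z2 = acoustic_impedance(1180.0, 2750.0)    # ~3.2e6 Rayl

r_u = (Z2 - Z1) / (Z2 + Z1)                # Eq. (3): ~0.9997, almost total reflection
t_u = 2 * Z2 / (Z1 + Z2)                   # Eq. (4)

n1, n2 = 1.0, 1.49                         # refractive indices of air and acrylic
r_l = (n1 - n2) / (n1 + n2)                # Eq. (5): |r_l| ~ 0.197
t_l = 2 * n1 / (n1 + n2)                   # Eq. (6): ~0.803, most light is transmitted

print(f"ultrasound: |r_u| = {abs(r_u):.4f}")
print(f"light:      |r_l| = {abs(r_l):.3f}, t_l = {t_l:.3f}")
```

With these assumed constants, nearly all of the ultrasonic pressure is reflected at the air/acrylic boundary, whereas roughly 80% of the light is transmitted and only about 20% is reflected, in line with the proportions reported later in Table 2 for the transparent plates.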
Structure of the Ultrasonic Sensor Based on PMUTs

The PMUTs are fabricated on a Silicon-On-Insulator (SOI) wafer [25]. As shown in Figure 1, the PMUTs diaphragm comprises five parts: a Si elastic layer, a piezoelectric Aluminum Nitride (AlN) layer, Molybdenum (Mo) layers for the top and bottom electrodes, and an oxide layer. The use of Mo as the electrode material can improve the structural properties of AlN thin films [28]. The mutual conversion of electrical energy and acoustic energy is realized by this structure. When an AC signal is applied to the electrodes, the piezoelectric AlN layer generates a transverse internal stress due to the piezoelectric effect, which drives the diaphragm to vibrate periodically and generate ultrasonic waves. Conversely, the electrodes detect electrical signals when the diaphragm is hit by ultrasonic waves. The geometric and performance parameters of the PMUTs array were investigated in our previous work. The first resonance frequency of the central element is about 115 kHz, the −3 dB bandwidth is 0.908 kHz, and the Q factor is 126.65. In addition, the displacement sensitivity of the central point is 13.2 nm/Vpp, measured by LDV under a sweep signal with frequencies from 1 kHz to 1 MHz [29].

As shown in Figure 2a, the ultrasonic sensor based on PMUTs consists of two 3 × 3 PMUTs arrays used as transmitter and receiver, respectively. The PMUTs arrays are integrated on a 0.5 mm thick Flexible Printed Circuit Board (FPCB) by wire bonding. The size of each PMUTs array is 4 mm × 4 mm and the center distance between the two arrays is 15 mm. The size of the entire sensor is 10 mm × 47 mm × 1.7 mm. As shown in Figure 2b, the distance between the transmitter and the receiver is s. During detection, when the target is placed parallel to the sensor, the path of the ultrasonic waves forms an isosceles triangle. According to Equation (1), the distance L can therefore be calculated by the following formula:

L = sqrt((vT/2)^2 − (s/2)^2). (7)
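A small sketch of Equation (7) follows: with the transmitter and receiver separated by s, the echo path forms an isosceles triangle, and the perpendicular target distance is recovered from the round-trip ToF. The s = 15 mm value is the array spacing given above, while the sound speed and example ToF are assumptions for illustration.

```python
import math

# Sketch of Equation (7): perpendicular target distance for a separated
# transmitter/receiver pair. s = 15 mm is the center distance quoted above;
# the sound speed and example ToF are assumed for illustration.

def target_distance(tof_s: float, v: float = 343.0, s: float = 0.015) -> float:
    """Distance L (m) from round-trip ToF, correcting for the Tx-Rx spacing s."""
    half_path = v * tof_s / 2.0            # one leg of the isosceles triangle
    return math.sqrt(half_path ** 2 - (s / 2.0) ** 2)

if __name__ == "__main__":
    tof = 1.5e-3                            # example round-trip ToF of 1.5 ms
    print(f"L = {target_distance(tof) * 1000:.1f} mm")
```

For targets in the 200-320 mm range studied here, the spacing correction is small (a fraction of a millimetre), but it keeps the geometry consistent with Figure 2b.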
Design of the Sensor System

As shown in Figure 3, the sensor system consists of the hardware system and the software system. The hardware system includes the function generator, the amplifier circuit board, the data acquisition board, and Personal Computer 1 (PC1). The function generator excites the transmitter of the PMUTs-based ultrasonic sensor to emit ultrasonic signals. The received ultrasonic signals are then amplified by the amplifier circuit board, and the data acquisition board converts the analog signal into a digital signal. PC1 acts as the data processing center on which the software system runs. A typical control period of the hardware is as follows. First, the PMUTs-based ultrasonic sensor is excited by a burst signal (115 kHz, 5 Vpp, 10 sinusoidal cycles). At the same time, the function generator sends a synchronization signal to the data acquisition board (MCC USB-2020). Then, the received ultrasonic signals are amplified by the amplifier circuit board. After acquisition by the data acquisition board, the ultrasonic signals are sent to PC1 (LabVIEW 2016).

The software system is used to calculate the position of the targets in real time. As shown in Figure 3, the software system includes the data acquisition program, the filter program, the demodulation program, and the ToF calculation program. First, the received signals are converted into signal clusters through the data acquisition program with a sampling frequency of 1 MHz. Second, a wavelet filter and a Butterworth band-pass filter are used to remove noise in the filter program: the wavelet filter removes noise near the resonant frequency, and the Butterworth band-pass filter removes noise outside the resonant frequency. Echo signals with a high signal-to-noise ratio are then obtained. Third, the envelope of the ultrasonic echo signals is extracted by the Hilbert transform in the demodulation program. Finally, the ToF is calculated through an improved peak detection algorithm. Compared with the traditional peak detection algorithm, noise has less influence on the improved algorithm, so the detection accuracy is higher.

Figure 4 shows consecutive echo signals and the envelope curve of the received ultrasonic signals. As shown in Figure 4, the actual ToF is the time from the start of the burst signal to the zero-crossing point of the echo signal (ToF). Since it is difficult to detect the zero-crossing point, in the traditional peak detection algorithm the ToF is approximated by the time from the end of the burst signal to the peak value of the echo signal (T1). Due to the influence of noise, the peak value of the echo signal is still unstable in some situations. However, the ascent phase of the echo signal is monotonically increasing and is less affected by noise. Therefore, in the improved algorithm, T1 is approximated by the time from 0.7 of the end of the burst signal to 0.7 of the peak value of the echo signal (T2). It should be noted that 0.7 is an empirical value; when the threshold is set to 0.7, more stable detection results are obtained. In addition, when the echo signal is distorted or saturated, false detections caused by indistinct envelope peaks can be avoided using the improved peak detection algorithm. The detailed pseudocode of the improved peak detection algorithm is shown in Algorithm 1.
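The following sketch mirrors the software pipeline described above (band-pass filtering, Hilbert-transform envelope extraction, and a rising-edge crossing of 0.7 of the envelope peak). The 1 MHz sampling rate and 115 kHz burst follow the text, but the filter order, the band edges, the synthetic echo, and all variable names are illustrative assumptions; the wavelet-denoising stage is omitted, and the threshold-crossing step is one plausible reading of the improved peak detection algorithm rather than a transcription of Algorithm 1.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 1_000_000          # 1 MHz sampling rate, as in the acquisition program
F0 = 115_000            # PMUT resonant frequency (Hz)

def bandpass(x, low=100e3, high=130e3, order=4):
    """Butterworth band-pass around the resonant frequency (band edges assumed)."""
    sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def envelope(x):
    """Echo envelope obtained via the Hilbert transform."""
    return np.abs(hilbert(x))

def tof_from_envelope(env, threshold_ratio=0.7, burst_end_idx=0):
    """Time of the first rising-edge crossing of 0.7 * peak after the burst
    (0.7 is the empirical threshold mentioned in the text)."""
    search = env[burst_end_idx:]
    level = threshold_ratio * search.max()
    cross = np.argmax(search >= level)          # first index at/above the level
    return (burst_end_idx + cross) / FS

if __name__ == "__main__":
    # Synthetic received trace: a delayed, noisy 10-cycle burst echo (assumed).
    t = np.arange(0, 3e-3, 1 / FS)
    delay = 1.5e-3
    echo = np.where((t > delay) & (t < delay + 10 / F0),
                    np.sin(2 * np.pi * F0 * (t - delay)), 0.0)
    trace = echo + 0.05 * np.random.randn(t.size)
    env = envelope(bandpass(trace))
    print(f"estimated ToF ~ {tof_from_envelope(env) * 1e3:.2f} ms")
```

Using the leading edge of the envelope rather than its peak is what makes the estimate less sensitive to a distorted or saturated echo, which is the motivation given above for the improved algorithm.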
Interaction with the Robotic Manipulator

The proposed system also contains interfaces to interact with other systems. As shown in Figure 5, the motion of the robotic manipulator is controlled by communicating with the Robot Operating System (ROS). ROS runs on Personal Computer 2 (PC2) under Linux, and PC2 communicates with PC1 through TCP/IP. After PC1 sends a location message to PC2 through ROS for LabVIEW, MoveIt! controls the robotic manipulator (UR3) to move along the solved optimal motion path [30]. At the same time, the motion states of the robotic manipulator are displayed graphically in real time by Rviz. ROS for LabVIEW is a set of VIs for communicating with ROS applications (a VI is the smallest unit of execution in LabVIEW).
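A minimal sketch of the PC1-to-PC2 hand-off over TCP/IP is given below. In the actual system PC1 runs LabVIEW with ROS for LabVIEW, so the plain-socket client, the host and port values, and the JSON message schema shown here are purely illustrative assumptions.

```python
import json
import socket

# Hypothetical sketch of sending a detected target position from PC1 to PC2.
# Host, port, and message schema are assumptions; the real system uses
# ROS for LabVIEW rather than a raw socket.

PC2_HOST = "192.168.1.20"   # assumed address of the ROS computer
PC2_PORT = 5005             # assumed port

def send_target_position(distance_mm: float) -> None:
    """Send one JSON-encoded location message to the motion-planning PC."""
    message = json.dumps({"target_distance_mm": distance_mm}).encode()
    with socket.create_connection((PC2_HOST, PC2_PORT), timeout=1.0) as conn:
        conn.sendall(message)

if __name__ == "__main__":
    send_target_position(257.1)   # example distance value
```

On the receiving side, such a message would be translated into a goal for MoveIt!, which then plans and executes the manipulator motion.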
Experiment Setup

The characteristics of the proposed system are investigated by static experiments, dynamic experiments (including a comparison with a commercial laser sensor), and comprehensive experiments. Based on the laboratory environment, acrylic plates are selected as the experimental materials.

In the static experiments, the effects of color and transparency on the detection results are investigated. The experimental setup of the PMUTs-based ultrasonic sensor is shown in Figure 6a. The PMUTs-based ultrasonic sensor is placed on one side of coordinate paper with an accuracy of 1 mm, and the acrylic plates are placed on the other side. The thicknesses of the black, white, and transparent acrylic plates are all 5 mm. Black and white acrylic plates are used as experimental targets to compare the effect of color, whereas white and transparent acrylic plates are used to compare the effect of transparency on the experimental results. The acrylic plates are moved from 200 mm to 320 mm in 5 mm steps. Similarly, the experimental setup of the laser sensor (MyAntenna L1s-40) is shown in Figure 6b. The relevant parameters of the laser sensor are shown in Table 1.

In the dynamic experiments, the effects of targets in motion on the detection results are investigated by simulating the motion of the targets on a conveyor belt. As shown in Figure 7, based on the laboratory environment, a black acrylic plate is used as the bottom plate instead of the conveyor belt, and a black, white, or transparent acrylic plate is placed on its front side. The stacked acrylic plates are placed on a linear guide. The linear guide is placed parallel to the sensors, which allows the acrylic plates to move horizontally relative to the sensors. The thicknesses of the three kinds of acrylic plates are nominally 1 mm, 1.5 mm, 2 mm, and 2.5 mm. Measured by a vernier caliper, the actual thicknesses are 1.08 mm, 1.69 mm, 1.84 mm, and 2.42 mm for the black acrylic plates; 1.00 mm, 1.73 mm, 1.80 mm, and 2.57 mm for the white acrylic plates; and 0.97 mm, 1.39 mm, 1.86 mm, and 2.31 mm for the transparent acrylic plates. With the movement of the linear guide, the detection point of the sensors is transferred from the upper surface of the black bottom plate (the lower surface of the targets) to the upper surface of the targets. The thicknesses of the acrylic plates can then be obtained from the difference between the two detection results. In addition, the minimum resolution of the proposed system is examined using transparent acrylic plates of small thickness.

In order to investigate the feasibility of the proposed system in cooperating with other systems, the comprehensive experiments are designed by combining the proposed system with a robotic manipulator. The working environment of an automated production line is simulated in the laboratory: a transparent acrylic plate with a thickness of 1 mm needs to be grabbed by the robotic manipulator from the starting point and placed at the target point along a fixed path. The devices of the comprehensive experiment are shown in Figure 8. The experimental process is as follows: initially, the robotic manipulator stops above the starting point; the PMUTs-based ultrasonic sensor placed at the end of the robotic manipulator detects its surroundings in real time and returns distance values to PC1; when a transparent acrylic plate is detected at the starting point, the distance values change; the robotic manipulator is then controlled to grab the transparent acrylic plate and move it to the target point at a speed of 0.1 m/s.
Static Experiment Results

Usually, the detection result cannot be determined from a single measurement; it is common to take the average value of multiple groups of results. In order to obtain a comprehensive assessment, the detection results are evaluated by the Root Mean Square Error (RMSE). The RMSE is the error between the measurements and the true values, and represents the accuracy of the detection [33]. The formula of the RMSE is as follows:

RMSE = sqrt( (1/n) * Σ_{i=1}^{n} (f_i − y_i)^2 ), (8)

where y_i are the true distance values, which are measured on the coordinate paper, and f_i are the average values of the detection results at each distance.
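A short sketch of Equation (8) applied to a hypothetical set of static measurements is given below; the example readings are invented for illustration and are not the data behind Figure 9.

```python
import numpy as np

def rmse(measured_means, true_values):
    """Root Mean Square Error between mean detections f_i and true distances y_i (Eq. (8))."""
    f = np.asarray(measured_means, dtype=float)
    y = np.asarray(true_values, dtype=float)
    return float(np.sqrt(np.mean((f - y) ** 2)))

# Invented example: true positions read from the coordinate paper (mm) and
# the corresponding mean detected distances (mm).
y_true = [200, 205, 210, 215, 220]
f_mean = [200.4, 205.6, 209.5, 215.3, 220.6]
print(f"RMSE = {rmse(f_mean, y_true):.2f} mm")   # ~0.49 mm for this invented data
```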
According to the data, the experimental results are shown in Figure 9. For the black, white, and transparent acrylic plates, the RMSEs of the PMUTs-based ultrasonic sensor are 0.51 mm, 0.50 mm, and 0.53 mm, whereas the RMSEs of the laser sensor are 2.89 mm, 0.62 mm, and not applicable. For the PMUTs-based ultrasonic sensor, the maximum difference between the RMSEs is 0.03 mm, which means that color and transparency have little effect on the detection results. Consistent with the RMSEs, the detection curves of the PMUTs-based ultrasonic sensor almost overlap. On the other hand, for the laser sensor, the detection curve of the black acrylic plates is obviously shifted from the baseline. This is caused by the weakening of the received signal strength due to the absorption of light waves by the black targets. For the transparent acrylic plates, the RMSE is not applicable since the laser sensor cannot produce results on transparent targets: most of the light waves are transmitted due to the high transmittance. As shown in Table 2, 80.321% of the light waves penetrated the transparent acrylic plates and only 19.679% were reflected, which is not enough to calculate the target position. The static experiments show that the color and transparency of the targets do not affect the detection results of the proposed system. In contrast, the laser sensor cannot accurately detect black and transparent targets, because light is absorbed by black targets and penetrates transparent targets.

Dynamic Experiment Results

In the dynamic experiments, as shown in Figure 10, the changes in the detection curves represent the thicknesses of the targets. When using the black acrylic plates as the detected targets, the detection results are shown in Figure 10a,b. For the PMUTs-based ultrasonic sensor, the differences between the detected values and the actual thicknesses of the black acrylic plates are 0.03 mm, 0.05 mm, 0.03 mm, and 0.01 mm, and the corresponding errors are 2.78%, 2.96%, 1.63%, and 0.41%. For the laser sensor, since its minimum resolution is 1 mm, the acrylic plates with thicknesses of 1.5 mm and 2.5 mm cannot be distinguished; the detected values are therefore 1.0 mm, 2.0 mm, 2.0 mm, and 2.0 mm.
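The percentage errors quoted in this section follow directly from the difference between the two detected surfaces divided by the caliper-measured thickness. The sketch below simply reproduces that arithmetic for the black plates, using the thicknesses and deviations reported above.

```python
# Reproducing the error arithmetic for the black acrylic plates:
# error (%) = |detected thickness - caliper thickness| / caliper thickness * 100.
# Caliper thicknesses and detection deviations are the values quoted above.

actual = [1.08, 1.69, 1.84, 2.42]        # caliper-measured thicknesses (mm)
deviation = [0.03, 0.05, 0.03, 0.01]     # |detected - actual| reported above (mm)

for a, d in zip(actual, deviation):
    print(f"{a:.2f} mm plate: error = {100 * d / a:.2f} %")
# -> 2.78 %, 2.96 %, 1.63 %, 0.41 %, matching the percentages in the text
```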
When the black acrylic plates are replaced by the white acrylic plates, the detection results are shown in Figure 10c,d. For the PMUTs-based ultrasonic sensor, the differences between the detected values and the actual thicknesses of the white acrylic plates are 0.04 mm, 0.04 mm, 0.05 mm, and 0.07 mm, while the errors are 4.00%, 2.31%, 2.78%, and 2.72%, respectively. In contrast, the detected values of the laser sensor increase as the thickness increases; because the changes in thickness are smaller than the RMSE obtained on the black bottom plate, the detected values do not decrease, but increase.

When using the transparent acrylic plates as the detected targets, the detection results are shown in Figure 10e,f. For the PMUTs-based ultrasonic sensor, the detected values differ from the actual thicknesses of the transparent acrylic plates by 0.01 mm, 0.02 mm, 0.02 mm, and 0.05 mm, and the detection errors are 1.03%, 1.44%, 1.08%, and 2.16%, respectively. For the laser sensor, the detected values do not change significantly; the fluctuation of 1 mm might be caused by interference from environmental noise. The light waves penetrate the transparent acrylic plates, and only the light waves reflected from the black acrylic plate are detected by the laser sensor.

The minimum resolution of the PMUTs-based ultrasonic sensor is further investigated. As shown in Figure 11, when using a transparent acrylic plate with an actual thickness of 0.50 mm, the detected value of the proposed system is 0.51 mm. The dynamic experiments demonstrate that the proposed system can detect targets with high precision. Even if the targets are in motion, the detection results are not affected by color or transparency. In the experiments, the maximum detection error of the proposed system is 4.00% for targets with thicknesses of 1 mm, 1.5 mm, 2 mm, and 2.5 mm, and the minimum resolution is about 0.5 mm. In contrast, the laser sensor only has a minimum resolution of 1 mm; therefore, the acrylic plates with thicknesses of 1.5 mm and 2.5 mm cannot be distinguished. In addition, the laser sensor cannot detect transparent targets.
Figure 11. Detection results of the PMUTs-based ultrasonic sensor for a transparent acrylic plate with a thickness of 0.501 mm, and the changes in the detection signals.

Comprehensive Experiment Results

The comprehensive experimental results are shown in Figure 12. As shown in Figure 12, the distance values fluctuate throughout the whole process. This may be caused by two factors. First, when the robotic manipulator is stationary, the fluctuations are mainly related to environmental noise. Second, when the robotic manipulator is in motion, the fluctuations are mainly related to the vibrations of the motors. In addition, when the PMUTs-based ultrasonic sensor is fixed on the robotic manipulator, small vibrations of the moving manipulator are picked up by the sensor and coupled into the detection signals. The response time of the whole system is about 330 ms, including the burst period of 50 ms and the 89 ms needed for calculating the optimal path and controlling the robotic manipulator. According to the user manual, the robotic manipulator has a response time of 159 ms. These factors account for 90.303% of the total response time; the remainder may come from the transmission delay of the local area network.
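The 90.303% figure follows from summing the three timing contributions quoted above against the roughly 330 ms end-to-end response time; a quick check of that arithmetic, using only the values stated in the text, is shown below.

```python
# Response-time budget check using the values quoted above (ms).
burst_period = 50           # ultrasonic burst period
planning_and_control = 89   # optimal-path calculation and manipulator control
manipulator_response = 159  # UR3 response time from the user manual
total = 330                 # measured end-to-end response time

accounted = burst_period + planning_and_control + manipulator_response
print(f"accounted for: {accounted} ms ({100 * accounted / total:.3f} % of {total} ms)")
# -> 298 ms, about 90.303 %; the remainder is attributed to LAN transmission delay
```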
The comprehensive experimental results show that the proposed ultrasonic target detection system can successfully detect transparent targets and guide the robotic manipulator to complete practical production tasks such as grasping and moving.

Conclusions

In this paper, an ultrasonic target detection system based on PMUTs is proposed, which can obtain the precise position of targets. The proposed system consists of the PMUTs-based ultrasonic sensor and the sensor system, and the sensor system can interact with a robotic manipulator through TCP/IP. The characteristics of the proposed system under static and dynamic conditions are characterized, and a comparison is made with a commercial laser sensor. Even with different colors, transparencies, and motion states, the proposed system can accurately detect the position of the targets. In the static experiments, the static location errors of the proposed system are 0.51 mm, 0.50 mm, and 0.53 mm for black, white, and transparent targets, respectively, in the range of 200 to 320 mm. In contrast, the RMSEs of the laser sensor are 2.89 mm, 0.62 mm, and not applicable. Since light waves are absorbed by black targets, there is a significant difference in the accuracy of the two sensors, and the results for transparent targets cannot be compared at all. In the dynamic experiments, for targets with thicknesses of 1 mm, 1.5 mm, 2 mm, and 2.5 mm, the maximum detection error of the proposed system is 4.00%. In contrast, the laser sensor cannot distinguish targets with thicknesses of 1.5 mm and 2.5 mm due to its minimum resolution of 1 mm, whereas the minimum resolution of the proposed system is about 0.5 mm. Besides, the comprehensive experiments verify the feasibility of the proposed system in cooperation with the robotic manipulator: the system is required to detect a transparent target with a thickness of 1 mm and control the robotic manipulator to complete the subsequent grasping and moving operations. The experimental results demonstrate that the proposed system successfully accomplishes the specified task, which shows its potential in target detection, especially for transparent targets. With the further development of technology, glass substrates will be more widely used in smart phones and smart screens, and their thicknesses will be further reduced. In order to widen the scope of application of the proposed system, improving the detection algorithm to increase the detection accuracy is the focus of future work. On the other hand, the proposed system is only a prototype.
How to improve the integration of the hardware part in the sensor system is also an aspect that cannot be ignored.
Free healthcare for some, fee-paying for the rest: adaptive practices and ethical issues in rural communities in the district of Boulsa, Burkina Faso ABSTRACT In Burkina Faso, in July 2016, user fees were removed at all public healthcare facilities, but only for children under 60 months of age and for “mothers”, i.e. for reproductive care. This study was conducted in five rural communities in Boulsa District (Burkina Faso) (1) to understand the perceptions and practices of stakeholders regarding compliance with eligibility criteria for free care and (2) to explore the ethical tensions that may have resulted from this policy. Semi-directed individual interviews (n = 20) were conducted with healthcare personnel and mothers of young children. Interviews were recorded and transcribed, and a thematic content analysis was conducted. The study reveals the presence of practices to circumvent strict compliance with the eligibility criteria for free access. These include hiding the exact age of children over 60 months and using eligible persons for the benefit of others. These practices result from ethical and economic tensions experienced by the beneficiaries. They also raise dilemmas among healthcare providers, who have to enforce compliance with the eligibility criteria while realizing the households’ deprivation. Informal adjustments are introduced at the community level to reconcile the healthcare providers’ dissonance. Local reinvention mechanisms help in overcoming ethical tensions and in implementing the policy. Introduction Despite significant progress over the past 20 years, maternal and child mortality remains a major public health problem in low-and moderate-income countries (United Nations Development Program, 2020). In Burkina Faso, a country ranked 182nd out of 189 on the Human Development Index, the under-5 mortality rate was estimated at 7600 per 100,000 live births in 2018, and the maternal mortality rate at 320 per 100,000 live births, compared to an average of 500 and 12 in high-income countries, respectively (Hug et al., 2019;WHO et al., 2015). Although this heavy burden of mortality is caused by a set of interrelated factors from different spheres (economic, social, biological, environmental, etc.), it is systematically more important in rural and deprived populations (Black et al., 2016;Denno & Paul, 2017;Okwaraji & Edmond, 2012) with poverty and rurality acting as fundamental causes, to use Link and Phelan's expression (Link & Phelan, 2002). The crux of the issue is that a large proportion of these maternal and child deaths could be prevented by primary care or standard therapeutic interventions (Claeson et al., 2003;Khan et al., 2006;Liu et al., 2015;Ng et al., 2014). Therefore, addressing the lack of access to high-quality primary healthcare is a priority of the Sustainable Development Goals, encapsulated in the Universal Health Coverage policy (Kieny et al., 2017;Pettigrew et al., 2015;Rutherford et al., 2010). Burkina Faso has taken various measures to improve financial accessibility to maternal and child healthcare. As early as 2006, it introduced the policy of subsidizing emergency obstetric and neonatal care, which reduced the price of reproductive health services by 80% (Ganaba et al., 2016). Around the same time, a few pilot projects were initiated in certain districts to abolish direct payment for children under the age of five at health facilities (Ridde, 2014). 
Finally, in 2016, the country scaled up to the national level the exemption from direct payment in public health facilities for maternal and child care (Gouvernement du Burkina Faso, 2016). This user fee exemption covers the costs of consultations, diagnosis, prescribed drugs and transportation to a referral hospital. It applies to all children under the age of five, regardless of the reason for the consultation, and for reproductive care (pre-and postnatal consultations, deliveries, cesareans, etc.). Scientific studies have shown, in Burkina Faso and elsewhere in sub-Saharan Africa, the positive impacts of free healthcare policies on population health. Such policies improve access to healthcare, decrease catastrophic health spending and health inequalities, shorten the time before consultation, and reduce self-treatment practices as well as the proportion of home deliveries (Bassani et al., 2013;De Allegri et al., 2012;Druetz et al., 2015;Dzakpasu et al., 2014;Hatt et al., 2013). Evidence also suggests that the abolition of direct payment improves certain morbidity indicators and reduces neonatal mortality (El-Khoury et al., 2012;McKinnon et al., 2015;Qin et al., 2018). Implementation studies reveal broad acceptability and support from communities and health personnel, in addition to debunking certain myths regarding their feasibility Carasso et al., 2012;Druetz et al., 2017;Ridde et al., 2014;Tama et al., 2018). However, they also highlight pressures that can affect the quality of care, healthcare providers' motivation, health system weaknesses, and the fundamental issue of sustainability (Ansu-Mensah et al., 2021;Brunet-Jailly, 2018;Witter, 2009). Despite these numerous studies, little knowledge has been gathered on the ethical issues surrounding free policies. In particular, while some writings suggest that the eligibility criteria are sometimes difficult to meet and can even give rise to dilemmas or tensions in the community, this dimension remains unexplored by rigorous qualitative studies (Nimpagaritse & Bertone, 2011 ;Pardeshi, 2014). How do the actors perceive the limitations of the user fee exemption, and how have they appropriated this policy through their practices? This problem becomes even more important as evidence accumulates regarding the extent of the needs of certain categories of ineligible individuals, notably school-aged children (Clark et al., 2020). The objective of this study is twofold: (1) to understand the perceptions and practices of health personnel, beneficiaries, and caregivers with regard to compliance with the eligibility criteria for free healthcare and (2) to explore the ethical tensions that have arisen from it and the coping mechanisms settled at the community level. Methods A cross-sectional qualitative study was carried out in the Boulsa health district in Burkina Faso to understand the ethical issues related to compliance with the eligibility criteria for free healthcare. We conducted semi-structured individual interviews with the main beneficiaries and providers of free care: mothers of young children (n = 10) and members of the healthcare staff (n = 10). A COREQ checklist (consolidated criteria for reporting qualitative research) is available in Additional File 1. Interviews were first conducted with healthcare staff until saturation was reached. After reviewing personal notes and discussions between the interviewer and other team members, the interview guide for mothers was reworked. 
To balance the volume of information collected between the two categories of participants, ten more interviews were conducted with mothers; again, saturation had been reached. The collection took place between September and December 2018, while the implementation of the free service was in a routine phase (two years after its introduction). At the time of their enrolment, participants were unknown to the research team. Study site This study was conducted in five rural communities in the Boulsa District: Niega, Bonam, Boala, Yarcé and Zambanga. The district was selected for convenience, as it is a rural district in a safe and accessible area where staff from the health district authorities were known and trusted by the research team, and where there had not been a pilot project of free healthcare before the introduction of the national policy in July 2016. The Boulsa District is located in the North-Central region, about 100 km from the capital, and has a population of approximately 210,000 served by 19 primary health centres (Ministère de la Santé, 2017). The five communities were purposively selected after consultation with the district's authorities, based on the proximity to the nearest primary health centre with maternity services. Recruitment of participants In each primary health centre, the head nurse was approached and, after briefly outlining the purpose of the study, we sought his/her consent. We then approached a dedicated maternal health staff member (one midwife or auxiliary midwife per health centre) for recruitment. In each of the five communities, two households were purposefully identified using the assistance of the local community health worker. The eligibility criteria for the households were that they were well established in the community (had resided there for several years) and had at least one child under 5 years of age and one mother above 18 years old. In each household, the head was informed of the project and, upon approval, an adult mother of a child under five was approached to proceed with recruitment. All approached individuals agreed to participate and gave consent. Data collection For the health staff, interviews were conducted at the health centre in a consultation room, while the mothers were interviewed in their homes. All interviews were individual and semi-structured, with an interview guide specific to the type of participant that was developed for this study (Additional File 2). Interviews were conducted in a room or a remote location that guaranteed the confidentiality of the respondents. The guide was slightly enriched as the data were collected, particularly with respect to sub-questions used to reopen the discussion or to explore a theme in greater depth. Interviews with caregivers lasted approximately 30 min, while those with mothers lasted between 30 and 60 min. They were all conducted by a single female interviewer with vast experience in qualitative research (A Bila) who speaks the local language. Interviews were conducted in French or in Mooré, depending on the participant's preference. They were recorded, transcribed verbatim and translated into French (in the case of those conducted in Mooré). The transcripts were validated by a second person who listened to the original audio recordings. Field notes were taken during the interviews and used during the transcripts' validation to add non-verbal information to the material. Transcripts were not returned to participants.
Analyses A thematic content analysis was carried out on all the material (Miles et al., 2014). We developed a common coding grid and independently coded the transcripts line by line (Additional File 3). We identified the dominant categories within the collected data and defined them as themes. Initial themes were discussed with team members and reformulated as necessary; some emerging themes that were accepted as significant were added. Double coding was performed to ensure that the themes were understood in the same way. The rare discrepancies (n < 10) were discussed and reconciled after clarification. QDA Miner software was used to facilitate the analyses. The participants did not provide feedback on the findings. Ethical considerations This study has been approved by the Ethics Committee for Health Research in Burkina Faso (Deliberation #2018-6-075) and has received research authorization from the Ministry of Health of Burkina Faso (#2018-3032/MS/SG/DGESS/DSS). All participants provided written consent after the researcher explained the objectives of the study, read the consent form, and ensured that the information was understood. The researcher also presented the research institutions and their role in the project. The audio recordings and transcripts were stored on a computer with secure access. Participation was not remunerated. Results A total of 20 individual interviews were conducted, 10 with healthcare providers and 10 with mothers (as beneficiaries and caregivers). The main sociodemographic characteristics of the participants are summarized in Table 1. Most of the interviewed mothers (7/10) were illiterate and worked at home (6/10). They all lived in small dwellings made of mud, which is common in rural Burkina Faso, and had no visible access to electricity. Knowledge about the exemption's eligibility criteria All providers and beneficiaries are aware of the existence of the free healthcare policy. They all recognize that this policy applies to all pregnant women as well as to children from 0 to 5 years old. However, two ambiguities remain with regard to the target populations. On the one hand, mothers do not know exactly whether they are entitled to free postpartum care, nor until when; the official limit (free postpartum care up to 42 days after delivery) is difficult to comprehend. On the other hand, there is ambiguity regarding children, as some caregivers think that free care includes children aged 5 years, while others think that it excludes them and in fact only concerns children aged 0-59 months. Also, some mothers claim that free care is universal in nature; i.e. it covers all types of care for the target population. Health personnel confirm this misinterpretation and point out that, according to the guidelines, free care is universal only for children aged 0-5 years, and not for pregnant women. For the latter, only specific and pregnancy-related care is free. Circumventing practices from beneficiaries As mentioned above, lack of knowledge of the strict eligibility criteria leads to situations where a patient may be denied free care when they believe in good faith that they are entitled to it. But there are also several deliberate circumventing practices, reported by both health personnel and mothers, to extend the benefits of free care to situations where it is not normally applied. One of the most commonly reported practices is to hide the exact age of children aged 60 months or older in order to become eligible.
This practice sometimes results in impersonation, when identification documents of another child under the age of 5 (e.g. younger sibling) are brought in as proof of age. Often they bring the child, you can see that the child is over five years old, but she says that the child is not even five years old; they don't bring the child's notebook, so it's difficult to define the child's age. (HP3) [If] they bring the little brother's health booklet and come and say it's his name, and it's his age. We can't know. (CG6) Another example is using an eligible person to receive a free consultation or medication for the benefit of another accompanying family member. The women who come to pick up the products, they're likely going to give them to their husbands, their co-wives or their children who are a little bit older but no longer eligible. Many people do that, bring the little one to the clinic when the big one is sick. (CG5) If the child's age goes up, you know he/she can no longer benefit. You can take advantage of the paracetamol from the small child to give to the big one. (CG1) Even if it's not true, you're going to sit down and close your eyes (laughs). And you hope that they give you the drug! And you take it to give it later to your children. (CG2) These practices are risky, however, because the treatment given to one beneficiary is not necessarily the same as the treatment that another family member should have. They don't know the dosage here. If it's a child under the age of five, if it's a small child, for example, it's one tablet twice a day, whereas if it's a child who's 6 or 7 years old, the child can't take one tablet twice, it's two tablets, twice a day. She doesn't know. She'll think if I take him there, I'll pay, but I have nothing to pay, so I have to take the one for whom the products are free, get the products and give them for his big brother. Sometimes it's not easy, she won't tell you it's to give to another child. (HP1) Some providers also report that beneficiaries go to several health centres in order to accumulate a larger supply of drugs that can be used either to treat other family members or to build up a stockpile of drugs that can be used later. Yes, some women here can go to several health centres, that is, they can get up and consult here in Boala and then go to Bonam. That's how they do it anyway, some women do it to get a lot of medication. (CG6) As we go along, we realize that there are people who come twice, three times a week or even people who change their CSPS [health centre] to get treatment to take the medication and give it to someone else. (HP2) Sometimes this practice stems from the lack of medicines available in a particular health centre, which leads some beneficiaries to visit different centres. Some of the circumventing practices are specific to women of childbearing age. For example, some women of childbearing age may take advantage of the free services when they are pregnant to deal with health problems that are not related to pregnancy, and some use the free visits for their young children as a pretext when what they really want is to receive confidential family planning advice. Yes, a woman is going to take advantage of her pregnancy to cure other illnesses, her old illnesses … anyways. She takes advantage of pregnancy to receive care. To heal. (GC5) To receive family planning, many [women] take advantage of free visits of their children to do so. 
(GC9) Motives of circumventing practices The main reason put forward for wanting to go beyond the strict threshold for receiving free healthcare is directly related to economic vulnerability. This situation concerns most households, and especially women, as they do not have income-generating activities and generally do not have control over the household's finances. Ultimately, the decision to visit a health centre is left to husbands. They are less inclined to spend on maternal and child healthcare than the mothers, who are the primary caregivers in the family. During the dry season and the hunger gap, households' ability to spend is even more limited due to the absence of harvests. It depends on the head of the household. If you want [to receive healthcare], but you have nothing and your husband is not going to give you anything, if you're concerned, you're going to argue and say that you're going to look for a solution by lying about the age of the child. You look for a solution to … to get the medicine. (GC10) Mothers argue that the free healthcare policy does not in itself raise any injustice since the same eligibility criteria apply to all households. However, the problem of access due to an inability to pay for healthcare shifts to older children, who are excluded not only from free healthcare but also from most childhood interventions, such as seasonal chemoprophylaxis for malaria. This lack of consideration for older children raises ethical issues. When it's over 5 years, you want it to be 4 or 3 years again. (CG6) It's a problem for us, because if they always say it's those who are under five years old who benefit … they choose some to receive, and others not to receive; that's always a problem for us. (CG3) For example, the malaria medicine they give here, if a child is over five years old, they leave and go take another one. But all children are going to get sick from malaria; they should help us with all the children. (CG1) Another reason associated with circumventing practices is the anticipation of no longer being eligible for free services. In the same vein, some fear that implementation issues, particularly in terms of drug supply, would prevent them from benefiting from free healthcare. This leads some people to accumulate a small stockpile of medicines at home in the event that free care is not always accessible. You take advantage of the pregnancy and you get treatment. The pregnancy is going to end, where are you going to go to get it fixed? As long as I'm free of this disease, even if it requires to say that it's only when I became pregnant that the disease came. (CG2) For example, if my child is sick, I go there to receive the medicine … The problem is when there's no medicine there. (CG3) There are people who go from CSPS [health centre] to CSPS. They can go here in the morning to consult, and since it's close by, they can go to two other health facilities on the same day to collect the products and store them. (HP9) Ethical Tensions Experienced by Healthcare Providers Healthcare providers acknowledge that the situation is not easy for them. On the one hand, they are subject to the hierarchy of the health system and obliged to enforce the rules, in this case, strict compliance with the eligibility criteria for free healthcare. On the other hand, they are moved by the financial difficulties encountered by community members, with whom they share their daily lives. 
They faced a tension of an ethical and professional nature in their decision to provide free or fee-paying care. We're executing personnel of the Ministry of Health, so when it's a case like that [of circumventing practices], it's a little bit very difficult, it's a little bit very difficult. (HP7) Me, personally, that's me, I applaud the user fee abolition policy. Why am I praising it today? Because it is when it went free that I understood how poor the people were. (HP5) In such a context of poverty, many caregivers report their own inability to strictly enforce eligibility criteria, especially among the most vulnerable households. Sometimes you look at someone, if you see that it's still not going well, you feel obliged to help, to include the patient in free healthcare so that they can benefit. Some patients, when they come, even five francs [0,01 USD], they don't have that. (HP1) A case of malnutrition like this, maybe the child is over 59 months, but physically he/she is not well … . Well, in these cases like this, it happens to bypass, I think in some situation it is legitimate to take appropriate measures. (HP9) Some also refer to their professional ethics, and justify their decision to extend free healthcare to ineligible patients in view of their commitment to alleviating the suffering of individuals. The mother insists that her child is five years old, you know the child is over five years old, but you can't leave the child suffering … and the mother hasn't brought anything [any money]. (HP10) Finally, many mentioned their close involvement in the community, and the need to maintain a good relationship with its members. Refusal to provide free care can lead to conflict or fear of having a bad reputation. In some cases, this motivates flexibility on the part of caregivers. This perspective is all the more present as caregivers recognize that circumventing practices are sometimes legitimate, as they are caused by flaws in the system (e.g. stock-outs of medications). If there's no notebook, it gets complicated (laughs) … It leads to arguments. Maybe even a fight, if she pretends that it's her child and that you have to provide care for free, and that's it! So much so that the health workers will end up treating for free. You have to end up treating her child for free to buy some peace. (CG5) Imagine, if there's a drug stock-out here, we say to them that we have to go elsewhere, since I can't deprive him since he's eligible for free healthcare … If I give him a prescription so that he can go and buy the medication in a pharmacy, it's as if the free healthcare policy is not effective in our health area. (HP5) Adaptive measures by healthcare providers Health staff have adopted various informal practices to overcome these tensions. One of them is to systematically demand the health booklet or even the birth certificate of young children, so that their age can be verified. Also, they may give only part of the treatment and require the patient to return several times to take the remaining doses, officially to ensure close monitoring of the patient's condition. In some cases, the administration of the treatment may even be directly observed. There are others even who bring the child, they know that the child is over five years old, but in order to take advantage, they will say that the child is under five. If we see that the age of the child will lead to too much discussion, we say to send the child's birth certificate so that we can have the discussion. 
(HP1) Now instead of one box, I give a blister [pack of pills] and schedule an appointment two days later to see if the child has taken the products; sometimes for the antibiotics, it is an 8-day treatment, so instead of giving two boxes, I give one box, I keep the other one here; she takes the first and she comes back later to take the other one. (HP2) However, healthcare personnel also adopt conciliatory practices. For example, they sometimes use the "ear technique" (i.e. excluding from free admission children who are capable of grasping the opposite ear with one hand while passing their elbow over their head) to facilitate the inclusion of small children in free admission even if they do not have their health booklet. They are also flexible with regard to the cut-off points for eligibility (59th month for children, 42nd day for postpartum), knowing that these have been chosen arbitrarily and are not easily identifiable by the population. They also emphasize raising awareness among mothers about free access, particularly with regard to the dangers of giving medicines to persons other than the patient, the importance of respecting the dosage and of bringing children quickly to a consultation in the event of fever. You treat these cases as if they were under five. These cases are children who are a few months older than five years, so you tell yourself it doesn't matter, with the difficult living conditions of the parents in the village, it's not often easy. (HP10) Often we find children who are over only by one month, we give care for free […] There is room for manoeuvre. (HP9) Discussion This study shows for the first time that the introduction of a policy of free healthcare raises ethical issues experienced by (non-)beneficiaries and providers, particularly with regard to compliance with eligibility criteria. Circumventing practices illustrate how difficult it is to justify, among the poorest communities, reserving free healthcare only for well-defined sub-categories of the population. While the very delimitation of these criteria, based on the priorities of international organizations, raises ethical issues and reflects power asymmetry, our approach here has rather consisted of exploring the experience of the actors involved in its implementation, at the interface of the patient, the caregiver, and the healthcare provider. Interviews confirm the presence of some circumventing practices that have been observed in other studies of direct payment exemption policies for healthcare (Druetz et al., 2015;Qiu et al., 2018;Ridde & Diarra, 2009). The mothers' motivation to employ these practices stems from three factors. The first and most significant explanation relates to the extreme economic vulnerability of some households, which simply do not have the means to cover the remaining fee-paying healthcare services. While some studies have attempted to quantify the money saved per household through free health care (Abdou Illou et al., 2015), it should be noted that the elimination of direct payment does not automatically build up a financial cushion in households, which could then be used to pay for the healthcare of ineligible persons. One cannot set aside resources that one does not have, and it is a mistake to think that free healthcare allows the poorest households to save money for the future. Second, circumventing practices are associated with low decision-making autonomy of mothers and their lack of control over household resources. 
These findings corroborate those of a recent systematic review, which highlighted the fact that mothers do not have control over financial resources in the household, and must therefore negotiate with the husband in order to pay the costs associated with healthcare (Beaujoin et al., 2021;Plouffe et al., 2020). Since the burden of caring for children rests mainly on them, mothers try to stretch the criteria for eligibility for free care rather than helplessly witness the progression of disease in their children. Finally, some participants mentioned a certain disagreement with the eligibility criteria that arbitrarily define time windows of life during which an individual can benefit from healthcare for free. Without understanding why a pre-existing health problem could not be treated free of charge during pregnancy, or why children over the age of five are no longer eligible for most health programmes, they feel justified in relaxing, or even extending, the eligibility criteria. Their concerns echo several studies that have raised the urgency of no longer neglecting the health needs of other vulnerable categories of the population, such as children aged 5-14 years or adolescents (Bhutta et al., 2019;Masquelier et al., 2018). Similarly, several feminist studies have highlighted the way in which the international community has equated women's health with maternal health and has reduced women to their wombs, particularly in the context of the Millennium Development Goals (Harman, 2012;Thomas, 2003;Tiessen, 2015). These issues are known to health providers, who perceive the lack of agency of the beneficiaries and are sensitive to the households' economic vulnerability. This situation places them in an ethical, even deontological dilemma as clinicians: they have a duty to treat and relieve the suffering of patients, but also to ensure that the official guidelines issued by the Ministry of Health are respected. This situation leads them to be flexible, even if it means that they have to be less compliant with the eligibility criteria. Studies have shown the presence of similar practices and dilemmas experienced by clinicians in high-income countries who, in the presence of vulnerable and uninsured patients, modify their reports to make them eligible for exemptions or reimbursements (Hurst et al., 2005;Weiner, 2001). It should be noted, however, that such ethical issues encountered by healthcare workers have rarely been studied in low-income countries, let alone in rural primary healthcare settings (Sippel et al., 2015). Our results suggest that clinicians are confronted with them in an even more blatant manner since they often reside in the community and share the living conditions of its members. In order to reduce these ethical tensions and to avoid conflicts with the community, clinicians have introduced conciliation practices (close follow-up of patients, directly observed treatment, etc.) which make it possible to limit circumventing practices without being totally inflexible. These modes of reinvention are important to allow the adaptation of the free policy to the local context, promote its acceptability, and optimize its effectiveness under routine implementation conditions on a national scale (Perez et al., 2011). However, the room for reinvention is limited and does not enable them to overcome a major problem associated with the implementation of free healthcare policies, namely the frequent stock-outs of medicines (Hatt et al., 2013). 
In a mutual cognitive process, these stock-outs justify both the beneficiaries' circumventing practices and the providers' flexibility in strictly complying with eligibility criteria. Ultimately, and although it raises ethical issues, the user fee abolition policy does not raise feelings of injustice or anger in the population. Its imperfections do not prevent the user fee removal policy from receiving the support of beneficiaries and health personnel, despite the increased workload of the latter. Its benefits on maternal and child health are unanimously recognized, which reinforces the expectations of its extension to other target populations (horizontal scale-up) or to other types of care such as family planning (diversification or functional scale-up) (Simmons et al., 2007). Abolishing direct payment in health facilities appears to be the most effective and feasible method of improving financial access to health services. Indeed, experiments of community-based insurance mechanisms proved unsuccessful to reduce out-of-pocket expenditures for health in rural populations in Burkina Faso (Fink et al., 2013). Also, initiatives that removed direct payment only for indigents faced numerous issues and were insufficient to ensure equitable access to healthcare in rural settings (Atchessi et al., 2016;Kadio et al., 2018). Although the cost of such a national policy of free access is not negligible (∼55 million USD in 2018), avenues can be explored to ensure its sustainable financing and improve universal health coverage Kutzin et al., 2016;Till et al., 2017). Some limitations of the study should be considered in interpreting the results. First, the study was conducted in a limited number of villages (5), all located in one district. As such, it does not claim to be representative of general perceptions and practices across Burkina Faso. However, the research team has been conducting research on the user fee abolition policy (and, before that, on pilot projects) for many years in Burkina Faso, and there is no indication that the problems related to the respect of eligibility criteria are different in other regions. To the best of our knowledge, the selected communities in the study area are representative of the rural setting in Burkina Faso. A social desirability bias and fear of negative repercussions may have affected the participants' responses during the interviews, particularly because of the sensitive nature of the subject (circumventing practices can be perceived as fraud) (Miles et al., 2014). To minimize bias, we used an experienced interviewer who speak the local language and was accustomed to dealing with sensitive topics. Also, participants were repeatedly assured of the confidentiality of their responses and the absence of potential negative repercussions. Conclusion Burkina Faso is one of the first countries in sub-Saharan Africa to have introduced a national policy of free healthcare for children under five and pregnant women. While considerably improving access to healthcare for a significant proportion of the population, financial barriers remain for those who are not eligible, which raises ethical issues for caregivers within the most vulnerable households and for providers. This study shows how these individuals have adapted their practices regarding compliance with eligibility criteria, leading to a local reinvention of the free healthcare policy. 
These findings also draw attention to the shifting burden of healthcare costs on children aged five years and older, who remain ineligible for many public health interventions. Data availability All audio recordings can be made available upon reasonable request by contacting the corresponding author. Authors' contributions TD and AB2 conceived the study. TD, FB, and AB1 were involved in data collection. AB1 and TD analysed the data. AB1, CT, FB and TD interpreted the results. AB1 and TD wrote the first draft. All authors read the first draft, contributed to improving it, and approved the final version. AB1 refers to A Bila, AB2 refers to A Bicaba. Disclaimers The views expressed in the submitted article are the authors' own and not an official position of the institution or funder.
2021-08-18T05:38:27.115Z
2021-08-13T00:00:00.000
{ "year": 2021, "sha1": "d434c3a093256d77f6d705f02068429433a4ce9f", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/11287462.2021.1966974?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d434c3a093256d77f6d705f02068429433a4ce9f", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
218582687
pes2o/s2orc
v3-fos-license
Parallel epidemics, or nearly so: Certainties and uncertainties about SARS-CoV-2 in Italy
© 2020 Elsevier B.V. All rights reserved.
Fig. 1 – Overall trend of the epidemic in Italy as of April 18, 2020 with respect to progressive Governmental restrictions (shutdown of schools and universities; Lombardy isolation rules extended to Italy as a whole; commerce shutdown except for essential goods providers), showing new day-to-day cases from February 24 to April 19, total discharged (healed) and total deceased (Source: Data from the Ministry of Health processed by the ISS, modified) [10].
Fig. 2 – New day-to-day SARS-CoV-2 swab-positive patients (left panel), death cases (right panel) and moving average of healed individuals on a continuous 5-day basis (middle panel) as of April 18, 2020 (Source: Data from the Ministry of Health processed by the ISS, modified) [10].
Fig. 3 – Admission trends to ICUs (left panel; the arrow shows the peak level reached on April 3) and ordinary wards (right panel; the March 29 arrow shows the 29,010-patient peak level) (Source: Sky TG 24 on data from the Ministry of Health processed by the ISS, modified) [11].
Table 1 – Absolute number of SARS-CoV-2 infected individuals as of April 18, 2020 in Lombardy compared to Lazio (Source: Data from the Ministry of Health processed by the ISS, modified) [10].
Table 2 – Demographics from Lombardy and Lazio: Lombardy – population 10,060,574, surface 23,863.65 km²; Lazio – population 5,879,082, surface 17,232.29 km².
Keywords: COVID-19; Italy; Diabetes mellitus; Comorbidities; Chronically ill patients
ABSTRACT Between the end of the second and the beginning of the following century BC, the Greek philosopher Plutarch wrote "The parallel lives", a series of twin biographies, each comparing a Greek and a Roman prominent figure as regards life events, vices and virtues. Now that our world is facing the SARS-CoV-2 pandemic, with its relevant country-specific timing and evolution pattern characteristics, we feel like repeating Plutarch's parallelism experiment by analyzing factors eventually influencing its expression in different and far afield countries and by trying a head-to-head comparison between two main Italian regions. The history of the COVID-19 pandemic is contributed to by a series of events occurring between the end of 2019 and the beginning of 2020, caused by SARS-CoV-2 virus infection, starting in Wuhan, China, and spreading throughout the world thereafter [1,2]. Event sequence The first COVID-19 infection case, referring to a 55-year-old man in Hubei province, was recorded on November 17, 2019 [1,2]. In the very beginning the agent was not identified as a new type of coronavirus, so that the news was reported by the Chinese government only on January 13, 2020 [3]. Immediately after, the epidemic became apparent, so that Hubei province and the town of Wuhan were isolated and some 60 million people underwent a strict, army-secured quarantine which guaranteed the outbreak switch-off in about two months. Italy recorded its first COVID-19 pandemic event on January 31, 2020, when two tourists coming from China were found to be virus-positive in Rome [4]. On February 20, new outbreaks were detected in Codogno, Lodi province, Lombardy region, in terms of 16 infected people, summing up to 60 on the following day and causing the first deaths soon after [5][6][7][8]. The patient identified as Case 1 was a 38-year-old man from Codogno having respiratory tract infection signs. His wife and a close friend were also found COVID-19-positive [5], so that 13 municipalities in Lombardy and one in Veneto were immediately isolated to fulfil contagion containment procedures [9]. Figs.
1-3 depict the evolution pattern of the epidemic in Italy throughout April 18, 2020 showing some slight positive signals, despite an overall long-lasting trend. When referring to the initial cases recorded in Lazio and in Lombardy, the epidemic has consistently followed parallel trends in the North (quite turbulent) and in the Center-South of Italy (much slower) and now, when freezing the official picture derived from data of the Ministry of Health processed by the Istituto Superiore di Sanità (ISS) as of April 18, 2020 (Table 1), we detect a huge difference between the two regions. Demographic data is quite different and strongly in favor of Lombardy in terms of number of provinces, municipalities, inhabitants, geographical surface and population density (people/km 2 ) [12]. In greater detail, when looking at results from the two regional capitals, accounting per se for the highest population density and consequently most relevant contagion spread risk, a paradox becomes immediately apparent: despite having fewer inhabitants than Rome (1,352,000 versus 2,873,000), Milan has suffered from a fivefold infection rate so far (see Table 2). Which reason might underly such a clear difference? Despite discrepancies in timing, modality and rule severity, China and Italy adopted similar containment procedures, yet the spread of contagion was much faster in Northern than in Southern Italy (almost invariably comparable to Lazio). It is quite evident from Wuhan experience that the stricter the isolation rules for assessed or potentially infected people, the easier it is to prevent SARS-CoV-2 spreading. This was the case in Rome where, immediately after being diagnosed, the two Chinese infected persons were admitted into the Spallanzani Hospital, a 40-year-renowned research and care center for infectious diseases and all their travel-mates and contacts were identified and isolated into the Cecchignola military citadel on the outskirts of Rome. All over Lombardy, instead, due to the lack of symptoms of infected people for several days or even weeks, virus spread was much wider, thus reaching out to Veneto and Emilia-Romagna regions, and presumably required longer to develop earlier symptoms after being infected. This might be explained by the fact that the two Chinese cases in Rome were older than Case 1 in Codogno (i.e. 68 vs 38 years of age). The former followed an easily traceable tourist itinerary while the latter moved around quite easily and extensively among regions making it impossible to track down all contacts. Moreover, the evidence so far seems to indicate that, besides being burdened by higher mortality due to the frequent coexistence of other diseases, older subjects are more easily infected and severely ill and this might have caused weakness symptoms quite soon preventing the Chinese patients in Rome from moving around [13,14]. There are also anecdotical reports concerning a shorter incubation period in the elderly which might add to the above, yet further studies are still needed to validate such hypothesis. Are there any other possible explanations for accelerated coronavirus spread in Lombardy? Indeed, the most affected area in the region, including Milan and its hinterland, is quite rich in farms and industries asking for intensive daily commuting and in the early phases of the epidemic most activities were still in place and partially stopped only about two weeks later. 
In fact, beginning of March, after the outbreak had already shown its threatening potential, only ''crucial" activities were allowed to go on including food and drug chains, logistic support providers, general practitioner (GP) offices, which anyway involved some 40% of the working population [14]. So considerable train, metro and bus commuting, relatively late recognition and multiple outbreak sites might have favored such a large infection spreading pattern in Northern Italy. Two more observations might also be taken into account, albeit requiring further validation by controlled studies: (i) a large number of treatment-resistant pneumonia events occurring already in January were reported by GPs a posteriori, i.e. after SARS-CoV-2 outbreak, and might reflect the presence in the area of potentially unrecognized carriers responsible for the exponential spreading of coronavirus infection [15] and (ii) more severe fine dust pollution and less favorable hygrometric and barometric environmental conditions characterize the whole Po Valley as compared to the rest of Italy, as also apparent from European Space Agency reports [17,18]. Finally, as far as type 2 diabetes mellitus (T2DM) is concerned, we now have to underline that, as recently pointed out by two Italian groups [15,19] and reported by Chinese scientists directly involved in the management of COVID-19 infection [20,21], the disease is extremely frequent and associates with a higher mortality rate in hospitalized patients (Table 3) [22]. Such phenomenon, which was further confirmed in Italian patients by the most recent bulletin from the ISS, had become apparent, despite different prevalence estimates, since the very beginning of the epidemic in China [15]. Diabetes and uncontrolled glycaemia had already been reported as significant predictors of severity and deaths in patients infected with different viruses as well, including the 2009 pandemic influenza A (H1N1) [23], SARS-CoV [24] and MERS-CoV [25]. SARS-CoV-2 Infection in individuals with DM is expected to trigger higher stress conditions, with greater release of hyperglycemic hormones, e.g., glucocorticoids and catecholamines, leading to increased blood glucose levels and abnormal glucose variability [26]. On the other hand, according to a retrospective study from Wuhan some 10% COVID-19-positive patients with T2DM suffered at least one hypoglycemic event (<3.9 mmol/L) [27]. Notoriously, hypoglycemia mobilizes pro-inflammatory monocytes and increases platelet reactivity, thus contributing to a higher cardiovascular mortality in patients with DM [28]. In addition, hyperglycemia and insulin resistance enhance the build-up of glycosylation end products (AGEs) and proinflammatory cytokines causing severe oxidative stress and increased inflammation-related adhesion molecule release [29][30][31]. All this may in fact contribute to the mechanisms underlying the greater susceptibility to infections and the worse outcomes thereof observed in patients with DM [30]. Mostly based on in vitro studies, hyperglycemia associates with several immune system defects, including inhibited lymphocyte proliferative response to different kinds of stimuli [32,33], and impaired monocyte/macrophage and neutrophil functions [30]. In vitro studies have shown that pulmonary epithelial cells exposure to high glucose concentrations significantly increases influenza virus infection and replication, pointing to hyperglycemia-enhanced viral replication in vivo [34]. 
In animal models, structural lung changes have been related to diabetes, such as augmented vasculature permeability and collapsed alveolar epithelium [35]. Finally, abnormal delayed-type hypersensitivity reaction and complement activation dysfunction [36] have been reported in patients with DM. All of the above suggests that the greatest possible attention should be paid to the high risk for more severe lung involvement pending on people with DM during COVID-19 infection, due to a significantly lower forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) in the presence of raised glucose levels [37]. So which pathogenetic elements underly higher susceptibility to COVID-19 disease and mortality rate in people with DM? The virus itself and its toxins precipitate a cytokine thunderstorm exacerbating hypercoagulation in patients with DM [20,34] who, by definition, face a clear-cut prothrombotic state [35], further aggravated by chronic low-grade inflammation causing atherosclerosis-related endothelial dysfunction and insulin-resistance associated arterial hypertension [36]. All above-mentioned phenomena are known to become more and more severe as age increases and this is in line with the observation that octogenarian individuals are mostly affected by SARS-CoV-2 disease [19]. With respect to that, an issue deserving special attention is the quite similar SASR-CoV-2 infection rate in people without or with DM [25] in front of a much greater severity of the disease as reported by the Italian ISS in the latter. At this point we also feel like underlining that some hypoglycemic treatment strategies might turn out to be protective. Dipeptidyl-dipeptidase-4 (DPP-4) was shown, in fact, to be the primary receptor of MERS-CoV [37]: the possibility that DPP-4 also acts as a receptor for SARS-CoV-2 warrants investigation and, should this be the case, in agreement with the hypothesis put forward by Iacobellis [38], DPP-4 inhibitors, wellrenowned treatment tools in T2DM, should be explored for their anti-viral potential in the human [15]. Opposite to that, it has been recently hypothesized that Sodium-Glucose-Transporter-2 inhibitors (SGLT-2i), Glucagon-Like-Peptide-1 Receptor Agonists (GLP-1RAs), Pioglitazone and even Insulin might induce an over-expression of the ACE2 receptor which was also found to bind to coronaviruses in the alveoli [39], and therefore increase the risk for a more severe expression of SARS-CoV-2 infection in people with diabetes [40]. Corticosteroid utilization also requires special attention with respect to COVID-19 disease severity in people with DM [41]. Acute lung damage and acute respiratory distress syndrome (ARDS) are partly due to the host immune response and corticosteroids were broadly used in SARS-CoV and MERS-CoV infections [42,43] due to their ability to suppress histologically proven virus-dependent lung inflammation with diffuse alveolar damage [44]. However, they also inhibit immunity and pathogen clearance and, in the face of very little benefit if so ever, have been associated with delayed viral RNA clearance or increased mortality and rate of complications, including diabetes, psychosis, and avascular necrosis [42]. 
Due to all the above, the interim guidance from the WHO on clinical management of severe acute respiratory infection advises against the use of corticosteroids upon suspicion of SARS-CoV-2 infection outside clinical trials [45], yet corticosteroids were extensively utilized before that in at least 34% of the large number of Northern Italian hospitalized patients [46]. When analysing the growth curve of coronavirus infection in Italy, the massive referral to the hospitals of so many people within a very short period strikes the eye when compared to the small number of hospitalized patients in the South (Fig. 4). In the stormy and overcrowded emergency conditions faced by health professionals at the start of the epidemic, in the absence of any experience with treating COVID-19 disease, a broader recourse to high doses of corticosteroids most likely occurred than later on, when in fact better treatment strategies were identified and emergency departments were less congested. It can be assumed that the above-mentioned logistic and environmental factors strongly affected the clinical course and mortality rates of COVID-19-infected people, and even more so in those with DM. Indeed, Fig. 4 clearly shows that the death rate was significantly higher in infected Northern Italy inhabitants than in their Southern counterparts (i.e. 17.2% vs 5.5%, respectively). According to current protocols, subject to the exceptions needed for treatment personalization, the use of corticosteroids is extremely limited, if used at all. This might also help healthcare teams adopt an intensive, fully integrated therapeutic approach for their patients with DM, especially those in the ICUs, involving drugs expected to best prevent both hypoglycemic events and dramatic hyperglycemic spikes contributing to marked glycemic variability, i.e. a severe mortality risk driver [47] (see Fig. 5). A final report coming from TOSCA, a recent nationwide study on people with T2DM, a disease known to affect mostly older patients who, in turn, are at higher risk for severe COVID-19 disease, shows that high adherence to the Mediterranean diet is significantly more frequent among people living in the southern regions of Italy [48]. The latter, however, display a slightly higher phenolic acid and lignan intake, due to a higher consumption of cereals and legumes, and a slightly lower total polyphenol intake as flavonoids and stilbenes, due to lower wine consumption [49]. This takes on a particular significance when comparing the absolute and relative death rate of subjects with T2DM in Lombardy (11,384 deaths; 56.9% of infected individuals) to that observed in Lazio (259 deaths; 1.3% of infected individuals). Might this add to the picture? At the moment, ours can only be taken as speculation, but we strongly believe that lifestyle components also deserve further investigation with respect to COVID-19 disease.
In conclusion, based on official Ministry of Health and ISS statements we are fully aware that: a strikingly different trend was observed for SARS-CoV-2 infection between Northern and Southern Italy; strict containment rules are the best way to prevent SARS-CoV-2 epidemics from spreading; territory-oriented medicine might play an effective guardian role and should therefore be listened to and strengthened; people with T2DM are more susceptible to SARS-CoV-2 infection, with twice as high mortality risk as metabolically healthy people; high amounts of polyphenol-derived antioxidants from cereals and legumes typical of the Mediterranean diet might help to stop SARS-CoV-2 infection from spreading. To borrow from Plutarch, the conditions described above are asymmetrically parallel. So, asymmetry proved to be detrimental to Italy, and especially its Northern part and to somewhat spare the Central and Southern areas so far, whose ancient Greek culture-derived lifestyle might have positively, albeit almost unexplainedly and inadvertently, influenced the susceptibility to coronavirus infection. Funding The paper was supported by a non-conditioning special grant of NYX startup, Naples, Italy Authorship: All authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and that it will not be published elsewhere in the same form, in English or in any other language, including electronically, and have given their approval for this version to be published. Authorship Contributions: SG, and FS created the paper and wrote it, AM provided the needed resuscitation experience and extensively reviewed the literature; all approved the final manuscript. Compliance with ethical standards: Ours was a spontaneous, unconditioned study. Ethical standard: This study was conducted in conformance with good clinical practice standards. The study was led in accordance with the Declaration of Helsinki 1975, as revised in 2013. Human and animal rights: This article does not directly use experimental data on humans or animals, but reports data derived from the literature. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2020-05-12T13:03:43.716Z
2020-05-11T00:00:00.000
{ "year": 2020, "sha1": "3649c1c9ea7df5bf33507851cf584a6fff966ce8", "oa_license": null, "oa_url": "http://www.diabetesresearchclinicalpractice.com/article/S0168822720304459/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "3d3eff6403da0b1404193787eac9369fa3a57d1f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51909960
pes2o/s2orc
v3-fos-license
Clinical usefulness of transarterial chemoembolization response prior to liver transplantation as predictor of optimal timing for living donor liver transplantation Purpose Response to preoperative transarterial chemoembolization (TACE) has been recommended as a biological selection criterion for liver transplantation (LT). The aim of our study was to identify optimal timing of living donor liver transplantation (LDLT) after TACE based on the TACE response. Methods We performed a retrospective study to assess recurrence in 128 hepatocellular carcinoma (HCC) patients who underwent LDLT following sequential TACE from January 2002 to March 2015 at a single institute. Cox proportional hazard models and Kaplan-Meier analysis were utilized to estimate HCC recurrence and find optimal timing for LDLT. Results Seventy-three and 61 patients were divided as the responder and nonresponder, respectively. Multivariate analysis showed independent pre-liver transplantation (pre-LT) predictors of recurrence were larger sized tumor (>3 cm, P = 0.024), nonresponse to TACE (P = 0.031), vascular invasion (P = 0.002), and extrahepatic nodal involvement (P = 0.001). In the 3-month time difference between last pre-LT TACE and LDLT subgroup, TACE responders showed significantly higher adjusted hazard ratio (aHR) of recurrence free survival (aHR, 6.284; P = 0.007), cancer specific survival (aHR, 7.033; P = 0.016), and overall survival (aHR, 7.055; P = 0.005). Moreover, for overall patients and responder groups, the significant time difference between last pre-LT TACE and LDLT was 2 months in the minimum P-value approach. Conclusion In selected patients who showed good response to pre-LT TACE, a shorter time interval between TACE and LDLT may be associated with higher recurrence free survival, cancer specific survival, and overall survival. INTRODUCTION Since deceased donor liver transplantation (DDLT) for the treatment of small hepatocellular carcinoma (HCC) was first reported by Mazzaferro et al. [1] in 1996, liver transplantation (LT) has been considered the treatment option providing the best chance of a cure for unresectable HCC with liver cirrhosis. In most Asian countries, although there have been various issues regarding optimal tumor criteria selection for LT [2][3][4][5][6], an extreme shortage of deceased donors and the strong clinical needs for LT in patients with combined HCC and chronic HBV has led to the establishment of living donor liver transplantation (LDLT) as a practical alternative to DDLT [7,8]. Transarterial chemoembolization (TACE) is a key bridging and downstaging treatment for unresectable HCC in patients considering LDLT as well as patients on the waiting list for DDLT [9][10][11].
Response to TACE prior to LT has been suggested as a biological selection criterion for LT or a predictor of long-term outcome after LT [12][13][14][15]. However, the clinical impact of pre-LT TACE response has not yet been validated in LDLT recipients. In addition, for patients with advanced HCC who underwent TACE, the decision for, and optimal timing of, LDLT as a definitive treatment option has not been well established. We identified predictors affecting recurrence of HCC after LDLT in patients undergoing TACE prior to LDLT and assessed whether poor response to pre-LT TACE was a risk factor for recurrence of HCC in LDLT recipients, similar to the case of DDLT recipients. We also investigated the clinical usefulness of pre-LT TACE response in terms of determining the optimal timing of LDLT in patients who underwent TACE as a bridging and downstaging treatment for unresectable HCC. Study design and population We retrospectively assessed the data of 527 patients who underwent LDLT for HCC at a single institution during the period between January 2002 and March 2015. Three hundred sixty-five patients underwent treatment for HCC prior to LDLT. Those patients who underwent liver resection (LR, n = 10), radiofrequency ablation (RFA, n = 29), and more than two modalities of combined treatment such as RFA, LR, and RT (n = 192) were excluded from this study. Finally, 134 patients who underwent TACE only before LDLT were included in this study (Fig. 1). The following characteristics for these 134 patients were reviewed: demographic factors (age, sex, etiology, Child-Turcotte-Pugh grade, model for end-stage liver disease score, α-FP at the time of transplantation, and TACE numbers), radiologic factors (within or beyond Milan criteria based on tumor size and number using computed tomography, bilobar distribution), and pathologic factors (tumor differentiation, vascular invasion, intrahepatic metastasis, portal vein thrombosis, and tumor necrosis). In addition, TACE-associated factors were reviewed: number of TACE cycles and time-related variables such as diagnosis-LDLT time (monthly duration from diagnosis of HCC to LDLT), first TACE-LDLT time (monthly duration from initiation of TACE to LDLT), and last TACE-LDLT time (monthly duration from termination of TACE to LDLT). This study was approved by the Institutional Review Board (IRB) of Samsung Medical Center (approval number: 2014-11-060) and informed consent was waived by the IRB. Statistical methods Continuous data were presented as medians with ranges. Categorical data were specified as numbers and percentages. Univariate and multivariate analyses for factors affecting recurrence of HCC following LDLT were conducted using a Cox proportional hazard model. In addition, Cox regression was used to calculate the adjusted hazard ratio (aHR) of recurrence free survival (RFS), cancer specific survival (CSS), and overall survival (OS) for each subgroup.
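To make the Cox regression step just described easier to follow, the sketch below shows how such a model of recurrence-free survival might be fitted and adjusted hazard ratios read off. It assumes the Python lifelines package; the data frame, the column names (time_months, event, nonresponder, tumor_gt3cm, vascular_invasion) and the small ridge penalty are purely illustrative and are not the authors' actual analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative data: one row per LDLT recipient.
# time_months : follow-up time to HCC recurrence or censoring (months)
# event       : 1 = recurrence observed, 0 = censored
# covariates  : binary indicators mirroring the multivariate model in the text
df = pd.DataFrame({
    "time_months":       [12, 34, 8, 60, 25, 41, 5, 48, 18, 55],
    "event":             [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
    "nonresponder":      [1, 0, 1, 0, 0, 0, 1, 1, 0, 1],
    "tumor_gt3cm":       [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "vascular_invasion": [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
})

# A small ridge penalty keeps this tiny toy example numerically stable.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="time_months", event_col="event")

# exp(coef) is the adjusted hazard ratio for each covariate, with its p-value.
print(cph.summary[["exp(coef)", "p"]])
```

The same fitted model can be reused for each subgroup by filtering the data frame before calling fit, which is one simple way to obtain subgroup-specific aHRs as reported in the Results.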
The "minimum P-value" approach was used to determine the best cutoff timing for LDLT after TACE [16]. A P <0.05 was considered a statistically significant. Data handling and analyses were carried out using the IBM SPSS Statistics ver. 22.0 (IBM Co., Armonk, NY, USA). Table 1 shows the demographic and clinical characteristics in the study cohort of 134 patients. Median age at transplantation was 54 years (range, 30-77 years). Male and female were comprised of 114 (85%) and 20 patients (15%), respectively. Recipients who underwent ABO incompatible LDLT were 11 patients (8.2%). The most common cause of cirrhosis was HBV (86.6%). The majority of patients (72.4%) had HCC as defined by Milan criteria. The tumors in the explant liver of 23 patients were totally necrotic and thus were unable to be assessed for tumor differentiation. In the study cohort of 134 patients undergoing TACE prior to LDLT, median clinical We determined tumor response to TACE according to the last radiologic assessment before LDLT based on modified response evaluation criteria in solid tumors (mRECIST) criteria [17]. Forty- , P = 0.050; OS, P = 0.035) and PD (DFS, P = 0.008; OS, P = 0.010) patients, but were not significantly different from PR (DFS; P = 0.809, OS; P = 0.586) patients ( Fig. 2A, B). In order to perform univariate and multivariate analyses regarding the effect of TACE response on recurrence of HCC after LDLT, patients were divided into 2 groups according to tumor response to TACE prior to LDLT: responders (n = 73) and nonresponders (n = 61). Responders included patients with CR or PR. Nonresponders included patients with SD or PD (Fig. 1). Optimal cutoff timing for LDLT in HCC patients undergoing pre-LT TACE Of the 134 patients, 29 patients (21.6%) experienced an HCC recurrence. According to the monthly time from last TACE to LDLT, numbers of patients who experienced recurrence after LDLT were distributed as in Fig. 3A. Of 29 patients with HCC recurrence, 25 patients (86.2%) were recipients undergoing TACE 12 months before LDLT. In all patients, the most common time from TACE to LDLT based on HCC recurrence was 2 monthsspecifically more than 2 months but less than 3 months (Fig. 3A). We divided the 134 patients undergoing TACE prior to LDLT into 2 groups; the group who underwent pre-LT TACE in each month (group A) and the group who did not (group B). The "minimum P-value" approach, which was performed using a log-rank test for comparison of the RFS between groups A and B, was used to determine the best cutoff timing for LDLT after TACE. In all patients, the optimal timing of LDLT after TACE based on the significant difference between groups A and B was 2 months (P = 0.022) (Fig. 3A). In the 73 patient TACE responder group, more than 2 months but less than 3 months after TACE was the best cutoff timing based on the significant difference between groups A and B (P = 0.016) (Fig. 3B). However, there was no significant cutoff value between groups A and B according to the log-rank test (Fig. 3C). DISCUSSION Although the major issue regarding the optimal selection criteria of LT for HCC still remains, LT has become the effective treatment for selected patients with unresectable HCC. TACE is mainly used as bridging therapy for HCC patients awaiting LT to prevent tumor progression and wait-list dropout and improve posttransplant survival [9,10]. In addition, response to TACE has been proposed as an LT biological selection criterion for HCC because it may predict long-term outcome after LT [12][13][14]. 
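As an illustration of the "minimum P-value" cutoff search referred to above, the following sketch scans candidate values of the last TACE-to-LDLT interval, compares recurrence-free survival on either side of each candidate cutoff with a log-rank test, and keeps the cutoff with the smallest P-value. The lifelines package and all data values are assumed for illustration only and do not reproduce the study data.

```python
import pandas as pd
from lifelines.statistics import logrank_test

# Illustrative data: months from last pre-LT TACE to LDLT, recurrence-free
# survival time, and whether HCC recurrence was observed (1) or censored (0).
df = pd.DataFrame({
    "tace_to_ldlt_months": [1, 1, 2, 2, 3, 4, 5, 6, 8, 12],
    "rfs_months":          [60, 48, 55, 20, 15, 36, 10, 40, 9, 7],
    "recurrence":          [0, 0, 0, 1, 1, 0, 1, 0, 1, 1],
})

best_cutoff, best_p = None, 1.0
# Try every observed interval except the largest, so both groups are non-empty.
for cutoff in sorted(df["tace_to_ldlt_months"].unique())[:-1]:
    grp_a = df[df["tace_to_ldlt_months"] <= cutoff]  # transplanted within the cutoff
    grp_b = df[df["tace_to_ldlt_months"] > cutoff]   # transplanted later
    res = logrank_test(
        grp_a["rfs_months"], grp_b["rfs_months"],
        event_observed_A=grp_a["recurrence"],
        event_observed_B=grp_b["recurrence"],
    )
    if res.p_value < best_p:
        best_cutoff, best_p = cutoff, res.p_value

print(f"best cutoff: {best_cutoff} months (log-rank P = {best_p:.3f})")
```

As a general caveat, scanning many candidate cutoffs inflates the type I error, so the minimal P-value is typically corrected for multiple testing before interpretation.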
These previous reports' cohorts were mostly patients undergoing DDLT. Thus, the first purpose of this study was to validate the clinical usefulness of the response to pre-LT TACE based on the estimation for risk of recurrence after LT in LDLT recipients. Univariate Cox regression analysis for DFS and OS showed TACE responder groups such as CR or PR showed significantly higher DFS and OS than TACE nonresponder groups such as SD or PD. Multivariate analysis for identification of risk factors affecting recurrence of HCC after LDLT showed nonresponse to TACE prior to LDLT was a significant independent factor as well as larger tumor size, microvascular invasion, and extrahepatic nodal involvement. This study demonstrated that the response to pre-LT TACE reflected HCC biology and could be used to estimate recurrence even in LDLT recipients in previous studies [12][13][14]18]. It is well-known that, for HCC patients listed for LT within Milan criteria, a delay LT over 6 to 12 month is a risk factor for tumor aggravation and wait-list dropout or interval dissemination with posttransplant tumor recurrence [9,19]. In most Asian countries that included our Korea, LDLT was developed as a practical alternative to DDLT because of unmet needs such as shortage of deceased donors and strong demand for LT. Determining the timing of LDLT is clinically crucial for HCC patients undergoing local treatment because this timing can be affected by physician subjective decisions as well as strong requests for LDLT. For this reason, we assessed whether LDLT optimal timing could be determined according to the response to pre-LT TACE. In subgroup analyses adjusted using a Cox proportional hazard model, TACE responders with single TACE had significantly higher aHR than aHR of all recipients. In addition, TACE responders with shorter time-related factors (i.e., diagnosis-LDLT time, first TACE-LDLT time, and last TACE-LDLT time) had significantly higher aHR than all recipients. This indicated that smaller numbers of TACE prior to LDLT and shorter time from TACE to LDLT tended to decrease the risk of recurrence in TACE responders. For recipients with last TACE-LDLT time within 3 months, DFS of responders was 6 fold higher than those of nonresponders. Using the minimum P-value approach with a log-rank test for comparison of the RFS between groups A and B, if last TACE-LDLT time exceeded 2 months, the recurrence rate could be increased significantly in TACE responders. Therefore we suggest that if unresectable HCC patients showed good response to TACE applied as a bridging therapy, LDLT may be performed within 2 months after TACE. This study's main limitation was its retrospective design. Also, a large number of patients undergoing other treatments such as RFA and RT were excluded to reduce confounding factors. Also, time-related factors including diagnosis-LDLT time, first TACE-LDLT time, and last TACE-LDLT time were not significantly associated with HCC recurrence after LDLT even in a univariate analysis of all patients. Finally, this study consisted of a small cohort, especially for patients with HCC recurrence. Hence, the minimum P-value approach was not enough to generalize results regarding optimal LDLT timing. In conclusion, mRECIST-defined TACE response could be used to estimate recurrence of HCC after LDLT in patients undergoing pre-LT TACE. 
For patients with a good response to TACE, shorter LDLT waiting times after pre-LT TACE may be associated with decreased risk for HCC recurrence after LDLT and increased graft and patient survival.
2018-08-14T19:01:43.785Z
2017-07-30T00:00:00.000
{ "year": 2017, "sha1": "40247ed0ac753f79f7c15ca08dc8c5de9f486c9e", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc6073044?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "40247ed0ac753f79f7c15ca08dc8c5de9f486c9e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9287495
pes2o/s2orc
v3-fos-license
Swine Influenza (H3N2) Infection in a Child and Possible Community Transmission, Canada Seropositivity to the same strain was demonstrated in the child and in multiple other community members. I nfl uenza A is endemic in a broad range of species, with avian and swine strains having the greatest potential for transmission to humans. Pandemics of infl uenza A occur when a major change occurs in the proteins of circulating strains of the virus. During the pandemics of the past century, this antigenic shift resulted from reassortment of human and avian strains or adaptation of avian viruses to facilitate person-to-person transmission (1). Avian infl uenza preferentially binds to sialic acid-galactose receptors with an α-2,3 linkage that is abundant on duck intestinal epithelium; human infl uenza preferentially binds to sialic acid-galactose receptors with an α-2,6 linkage that is abundant on human respiratory epithelium. The respiratory epithelium of swine contains both types of receptors and can potentially be simultaneously infected with avian and human infl uenza (2). Human infection with avian infl uenza subtype H5N1 is of great concern, with 194 deaths of 321 cases reported worldwide through August 16, 2007 (3). Swine infected with avian subtype H5N1 have been identifi ed in Vietnam (4), raising the possibility that swine could act as the "mixing vessel" that allows avian infl uenza (H5N1) to reassort with a human infl uenza strain, resulting in a virus with high pathogenicity and a high potential for person-toperson spread. Another theoretical mechanism for the origin of an infl uenza pandemic would be the adaptation of a swine strain that results in effi cient person-to-person transmission, although cross-protection by antibodies to recently circulating human strains may prevent this from occurring with swine infl uenza virus (SIV) H1 and H3 strains. Infection of humans with SIV was fi rst recognized in 1974 with an H1N1 strain (5); the solitary outbreak occurred in military recruits at Fort Dix, New Jersey, USA, in 1976 (6). Human infection with SIV subtype H3N2 was fi rst described in Europe in 1993 (7). The fi rst reported case of probable infection of a person in North America with a non-H1N1 subtype of SIV occurred in Ontario, Canada, in 2005 with an H3N2 strain detected in the respiratory tract of an adult with no serologic evidence of infection (8). We describe a case of SIV (H3N2) infection in a Canadian infant, confi rmed by viral isolation and serologic testing. Swine Infl uenza (H3N2) Infection in a Child and Possible Community Transmission, Canada Case Report A 7-month-old boy was admitted to the hospital on September 10, 2006, with a 3-day history of fever, rhinitis, and cough. He had had no previous contact with ill persons. The child was born at term and was hospitalized for 21 days at 5 weeks of age when he received ventilation for 6 days for pneumonia due to respiratory syncytial virus. He lived on a communal farm (90 occupants) with horses, cows, swine, sheep, dogs, cats, turkeys, geese, ducks, and chickens but had no direct contact with the animals. The swine were contained in barns and did not mix with the other animals. His household contacts did not work directly with animals, but his father occasionally spent time in the barns, and his uncle, who lived next door, worked in the swine barns. On admission, the child was afebrile with a heart rate of 120 beats/min, respiratory rate 56/min, and oxygen saturation of 85% on room air. Diffuse wheeze was noted. 
Chest radiograph results were unremarkable. Direct fl uorescent antibody testing on a nasopharyngeal aspirate was positive for infl uenza A, and the virus was isolated in rhesus monkey cell culture. The isolate was sent to the National Microbiology Laboratory for infl uenza subtyping as a requirement of the Canadian infl uenza surveillance program, where it was subsequently designated A/Canada/1158/2006. The child stayed in the hospital for 2 days and then made an uneventful recovery at home. A cough and rhinitis developed in his 19-month-old brother on the day the index patient was admitted to the hospital, but the brother was not assessed by a physician. Antigenic Analysis For the antigenic characterization of A/Canada/ 1158/2006, hemagglutination-inhibition (HI) assay was performed by using 4 hemagglutination units of virus, 0.7% v/v guinea pig erythrocytes, and postinfection fowl serum specimens for the currently circulating human Molecular Characterization All 8 RNA segments of A/Canada/1158/2006 were amplifi ed by reverse transcriptase-PCR (RT-PCR) and sequenced. A universal primer set for the full-length amplifi cation of all infl uenza A viruses was used for the RT-PCR (10). Viral RNA was extracted from 100 μL of tissue culture fl uid with the RNeasy Mini Kit (QIAGEN, Mississauga, Ontario, Canada). Viral RNA was amplifi ed in a OneStep RT-PCR reaction (QIAGEN) following the manufacturer's recommendations. Briefl y, 5 μL RNA was added to the RT-PCR mixture containing 2 μL QIAGEN OneStep RT-PCR enzyme mix, 10 μL 5× QIAGEN On-eStep RT-PCR buffer, 400 μmol/L dNTP, 0.6 μmol/L of each primer, and 10 μL Q-solution in a fi nal volume of 50 μL. The conditions used for the Gene Amp 97700 (Applied Biosystems, Streetsville, Ontario, Canada) thermocycler were as follows: 50°C for 30 min for reverse transcription, 95°C for 15 min for the activation of the HotStart DNA polymerase; then 35 cycles of 94°C for 20 s, 58°C for 30 s, 72°C for 4 min, followed by an extension of 10 min at 72°C. The PCR products were purifi ed by using QIAquick PCR purifi cation kit (QIAGEN) and sequenced on an ABI 377 Sequencer, using a fl uorescent dye-terminator kit (Applied Biosystems). The DNA sequences were assembled and analyzed with SEQMAN, EDITSEQ, and MEGALIGN programs in Lasergene (DNASTAR, Madison, WI, USA). Phylogenetic trees were generated by the neighbor-joining method using the MEGA program (11). Serologic Testing Once it became evident that A/Canada/1158/2006 was closely related to swine infl uenza viruses, HI was performed on serum specimens collected from the index patient, the symptomatic sibling, and both parents 29 days after the hospitalization. To further investigate the spread of SIV to humans, approval was then granted by the Health Research Ethics Board of the University of Alberta to obtain information and serum specimens from other members of the communal farm. The study team visited the farm 3 months after the hospitalization of the index patient and explained the study to the occupants. Serum specimens were then collected from the other 4 siblings of the index patient and 46 other occupants who lived in a total of 17 households. Participants provided the following data: age, exposure to swine (none, <1 hour/week, or >1 hour/week), and history of infl uenza-like illnesses (ILI; defi ned as cough and fever) in the preceding year. 
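Purely as an illustration of the one-step RT-PCR program described earlier in this section, the cycling parameters can be laid out programmatically and the approximate run time computed (ramp times between temperature steps are ignored); this is not vendor software, and the variable names are ours.

```python
# One-step RT-PCR thermocycler program as stated in the text (times in minutes).
reverse_transcription = 30           # 50 degrees C for 30 min
hot_start_activation = 15            # 95 degrees C for 15 min (HotStart polymerase)
cycle_steps_min = [20 / 60,          # 94 degrees C for 20 s (denaturation)
                   30 / 60,          # 58 degrees C for 30 s (annealing)
                   4.0]              # 72 degrees C for 4 min (extension)
n_cycles = 35
final_extension = 10                 # 72 degrees C for 10 min

total_min = (reverse_transcription + hot_start_activation
             + n_cycles * sum(cycle_steps_min) + final_extension)
print(f"approximate run time: {total_min:.0f} min (~{total_min / 60:.1f} h)")
# Prints roughly 224 min (~3.7 h), excluding ramp times.
```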
Serum samples were tested by using an HI assay against the currently circulating human strains A/New Caledonia/20/99 (H1N1), A/Wisconsin/67/2005 (H3N2), and the isolate from the index patient, A/Canada/1158/2006. HI titers were defined as the reciprocal of the highest dilution of serum that completely inhibited hemagglutination of a 0.7% solution of guinea pig erythrocytes. Specimens were considered seropositive for influenza virus at a titer of >32.

Swine Investigation

The purpose of these investigations was to determine the extent of recent swine influenza in swine on the farm and to look for evidence of infection with the SIV strain isolated from the index child. The history of influenza or unexpected respiratory illness in the swine on the farm was obtained. Nasal swabs were obtained from grower pigs (4 to 16 weeks of age) and processed by RT-PCR for the influenza A matrix gene. Serologic testing for influenza, using an ELISA for H1N1 and H3N2 strains and HI for A/Canada/1158/06, was performed on samples from grower-finisher pigs (12 weeks to 6 months of age). Five grower pigs that were doing poorly were killed, and pulmonary autopsies were performed. All swine used in these investigations were on the farm at the time the index child was ill.

Antigenic and Molecular Characterization of A/Canada/1158/06

Initial HI testing showed that the isolate was not inhibited by antiserum against recent (A/Wisconsin/67/2005 and A/New Caledonia/20/99) and past (A/Panama/2007/99 and A/Nanchang/933/95) human influenza A strains but was inhibited by antiserum against A/swine/Texas/4199-2/98 (H3N2) virus with an HI titer of 128. These findings indicate that the A/Canada/1158/06 virus was antigenically related to SIV (Table 1). The results also indicate that the assay is specific because no cross-reactivity was observed between the human reference strain antiserum and the swine influenza viruses (Table 1). Nucleic acid identity between the HA and NA genes of A/Canada/1158/06 and the current vaccine strain A/Wisconsin/67/05 was 90.9% and 94.6%, and the aa identities were 90.2% and 94.5%, respectively.

Serologic Testing

Seropositivity (HI titer >32) to A/Canada/1158/2006 was demonstrated in the index patient, the symptomatic sibling, 1 asymptomatic sibling, and both parents (Table 2, household A). Three other siblings were seronegative. Four children from 2 other households were also seropositive (Table 2, households B and C); the father from household B, 1 other child from household B, and the mother from household C were seronegative. The father from household C worked in the swine barn but was unavailable for testing. History of ILI within the preceding 12 months in seropositive participants was reported only for the index patient and for a 3-year-old girl from household C who was not hospitalized or tested for influenza virus during her illness. Seronegative results were obtained from another 20 adults (14 women and 6 men) and 19 children (8 girls and 11 boys) from 14 different households. For these households, swine exposure was reported as none for 9 adults and 7 children, <1 hour/week for 11 adults and 8 children, and >1 hour/week for 4 children, including 3 teenagers who worked in the swine barns.
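The titer definition used above is a simple computation on a dilution series. The sketch below is a minimal illustration, with hypothetical dilution data and function names that are not part of the original analysis; note that the text states a >32 cutoff while a titer of 32 is elsewhere counted as seropositive, so a ≥32 threshold may have been intended.

```python
# Minimal sketch: derive an HI titer from a twofold serum dilution series
# and apply the seropositivity cutoff reported in the text (titer > 32).
# Dilution data and function names are hypothetical illustrations only.

SEROPOSITIVE_CUTOFF = 32  # reciprocal titer threshold given in the text

def hi_titer(inhibition_by_dilution):
    """Return the reciprocal of the highest dilution that completely
    inhibited hemagglutination, or 0 if no dilution inhibited it.

    inhibition_by_dilution maps a reciprocal dilution (8 for 1:8, etc.)
    to True/False (complete inhibition observed or not).
    """
    inhibiting = [d for d, inhibited in inhibition_by_dilution.items() if inhibited]
    return max(inhibiting) if inhibiting else 0

def is_seropositive(titer, cutoff=SEROPOSITIVE_CUTOFF):
    # The text elsewhere treats a titer of exactly 32 as positive,
    # so ">=" may be the intended comparison.
    return titer > cutoff

# Example: complete inhibition up to the 1:64 dilution, none at 1:128.
series = {8: True, 16: True, 32: True, 64: True, 128: False}
t = hi_titer(series)
print(t, is_seropositive(t))  # 64 True
```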
When serum samples from the 54 participants in the study were tested for HA-specific antibodies to the current human influenza A virus H3N2 and H1N1 subtypes, one of the patients who was seropositive for SIV at a titer of 32 had an identical titer for A/Wisconsin/67/2005 (H3N2) (Table 2), and one of the adults who was seronegative for SIV had a titer of 32 for A/New Caledonia/20/99 (H1N1) (data not shown). All other persons tested were seronegative for the 2 human strains of influenza. Influenza (H3N2) was last documented in the swine herd in September 2005. The herd received breeding animals from a Manitoba herd, where swine influenza of an unknown subtype had recently been documented. Nasal swabs collected from 25 grower pigs ≈3 weeks after the index child was ill were negative for SIV. Serum specimens obtained from 10 grower-finisher pigs were all negative by ELISA for swine influenza (H1N1), but 4 were positive for swine influenza (H3N2) strains, with 1 of these 4 specimens also being seropositive for A/Canada/1158/2006 by HI assay (HI titer 32). Results of the lung autopsies all showed evidence of subacute bronchointerstitial pneumonia, varying from mild to moderate. Lesions typical for swine influenza were not noted, but an initial insult due to SIV could not be excluded.

Discussion

We describe an infant with virologic and serologic evidence of infection with SIV (H3N2) and an ILI. Serologic evidence of infection with the same strain was found in 4 of 7 household members and in 3 of 46 nonhousehold contacts, with only 1 of the seropositive patients having a history of an ILI within the preceding year, demonstrating unrecognized human infection with SIV. This relatively high seroprevalence is in contrast to a recent outbreak of avian influenza (H7N3) in which seropositivity was not documented in 91 persons exposed to infected poultry, including 2 poultry workers from whom the virus was isolated (12). The difference in the apparent incidence of infection may be explained in part by the fact that culling of infected poultry occurred immediately; in our study, infection of swine was not recognized and long-term human exposure may have occurred. Infection of swine with human influenza viruses has been recognized for decades (2); in a recent US study, 22.8% of pigs were seropositive for human influenza viruses, although some may have had vaccine-induced immunity (13). Swine influenza (H3N2) emerged in 1998 in the United States, where subtype H1N1 viruses had predominated for 60 years (2). The isolate from the current study is closely related to triple reassortant genotype viruses that spread rapidly throughout the US swine population and have HA, NA, and RNA polymerase (PB1) genes of human influenza virus lineage; nucleoprotein, matrix, and nonstructural genes of classic swine influenza (H1N1) lineage; and RNA polymerase (PA and PB2) genes of North American avian virus lineage (8). However, triple reassortant SIV was not documented in swine in Canada until 2005 (8), which makes it unlikely that human cases occurred before that year and that seroreversion had occurred in any of the persons in the current serosurvey. A previous study showed cross-reactivity in HI assay between the vaccine strain A/Panama/2007/99 reference antiserum and the triple reassortant A/swine/Minnesota/593/99, which is not unexpected since the HA gene of the triple reassortant viruses is a descendant of human viruses that circulated in 1995 (14,15).
However, no cross-reactivity was observed between the reference human strain antiserum and the isolate from this study, which suggests that the seroconversion observed was indeed due to infection with swine influenza (H3N2) and not to cross-reactive antibody from human influenza (H3N2) infection. The low rate of seropositivity to recently circulating strains of human influenza in the study is likely explained by the fact that the farm is a relatively closed community. The child who was seropositive for both human and swine influenza viruses was likely exposed to both viruses. strains currently circulating in North America. This region of the protein has been assigned to antigenic sites (17) and has been associated with adaptation to growth in eggs (18). Approximately 50 cases of human infection with SIV have been reported (19). The spectrum of pathogenicity of SIV infection ranges from asymptomatic infection (6) to death; 7 of these 50 patients died (5,20-24). Laboratory-confirmed swine influenza in humans may be "the tip of the iceberg." Diagnosis of the current case was serendipitous because typing was performed only because the case occurred outside of influenza season. The mode of spread of SIV in humans is not established. Because of his young age, the index patient was not likely to have had unrecognized direct contact with swine. That aerosolization of influenza virus occurs is increasingly recognized (25), but the child was reportedly never in the barns that housed the swine. However, other members of the farm reported that infants were sometimes taken for walks through the barn. The child also may have acquired the virus from person-to-person spread or from fomites. All 13 patients in the Fort Dix outbreak and 15 of 37 previously reported civilian case-patients also had no swine contact (19,20). The Fort Dix outbreak of SIV in humans lasted only 21 days and never spread outside the military base. The calculated basic reproductive rate (R0) was only 1.1 to 1.2. This suggests that person-to-person spread of the implicated H1N1 strain was not efficient enough to produce a major epidemic (26). However, future strains of SIV could have a higher R0, and documentation of a case of swine influenza (H3N2) in a child with unrecognized transmission within the community adds another possible mechanism by which major epidemics of influenza could arise. Swine influenza infection in humans most commonly results in either no symptoms or a self-limited illness (6). However, routine surveillance for cases among swine workers may enable early detection of a strain with the potential for person-to-person transmission, prompting institution of infection control measures and vaccine development.
Determining the origin of different variants associated with familial Mediterranean fever by machine learning

A growing number of familial Mediterranean fever (FMF) patients in Israel do not have a single country of origin for all four grandparents. We aimed to predict the Mediterranean fever gene (MEFV) variant most likely to be found for an individual FMF patient, by a machine learning approach. This study was conducted at the Sheba Medical Center, a referral center for FMF in Israel. All Jewish referrals included in this study carried an FMF-associated variant in MEFV as shown by genetic testing performed between 2001 and 2017. We introduced the term 'origin score' to capture the dose and different combinations of the grandparents' origin. A machine learning approach was used to analyze the data. In a total of 1781 referrals included in this study, the p.Met694Val variant was the most common, and the variants p.Glu148Gln and p.Val726Ala second and third most common, respectively. Of 26 countries of origin analyzed, those that increased the likelihood of a referral carrying specific variants were identified in North Africa for p.Met694Val, Europe for p.Val726Ala, and west Asia for p.Glu148Gln. Fourteen of the studied countries did not show a highly probable variant. Based on our results, it is possible to describe an association between the modern-day origins of the three most common MEFV variant types and a geographical region. A strong geographic association could arise from positive selection of a specific MEFV variant conferring resistance to endemic infectious agents.

Familial Mediterranean fever (FMF) is the most common syndrome in the group of hereditary auto-inflammatory diseases 1 . It is an autosomal recessive disease that mainly associates with variants in the MEFV gene, located on chromosome 16. MEFV encodes the pyrin protein, which is important for the inflammatory response to infectious agents 2 . More than 300 variants of the MEFV gene have been identified in Infevers (https://fmf.igh.cnrs.fr/ISSAID/infevers/search.php?n=1; Infevers: an online database for autoinflammatory mutations, available at https://infevers.umai-montpellier.fr/, accessed 02/2022) [3][4][5][6]. The five most common variants (p.Met694Val, p.Val726Ala, p.Met694Ile, p.Met680Ile and p.Glu148Gln) account for the vast majority of cases 7,8 . The prevalence of FMF is highest among ethnic inhabitants of the Mediterranean basin, with a carrier rate of up to 1 in 4 in certain populations. In recent years, the disease has been reported in ethnically heterogeneous patients around the globe [9][10][11][12][13] . Israel is considered an endemic area for FMF 14,15 . Its current population has diverse origins in the Jewish diaspora, including Europe, northern Africa, and Asia. There is a correlation between the p.Met694Val variant and Jewish Moroccan ethnicity as well as with a severe disease phenotype 16,17 . However, associations between other countries of origin and FMF variants have been only partially established 18 . Such knowledge is important to understand the epidemiology of FMF. Here we use a novel approach, based on a machine learning algorithm, to predict the mutation type carried by a patient based on the countries of origin of his/her parents or grandparents.

The study was conducted at the Sheba Medical Center, Tel Hashomer, Israel, which is a referral center for genetic testing and evaluation of FMF patients.
First, we collected data on all referrals to our center for genetic analysis by their primary physician following a clinical suspicion for FMF between 2001 and 2017. All referrals negative for variants in MEFV were excluded. Since mixed origins mainly characterize Jewish patients, only Jewish referrals were included in our study group. Data regarding the gender and the specific variant of each referral were extracted from medical records. This research was approved by the Sheba Medical Center institutional ethics committee. All methods were performed in accordance with the relevant guidelines and regulations.

Genetic analysis of MEFV. For the genetic analysis, DNA was extracted from 100 µl of blood taken from the referral using a Puregene kit (Gentra Inc.) and was screened for five known variants in MEFV, LRG_190t1: c.2080A>G p.(Met694Val), c.2177T>C p.(Val726Ala), c.442G>C p.(Glu148Gln), c.2040G>A or c.2040G>C p.(Met680Ile), and c.2082G>A p.(Met694Ile), using a commercial kit (Gamidagen) or polymerase chain reaction (PCR) amplification and restriction enzyme analysis 19 .

Computational analysis. Origin score. In genetic studies, it is usually straightforward to investigate the association between country of origin and variants using mathematical tools such as Bayes' rule. However, given the ancestral diversity of the Israeli Jewish population, the subjects referred to our center often do not have a single country of origin. It was therefore necessary to construct a model using machine learning in order to perform statistical analysis. We included in the analysis countries from which at least 15 referrals originated. Based on this threshold, the data used for the analysis included 26 countries (out of 48 reported to be countries of origin for parents or grandparents by patients in the cohort). First, data on referrals and countries of origin were tabulated in a matrix with a row representing a subject and a column representing a possible country of origin. We then calculated an "Origin Score" in the following way: in each cell we stored the fraction of the subject's origin from each country. For example, if a referral has two grandparents from Algeria, one from Morocco, and one from Iraq, then the values of the corresponding cells will be 0.5, 0.25, and 0.25, respectively, and cells corresponding to all other countries will be assigned a value of 0. If information about the country of origin of one of the grandparents was missing, it was assumed that both grandparents from that side had the same country of origin. Subjects without information on at least one grandparent from each side were excluded from the analysis. Based on the method described above, we calculated, for every country, the sum of origin scores of subjects with any level of ancestry from that country. The sum of origin scores per country is presented in Fig. S1.

Machine learning approach. The logic behind our novel machine learning based approach is that how well we are able to predict whether a person has a specific variant based on his/her origin is an indication of the strength of the correlation between the origin and the variant. Clearly, the stronger the association between a given country of origin and a specific variant, the more accurate the prediction. For the machine learning approach, we used the logistic regression module of "scikit-learn" in Python 2.7 20 .
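To make the origin-score construction concrete, the following sketch builds the referral-by-country matrix described above, assigning each grandparent a weight of 0.25. The referral data, column names and helper function are hypothetical illustrations; the study itself used Python 2.7, whereas this example is written in current Python/pandas idioms.

```python
# Minimal sketch of the 'origin score' matrix described in the text:
# one row per referral, one column per country, each cell holding the
# fraction of the referral's ancestry from that country (four grandparents,
# 0.25 each). Names and data are hypothetical illustrations.
import pandas as pd

# Each referral lists the countries of origin of its (up to) four grandparents.
referrals = {
    "referral_1": ["Algeria", "Algeria", "Morocco", "Iraq"],   # 0.5 / 0.25 / 0.25
    "referral_2": ["Poland", "Poland", "Romania", "Romania"],
}

countries = sorted({c for origins in referrals.values() for c in origins})

def origin_scores(grandparent_origins, countries):
    """Return the per-country ancestry fractions for one referral."""
    weight = 1.0 / len(grandparent_origins)
    scores = {c: 0.0 for c in countries}
    for c in grandparent_origins:
        scores[c] += weight
    return scores

# Rows = referrals, columns = countries of origin.
X = pd.DataFrame(
    {rid: origin_scores(origins, countries) for rid, origins in referrals.items()}
).T

print(X)
# The per-country "sum of origin scores" (as reported in Fig. S1) would be:
print(X.sum(axis=0))
```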
Logistic regression is a linear model used to measure the relationship between a categorical dependent variable and one or more independent variables by estimating, via the logistic function, probabilities describing the possible outcomes. In our study, we used the countries of origin as the independent variables and attempted to predict the specific variant as a categorical dependent variable (i.e., whether the person has or does not have the specific variant). The performance of the prediction was evaluated by the area under the curve (AUC) measure, which shows the deviation of the performance from a random prediction; a random prediction has an AUC value of 0.5, while a perfect prediction has a value of 1. We validated our model using tenfold cross validation (dividing the data randomly each time into 90% for training and 10% for testing) and by bootstrapping (where subsets were resampled with replacement 1000 times and patients that were not included in the sample were used as the test dataset). For each prediction, we averaged the coefficient vectors and used the result to identify origins that are positively and negatively associated with certain variants. Since our analysis revealed that the p.Met694Ile and p.Met680Ile variants were very rare in our study sample, we excluded these variants from the subsequent analysis. Therefore, we included in the final analysis only the three most common variant types: p.Met694Val, p.Val726Ala and p.Glu148Gln. The data used for the prediction included only patients that carry a single type of mutation, either homozygous or heterozygous. Compound heterozygotes were not included since we did not have enough data for each compound heterozygous pair.

Selection of country groups. We combined the different countries of origin into groups that contained four countries each, covering four possible origins per patient. We formed a group from the four countries that were ranked highest in their association with each variant and another group with the four countries that were ranked as the least associated.

Ethics committee. The study has been approved by the appropriate ethics committee. Informed consent was waived by the ethics institutional review board (IRB) - Sheba Medical Center (SMC-9763-12).

Results

A total of 1842 referrals for MEFV genetic testing had at least one MEFV gene variant. After excluding 61 subjects with uncommon variants, we included 1781 subjects (52% females) in our analysis (Table 1). The number of subjects detected with each MEFV variant is presented in Fig. S2 (see also Fig. S3). Origination in Tunisia, Libya, Morocco, and Algeria was positively associated with the p.Met694Val variant; roughly 70% of referrals of Moroccan descent carried this variant. Origination in Romania, Germany, Iran and Poland reduced the chance of carrying the p.Met694Val variant (Fig. 1A, B). The performance of this prediction is shown in Fig. 2A. Moreover, by multivariate logistic regression analysis we demonstrated that Libya, Tunisia, Morocco and Algeria as countries of origin contributed the most to the probability that a referral would be homozygous for the p.Met694Val variant, with an even higher degree of certainty (AUC = 0.86; Fig. 2D). Referrals with the p.Val726Ala variant had a high probability of originating from Lebanon, Romania, Hungary, or Poland (Fig. 3A). Ancestors from Morocco, Libya, Tunisia, and Algeria reduced the likelihood of this variant (Fig. 3B), with an AUC of 0.83 (Fig. 2B).
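A minimal sketch of the prediction and validation step described in the Methods above is given here. It assumes an origin-score matrix of the kind built in the previous sketch; the simulated data, variable names and coefficients are illustrative only and do not reproduce the study's actual dataset or results.

```python
# Minimal sketch: predict carriage of one MEFV variant from origin scores
# with logistic regression and evaluate by cross-validated AUC, roughly
# following the approach described in the text. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n = 500
countries = ["Morocco", "Tunisia", "Poland", "Iran"]       # illustrative subset
X = rng.dirichlet(np.ones(len(countries)), size=n)         # stand-in origin scores
# Simulated label: North African ancestry raises the chance of the variant.
p = 1.0 / (1.0 + np.exp(-(3 * X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1)))
y = rng.binomial(1, p)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)   # tenfold CV
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("mean cross-validated AUC:", auc.mean())

# Averaged/fitted coefficients point to origins positively or negatively
# associated with the variant, as done in the study (bootstrapping omitted here).
model.fit(X, y)
for country, coef in zip(countries, model.coef_[0]):
    print(country, round(coef, 2))
```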
Iran, India, Yemen and Ukraine are the origins that contribute the most to the existence of the p.Glu148Gln variant (Fig. 4A), whereas Tunisia, Libya, Algeria and Morocco had an opposite impact (Fig. 4B), with an AUC of 0.67 (Fig. 2C). Fourteen of the studied countries did not show a strong association with a single MEFV variant.

Discussion

Many types of variants in the MEFV gene are associated with FMF. The five most commonly identified mutation types have been denoted as the founder mutations 7,8 . An association between ethnicity and the type of variant has been suggested, but a clear connection has not been established 17 . This study was conducted in order to demonstrate such a link: we sought to demonstrate that a patient's variant could be predicted based on his/her family origins. Given the ancestral diversity of the large study population and the fact that the subjects rarely had a single country of origin, we constructed a model using machine learning in order to perform statistical analysis. We introduced an origin score to quantify the ethnic complexity of each individual. Analysis of the study population, which included 1781 Israeli referrals for FMF testing mainly due to FMF suspicion, allowed us to extract reliable results despite the ethnic diversity of the study population. Our results showed that the p.Met694Val variant was the most prevalent in the study population, identified in 72% of studied subjects. Referrals whose parents or grandparents came to Israel from Tunisia, Libya, Algeria, or Morocco were most likely to carry this specific variant. The second most common mutation, p.Glu148Gln, was observed in 23% of the cohort; the most common countries of origin were Iran, Yemen, India, and Ukraine. The p.Val726Ala variant was the third most common, found in 19% of subjects. The countries of origin that contributed most to the existence of this variant are Lebanon, Romania, Hungary, and Poland. Notably, an inverse relationship exists between the p.Met694Val and p.Val726Ala variants with regard to country of origin: countries that are most correlated with p.Met694Val are those least likely to be predictive of p.Val726Ala and vice versa. All in all, the machine learning approach identified a single highly probable MEFV variant in 12 of the origins studied. The same was not the case for FMF referrals of other origins, including Iraqi Jews despite their high origin score, suggesting that at least two MEFV variants are probable in those origins. Based on the obtained results we deduce that the common MEFV variants in the Israeli Jewish population of our time have origins in distinct geographical areas: p.Met694Val in North Africa, p.Val726Ala in Europe and p.Glu148Gln in Asia. In general, the machine learning results are consistent with already established variant frequencies in North African and Ashkenazi Jews 9 , yet they add a larger geographic scope and a better, country-wise perspective. For instance, our study identified Lebanon, a longtime residence of a small and relatively isolated Sephardi community, as the fourth country predicting p.Val726Ala, a variant considered to be of Ashkenazi origin (Ashkenazi allele frequency (AF) = 0.04, gnomAD 21 , https://gnomad.broadinstitute.org) (Table 2), perhaps as a consequence of random genetic drift. It is also intriguing that Ukraine, a residence of Ashkenazi Jewry, was found to be an origin of the p.Glu148Gln variant, along with a distant cluster of Asian countries.
This finding could arise from the ethnic composition of Ukrainian immigrants to Israel, which includes mixed families of Ashkenazi and non-Jewish origins 22 . The geographical pattern of the p.Met694Val and p.Val726Ala variants observed in our study does not extend to the non-Jewish Caucasian populations of Europe (AF = 0.0009, gnomAD 21 ) nor to the non-Jewish North African population 23,24 , consistent with genetic drift randomly occurring in small and isolated populations of the Jewish diaspora 9 undergoing an evolution-based positive selection. Indeed, an endemic plague could impose rapid selection for MEFV variants introduced to Middle Eastern-derived populations early on 25 . Specifically, the p.Met694Val and p.Val726Ala variants were shown to impede the evasion of Yersinia pestis detection by the intracellular pathogen-sensing system, which is mediated by the pyrin inflammasome 26 . Leukocytes from asymptomatic carriers mounted higher IL-1β levels in response to Y. pestis in in vitro studies 25 , emphasizing the selective advantage of MEFV heterozygotes. Our study also identified an Asian origin for the p.Glu148Gln variant in the Israeli FMF referrals. An Asian origin is in agreement with the high frequency of this variant in the non-Jewish south and east Asian populations (AF = 0.298 and 0.280, respectively; gnomAD 21 ) (Table 2). This observation may be rooted in the early settlement of the Asian Jewish diaspora and its admixture with the local population 27 . Considering that FMF morbidity is scarce in south Asian countries, the clinical significance of the p.Glu148Gln variant is uncertain, and its inclusion among the variants associated with FMF needs to be carefully discussed. A recent study showed a 17-fold increased penetrance of FMF in compound heterozygotes carrying both the p.Glu148Gln and p.Met694Val variants, over heterozygotes carrying the p.Met694Val variant alone, in North African Israeli Jews 28 . This suggests that p.Glu148Gln might be considered pathogenic in certain ethnicities. The assignment of a certain variant to a particular origin might be compromised by two confounders. First, the study cohort mainly comprised symptomatic referrals, which may underrepresent low-penetrance variants such as p.Val726Ala and p.Glu148Gln. However, the studied population corresponds to the one seen by practitioners, and our results are therefore appropriate for medical needs. Second, the exclusion of compound heterozygous subjects could somewhat skew the results. However, the distribution of the excluded mutations is comparable to their distribution in the populations affected by this step. Therefore, its impact on the results is minimal.
Associations Between Carotid Plaque Characteristics and Perioperative Cerebral Blood Flow Determined by Arterial Spin Labeling Imaging in Patients With Moderate-to-Severe Stenosis Undergoing Carotid Endarterectomy

Purpose: To examine the associations between carotid plaque characteristics and perioperative cerebral blood flow (CBF) by arterial spin labeling (ASL) imaging.

Materials and Methods: Patients with unilateral moderate-to-severe carotid stenosis referred for carotid endarterectomy (CEA) were recruited and underwent carotid vessel wall and brain ASL magnetic resonance imaging. The following imaging features were measured: relative CBF (rCBF = CBFindex−hemisphere/CBFcontralateral−hemisphere) in the middle cerebral artery territory; plaque burden and the presence of lipid-rich necrotic core, intraplaque hemorrhage (IPH), calcification, ulcer and fibrous-cap rupture; and the volume and maximum plaque components' area percentages. The associations between plaque characteristics and perioperative CBF were analyzed.

Results: Sixty-one patients (mean age, 66.6 ± 7.8 years; 55 males) were included. Univariate linear regression showed that rCBFpre−CEA was associated with stenosis [β, −0.462; 95% confidence interval (CI), from −0.797 to −0.126; p = 0.008], calcification (β, 0.103; 95% CI, 0.005–0.201; p = 0.040), maximum IPH area percentage (β, −0.127; 95% CI, from −0.223 to −0.030; p = 0.012), and ulcer (β, 0.069; 95% CI, 0.025–0.113; p = 0.005); rCBFpost−CEA was associated with the IPH volume (β, −0.060; 95% CI, from −0.107 to −0.014; p = 0.013). After adjusting for the confounding factors, the associations of calcification with rCBFpre−CEA (β, 0.099; 95% CI, 0.004–0.194; p = 0.042) and IPH volume with rCBFpost−CEA (β, −0.060; 95% CI, from −0.109 to −0.011; p = 0.020) remained statistically significant, while those of rCBFpre−CEA with maximum IPH area percentage (β, −0.089; 95% CI, from −0.188 to 0.011; p = 0.080) and ulcer (β, 0.050; 95% CI, from −0.012 to 0.112; p = 0.100) did not remain statistically significant.

Conclusion: The compositional characteristics of carotid atherosclerotic plaques, particularly IPH, were associated with perioperative CBF in patients with unilateral moderate-to-severe carotid stenosis undergoing CEA. Our findings indicated that patients with larger carotid IPH could expect smaller improvement in CBF following CEA.

INTRODUCTION

Stroke, characterized by high morbidity, disability, and mortality (1,2), is one of the major causes of death worldwide. Carotid atherosclerotic stenosis accounts for 25-30% of adult strokes. Previous studies reported the reduction and redistribution of cerebral perfusion in patients with carotid stenosis and in asymptomatic patients with subclinical cerebrovascular atherosclerosis (3,4).
Carotid endarterectomy (CEA) is a key intervention for moderate-to-severe carotid stenosis, through which plaque removal, recanalization, and improved cerebral perfusion reduce the risk of a future stroke (5)(6)(7). Cerebral blood flow (CBF), a major physiological parameter of cerebral perfusion, usually changes before clinical symptoms appear in patients with cerebrovascular disease (8)(9)(10). Therefore, an early description of the changes in CBF could help prevent ischemic events and evaluate the effects of vascular interventions. Carotid atherosclerotic stenosis directly leads to reduction of ipsilateral CBF (11). Furthermore, disruption of the vulnerable carotid plaque characterized by intraplaque hemorrhage (IPH) or ulcer contributes to cerebral microcirculation obstruction by microemboli (12,13). Previous studies demonstrated that perioperative CBF in patients with moderate-to-severe carotid stenosis was associated with plaque burden and components. Jongen et al. (14) found that carotid plaque burden and degree of stenosis were negatively associated with ipsilateral CBF as measured by computed tomography (CT) perfusion imaging in patients with symptomatic carotid stenosis (stenosis ≥50%) at baseline. A clinical study of 72 patients with carotid stenosis demonstrated that carotid IPH and calcification were significantly associated with baseline CBF and cerebrovascular reactivity, as determined by Xe-CT (15). Investigators also determined the relationships between carotid plaque characteristics and post-CEA CBF. Lishmanov et al. (16) reported that the CBF and cerebral blood volume measured by single-photon emission CT increased after CEA in patients with carotid stenosis. Our previous study demonstrated that both plaque burden and lipid-rich necrotic core (LRNC) were correlated with changes in CT perfusion measurements following CEA (17), indicating that carotid plaque characteristics could predict the CBF in patients with carotid stenosis undergoing CEA. The CBF can be assessed using cerebral perfusion technologies, including CT perfusion, dynamic susceptibility contrast magnetic resonance (MR) perfusion, and arterial spin labeling (ASL)-based perfusion imaging. However, CT imaging, which involves ionizing radiation, is not an ideal methodology for repeated monitoring of cerebral perfusion changes perioperatively. Dynamic susceptibility contrast MR perfusion imaging is contraindicated for CBF assessment in patients with renal dysfunction. Arterial spin labeling (ASL) MR imaging (MRI) has good reproducibility and permits non-invasive quantification of blood flow using the protons of arterial blood water molecules as endogenous diffusible tracers (18)(19)(20)(21). Other investigators reported the characteristics of territorial CBF distribution in patients undergoing carotid stenting and CEA assessed by ASL MRI (22). However, few studies utilized ASL MRI to determine the associations between carotid plaque characteristics and perioperative CBF in patients undergoing CEA. This study aimed to evaluate the associations between the carotid plaque characteristics and CBF in patients with unilateral moderate-to-severe carotid stenosis before and after CEA, using carotid vessel wall and brain ASL MRI. This study could help identify a surrogate indicator of perioperative brain perfusion in patients undergoing CEA.
Study Sample. Symptomatic and asymptomatic patients with moderate-to-severe unilateral carotid stenosis (50-99%) determined by CT angiography and referred for CEA were prospectively recruited and underwent brain and carotid artery MRI. The exclusion criteria included: (1) ≥50% stenosis in the contralateral internal carotid artery; (2) significant stenoses (≥50%) or occlusions in the intracranial vasculature; (3) acute infarction; (4) cerebral tumors; (5) previous vascular intervention such as CEA or carotid stenting; (6) contraindications to MRI examination. Clinical information (e.g., age, sex, and history of hypertension, diabetes, hyperlipidemia, smoking, and drinking) was collected from the clinical records. This study was approved by the local ethics committee. The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All enrolled patients signed written informed consent.

Carotid Artery and Brain MRI. The carotid vessel wall MRI was performed on a 3.0 T scanner (uMR780, United Imaging Healthcare, Shanghai, China) with an 8-channel carotid coil within 1 week before the CEA. TR/TE, 3,000/71 ms; matrix size, 160 × 160; b = 0 and b = 1,000. All sequences were acquired with the same field of view of 250 × 250 mm2 and slice thickness of 5 mm. In addition, a 3D-pcASL sequence was acquired for the entire brain with the following parameters: TR/TE, 4,632/10.5 ms; slice thickness,

Surgical Procedures. All CEA procedures were conducted by the same senior neurosurgeon with over 30 years of experience, following standardized surgical procedures under general anesthesia. Cerebral oxygen saturation and microembolism were monitored throughout the perioperative period. Deep neck dissection and vessel manipulation were performed under a microscope. The plaques and thickened intima were carefully removed to achieve a smooth vessel wall.

Data Analysis. The carotid vessel wall MR images were reviewed by two trained observers with >2 years of experience in cerebrovascular imaging using custom-designed software (Vessel Explorer 2, TSimaging Healthcare, Beijing, China). When the interpretation results were inconsistent between these two observers, a senior observer with more than 10 years of experience reviewed the images. All three observers were blinded to the clinical information and the status of the cerebral collateral circulation. The image quality was assessed using a 4-point scale (1 = poor, 2 = marginal, 3 = good, and 4 = excellent). Only images with quality ≥2 were included for further analysis. The plaque morphology assessment included: mean lumen, wall, and total vessel areas; maximum wall thickness; and normalized wall index [(%) = wall area/total vessel area × 100]. These were measured by manually outlining the lumen and wall boundaries for each index artery. The presence of carotid plaque components, including calcification, LRNC, IPH, ulcer, and fibrous cap rupture, was assessed using published criteria (23). The volume and maximum area percentage of plaque components, including calcification, LRNC, and IPH, were measured for each patient. Excellent inter-observer and intra-observer agreements were found in carotid plaque morphology evaluations (17). The whole-brain 3D-pcASL MRI scans were transferred to an MR workstation (AW 4.6, GE Healthcare, Waukesha, WI, USA) for post-processing.
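The two quantitative indices used throughout the paper reduce to simple arithmetic on the measured values. The sketch below is a minimal illustration with hypothetical numbers and function names (not the authors' analysis code): it computes the normalized wall index defined above and the rCBF ratio defined in the abstract.

```python
# Minimal sketch: compute the normalized wall index (NWI) from traced areas
# and the relative CBF (rCBF) as defined in the abstract
# (rCBF = CBF_index_hemisphere / CBF_contralateral_hemisphere).
# All numbers are hypothetical examples, not study data.

def normalized_wall_index(lumen_area_mm2, total_vessel_area_mm2):
    """NWI (%) = wall area / total vessel area x 100."""
    wall_area = total_vessel_area_mm2 - lumen_area_mm2
    return wall_area / total_vessel_area_mm2 * 100.0

def relative_cbf(cbf_index_side, cbf_contralateral_side):
    """rCBF = mean CBF on the index side / mean CBF on the contralateral side."""
    mean_index = sum(cbf_index_side) / len(cbf_index_side)
    mean_contra = sum(cbf_contralateral_side) / len(cbf_contralateral_side)
    return mean_index / mean_contra

print(round(normalized_wall_index(20.5, 55.0), 1))   # 62.7 (%)
# Per-hemisphere mean CBF values (ml/100 g/min) over the slices used.
print(round(relative_cbf([38.2, 40.1, 39.5], [45.0, 46.3, 44.8]), 3))
```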
Two observers with more than 2 years of experience in processing ASL images drew the regions of interest corresponding to the cortical flow territory of the middle cerebral artery on the index and contralateral sides of each axial slice. The observers were blinded to the degree of carotid stenosis, the carotid vessel wall images, and the clinical information. Three slices, one at the level of the basal ganglia and the adjacent slices above and below, were used to evaluate changes in the CBF value in the middle cerebral artery territory. Regions of old cerebral infarction within the regions of interest were excluded from the CBF calculations. The average absolute CBF value was calculated from the CBF measurements of the three slices on each side. The relative CBF (rCBF) was calculated by dividing the absolute CBF value on the index side by the absolute CBF value on the contralateral side. The rCBF values were calculated before and after the CEA procedure, respectively.

Statistical Analysis. Continuous variables with a normal distribution are presented as means with standard deviations, and those with a non-normal distribution are described as medians with interquartile ranges. The correlation between rCBF and the clinical information was analyzed using Pearson's or Spearman's correlation coefficients. The correlations between the plaque characteristics and rCBF were also analyzed by univariate and multivariate linear regressions, before and after adjusting for confounding factors. Two-tailed p < 0.05 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 19.0 (IBM Corp., Armonk, NY, USA).

Correlations Between Clinical Information and CBF. The associations of the clinical risk factors with CBFpre−CEA and CBFpost−CEA (Figure 1) on the index side are summarized in Table 3. A significant correlation was found between diastolic blood pressure and CBFpre−CEA (r = −0.279, p = 0.029). No significant correlations were found between other clinical factors and CBF measurements (all p > 0.05).

DISCUSSION

This study investigated the associations between the carotid plaque characteristics and the perioperative CBF in patients with unilateral moderate-to-severe stenosis using multi-contrast carotid vessel wall and whole-brain 3D-pcASL MRI. We found that calcification was positively correlated with CBF before CEA, and the IPH size was negatively correlated with CBF after CEA. The data indicated that patients with calcification had higher CBF at baseline than those without, and patients with large carotid IPH achieved lower CBF following CEA than those with small or no carotid IPH. We found that the stenosis of the index carotid artery was negatively correlated with the CBF at baseline, consistent with previous studies (14,24). Many studies demonstrated that the CBF decreased with increasing degree of carotid stenosis (14,24). Tang et al. (24) reported that the carotid artery blood flow on the index side decreased from 418.4 ml/min to 301.0 ml/min as the stenosis of the index carotid artery increased from no stenosis to severe stenosis. Carotid artery stenosis will directly lead to hypoperfusion in the downstream cerebral tissues if the collateral circulation is insufficient. Furthermore, investigators found that atherosclerotic plaque vulnerability was significantly associated with the degree of carotid stenosis. Zhao et al.
(25) reported that the prevalence of fibrous cap rupture was 23.2%, 33.3%, and 53.8% in carotid arteries with 1-49%, 50-69%, and ≥70% stenosis, respectively. The European Carotid Surgery Trial, which included 3,007 patients with symptomatic carotid stenosis, reported that the prevalence of plaque surface irregularity (35-65%) and thrombus formation (20-45%) increased with the degree of carotid stenosis from 10% to 99% (4). Fibrous cap rupture or surface irregularity in carotid arteries stimulates thrombosis, which might lead to embolism and cerebral hypoperfusion. In the present study, we found a positive correlation between calcification and the CBF at baseline, consistent with previous studies (15). A clinical study of 72 patients with unilateral carotid stenosis reported that intraplaque calcification was positively correlated with cerebrovascular reactivity (r² = 0.282, p = 0.016) (15). Our findings might have arisen from various compensatory mechanisms during carotid plaque progression. Atherosclerosis is a chronic inflammatory disease of the arteries. It starts with the internalization and deposition of lipids in the intima, followed by phagocytosis of the lipids by macrophages to form foam cells. Calcium deposition occurs at advanced stages. Calcification might therefore be a marker of long-term progression of the carotid plaque, allowing blood flow compensation in the downstream tissues through either the Circle of Willis or leptomeningeal collateral channels. These compensatory mechanisms may partially account for the small impact of the reduced CBF at baseline. This study suggested that patients with a calcified carotid plaque may better compensate for the reduced CBF through collaterals than those without calcification. This study revealed that the carotid IPH size was negatively correlated with CBF after CEA. We speculate that this finding might be attributed to the rapid carotid plaque progression caused by IPH and the subsequent downstream micro-disruption. Several studies suggested that IPH was a trigger of plaque vulnerability and a powerful indicator of plaque progression (26)(27)(28)(29)(30). Histopathological studies suggested that IPH was primarily related to the extravasation of erythrocytes and leucocytes from immature micro-vessels in the adventitia (26,31). An autopsy study showed that IPH might be a potent atherogenic stimulus by contributing to macrophage infiltration, free cholesterol deposition, and enlargement of the lipid-rich necrotic core (27). Another longitudinal in vivo study suggested that carotid IPH accelerated plaque progression over an 18-month follow-up period (28). Impaired microcirculation in patients with carotid stenosis and a larger-sized vulnerable IPH plaque might account for the diminished improvement in CBF after removing the plaque (12,13). Our findings suggested that patients with large-sized carotid IPH might experience a smaller improvement in the CBF after revascularization than those with smaller carotid IPH. This study had several limitations. First, its small sample warrants further large-sample studies. Second, the collateral circulation status, particularly in the Circle of Willis, was not analyzed. Such collateral circulation could affect the perioperative CBF. Third, we only used one post-labeling delay (2,000 ms) during the ASL imaging, which may have led to an underestimation of the CBF in patients with slow blood flow due to severe carotid stenosis. Future studies using multiple delay times during ASL imaging are suggested.
Finally, this study evaluated the CBF for only 72 h after CEA. Long-term CBF improvement after CEA needs further investigation. In conclusion, compositional characteristics of carotid atherosclerotic plaques, particularly IPH, were associated with perioperative CBF in patients with unilateral moderate-to-severe carotid stenosis undergoing CEA. Clinicians should focus on plaque calcification and IPH, in addition to the level of carotid stenosis. This study suggested that plaques with calcification were more stable, and patients with atherosclerotic carotid stenosis and large IPH might achieve smaller CBF improvement after CEA than those without. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Medical Ethics Committee of Peking University Third Hospital. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS Material preparation and data collection and analysis were performed by YL, RH, HX, TW, and GZ. The first manuscript draft was written by YL. Manuscript revision was made by HY, XZ, RH, and YL. All authors contributed to the study conception and design. All authors commented on previous versions of the manuscript and approved the final one.
Ravens parallel great apes in physical and social cognitive skills

Human children show unique cognitive skills for dealing with the social world, but their cognitive performance is paralleled by great apes in many tasks dealing with the physical world. Recent studies suggested that members of a songbird family—corvids—also evolved complex cognitive skills, but a detailed understanding of the full scope of their cognition was, until now, not available. Furthermore, relatively little is known about their cognitive development. Here, we conducted the first systematic, quantitative large-scale assessment of physical and social cognitive performance of common ravens with a special focus on development. To do so, we fine-tuned one of the most comprehensive experimental test batteries, the Primate Cognition Test Battery (PCTB), to raven features, enabling also a direct, quantitative comparison with the cognitive performance of two great ape species. Full-blown cognitive skills were already present at the age of four months, with subadult ravens' cognitive performance appearing very similar to that of adult apes in tasks of physical (quantities, and causality) and social cognition (social learning, communication, and theory of mind). These unprecedented findings strengthen recent assessments of ravens' general intelligence, and add to the growing evidence that the lack of a specific cortical architecture does not hinder advanced cognitive skills. Difficulties in certain cognitive scales further emphasize the quest to develop comparative test batteries that tap into true species-specific rather than human-specific cognitive skills, and suggest that socialization of test individuals may play a crucial role. We conclude that more attention should be paid to the impact of personality on cognitive output, and to a currently neglected topic in Animal Cognition—the linkage between ontogeny and cognitive performance.

How intelligence evolved remains one of science's greatest mysteries. However, the past few years have seen two major and interrelated streams of research, one focusing on the evolution of the brain, and the other pinpointing similarities and differences in behaviour (e.g. [1][2][3][4][5]). The majority of research interest has been devoted to the primate order [6][7][8][9], thereby incorporating information about the phylogenetic relationships between species as well as presumed selective pressures acting upon the development of cognitive skills. One of the most comprehensive experimental studies tapping into the wide spectrum of physical and social cognitive domains has been carried out by Herrmann and colleagues 10 . They designed a test battery to compare the cognitive skills of human children and two of our closest living relatives, chimpanzees (Pan troglodytes) and orangutans (Pongo pygmaeus). Two-and-a-half-year-old children and chimpanzees (mean age: 10 years) showed very similar cognitive performance for dealing with the physical world, suggesting that human children's physical cognitive skills are still equivalent to those of our last common ancestor some 6 million years ago 10 . In stark contrast, the children outperformed both great ape species in tasks dealing with the social world (see for similar results on bonobos Pan paniscus 11 ). The authors argued that these results provide no support for the general intelligence hypothesis 12 predicting that human cognition differs from that of apes only in general cognitive processes (such as memory, learning, or perceptual processing).
Rather, human infants' social cognitive skills are already on a species-specific developmental trajectory. A recent cross-species comparison of cognitive development (including ravens and two carnivore species) showed that the developmental pace of ravens was markedly accelerated compared to that observed in the other species, while the general developmental pattern was relatively similar 72 . This study, although only qualitative, marks a new trend in Cognitive Development, since comparative research has traditionally been biased towards investigations of the cognitive development of human and non-human primates only 73,74 . For instance, Wobber and colleagues 11 adapted the PCTB of Herrmann and colleagues 10 to compare the development of cognitive skills between human children, bonobos, and chimpanzees. They found significant differences in the pattern and pace of cognitive development between the human and the two great ape model groups, with an accelerated ontogeny in children compared to individuals of the great ape species. In addition, divergent patterns of cognitive development were particularly apparent in the social domain, including for instance greater inter-relationships of social cognitive skills in children relative to apes (see also 75 ). Hence, to enable a more detailed understanding of cognitive performance across development in corvids and to address these critical gaps in our knowledge, we carried out a large-scale assessment of ravens' cognitive skills across nine physical and six social cognitive tasks with a special focus on development. In addition, we revisited the claim that corvids rival non-human primates in their cognitive abilities 34,40 by carrying out the first systematic, quantitative comparison of physical and social cognitive performance between ravens and individuals of two great ape species 76 . To do so, we applied the methodology of the PCTB 10 as closely as possible for a species that uses her beak instead of extremities (see also 18 for the adaptation of the size of the material). The Corvid Cognition Test Battery (CCTB) was administered to eight hand-raised birds. The physical tasks comprised different cognitive scales involving spatial (investigating for instance spatial memory and object permanence), quantitative (testing the ability to understand relative numbers and the addition of numbers), and causal tasks (examining causal reasoning via distinct cues such as sound and shape). The social tasks involved cognitive scales of social learning (for instance using information provided by the experimenter to solve a task), communication (for example taking into consideration the attentional state of a human experimenter) and theory of mind (for instance being able to understand the intentions of the experimenter) (for more details see Table 1 and the supplementary material). Also note that we adopted the original terms by Herrmann and colleagues 10 to enable comparison between tasks and species. However, some tasks represent precursors to distinct skills only rather than full-blown cognitive abilities; for instance, gaze following does not equal theory of mind. The CCTB was carried out at four distinct, equally distributed time points after the birds had fully hatched: four months of age, eight months of age, twelve months of age, and 16 months of age. The following detailed descriptions of the tasks have been adapted from the studies of Schmitt and colleagues 18 and Herrmann and colleagues 10 .
Concerning the number of trials per task and item, we followed the methodology of Schmitt and colleagues 18 . We addressed the following three research questions: (1) Do ravens perform differently in the domains of physical and social cognition? To investigate this question, we compared the performance of the ravens in the physical cognitive tasks to their performance in the social cognitive tasks. Based on previous findings 48 , and given that ravens live in complex social systems consisting of fission-fusion dynamics and long-term monogamy 40,47 , we expected to find higher scores in the social than in the physical cognitive domain. (2) How does cognitive performance develop in ravens? To address this question, we compared the performance of all individuals across four different time points: four months of age, eight months of age, twelve months of age, and 16 months of age. Based on the existing studies of cognitive development in ravens 68,72,77 , we expected to find a relatively rapid development across cognitive scales and the four investigated time points. (3) How does the cognitive performance of ravens compare to that of great apes? To investigate this question, we quantitatively compared the cognitive performance of the ravens in the CCTB to the cognitive performance of chimpanzees and orang-utans in the PCTB 10 . Since ravens are known to exhibit a variety of socio-cognitive traits necessary to manoeuvre successfully through their complex social world 34,48 and have been suggested to be social rather than physical intellects (but see 42,48 for tool performance and physical cognitive skills), we expected to find species differences between ravens and great apes in the physical cognitive domain only.

Methods

Birds and study site (see 76 for details on methods, birds and apparatuses used). All birds had been taken from their captive parents at the age of three weeks (April/May 2014) and had been hand-raised in the corvid aviaries of the Max-Planck Institute for Ornithology in Seewiesen, Germany. During the first weeks (until the end of May 2014), the ravens were hand-reared in artificial nests (chicks originating from the same parents were kept in the same cardboard box with wooden sticks and leaves). This took place in a smaller room to mimic "natural" conditions as well as possible. Only after fledging (~45 days, end of May 2014) were the birds moved to the outdoor aviary. The group of ravens consisted of four sibling pairs, which were marked with coloured rings on their legs for identification. Immediately after fledging, all our birds were trained using positive reinforcement techniques (rewarding the animal when it performs the target behaviour, waits at its starting perch, etc.) to be able to be individually separated within the test compartments. Prior to the start of the CCTB, all birds were familiarised with the experimental equipment (e.g., wooden boxes with holes, plastic bottles, etc.) and the cameras. Test participation was always voluntary. If a bird did not engage during testing (e.g., did not make a choice), it was released to the group and tested on the subsequent day. Hence, none of our individuals had prior testing experience other than habituation to the test facilities and training to interact with the human experimenters. Since one bird stopped voluntarily participating in the second experimental set, we did not continue testing this individual for the rest of the experiment (see Table 2). Testing took place from Monday to Friday (sometimes Saturday) between 08:00 and 12:00 in the morning and between 02:00 and 04:00 in the afternoon.
The raven aviaries (see Fig. 1) were composed of one big (12 × 4.3 × 5.3 m) and three small sections (one section: 3.8 × 2.9 × 2.9 m, and two sections: 2.14 × 2.9 × 2.9 m), and all contained natural vegetation (e.g., perches) and diverse ground cover including soil and gravel. The ravens were fed twice a day, between 07:00 and 08:00 a.m. and between 04:00 and 05:00 p.m., with various types of meat, dairy products, mealworms and fruits. Water was freely available throughout the day. In the first experimental set, when the birds were not yet flying/moving around a lot, we videotaped all experiments with one video camera (Canon Legria HF S10). In the other three experimental sets, we used two cameras (Canon Legria HF S10 and Canon HF M41). We placed the cameras two meters away from the testing compartment to avoid disturbing the birds. One of the cameras was placed in Experimental Compartment A behind the experimenter, and the second camera was placed in the Feeding Kitchen to enable filming through the window (see Fig. 1).

Testing apparatus, general procedure and habituation. The testing apparatus was located in the same compartment as the experimenter, and the bird had to indicate her choice by pointing/touching through the wire mesh (see Fig. 1). To keep the birds motivated, we used highly desired food rewards, which were only available in the experimental context (pieces of peanuts, pieces of dog treats "Frolic", pork skin "Grammeln"). The testing was done by two experimenters, MJS and CRB. They had hand-raised the ravens with the help of volunteers, and were highly familiar with all birds since their arrival in Seewiesen. The birds were tested during four developmental time points, at four months (July/August 2014), eight months (November/December 2014), twelve months (March/April 2015), and 16 months (July/August 2015) of age. MJS was the main experimenter during the first two time points and experimental sets, whereas CRB was the main experimenter during the second two time points and experimental sets of testing (see Table 1 for a detailed description of the number of trials and tasks).

Table 2 The ravens. Table 2 provides information about the tested birds (name, sex, and sibling group named after their origin). 1 Individual stopped participating during the second set of experiments and was not tested further.

Figure 1 The raven aviaries. Figure 1 depicts a sketch of the raven aviaries in Seewiesen. The thick lines represent opaque site elements/fences.

During testing, the focal bird was physically and visually separated from the rest of the group (see Fig. 1; the tested bird was located in Experimental Compartment B, the rest of the group in the Compartment for Handraised Ravens). The human experimenter sat in a second compartment (see Fig. 1; Experimental Compartment A; again physically and visually isolated from the rest of the group) and interacted with the bird through the wire mesh that separated the two testing compartments. The testing apparatus used during the majority of the experiments (exceptions: Social Learning, Gaze Following and Pointing Cups) consisted of a grey polyvinylchloride board located on two stone blocks and a transparent sliding board also made of polyvinylchloride (see Fig. 2). The sliding board was lying on top of the grey board. Three cups were used to cover/present the food reward. These were placed on the sliding table.

a. Spatial memory. Three cups were placed in a row on the platform. The experimenter showed the bird two rewards, and placed them under two adjacent cups of the three cups in full view of the bird.
Then the platform was pushed towards the bird, and it was allowed to make two choices in succession by pecking against the cups. If, however, the bird chose the empty cup first, it was not allowed to make further choices. The response was counted as correct when the bird had chosen both baited cups in succession. b. Object permanence Three cups were placed in a row on the platform. An additional small opaque cup was used. The experimenter baited this small cup while the bird was watching. The small cup was then moved towards one of the larger cups, which was slightly lifted by raising the side not facing the bird. The experimenter then made a swapping movement with the small cup, as if swapping the reward under the larger cup. The experimenter also touched the other cups to avoid local enhancement. After moving the small opaque cup under the specific larger cup, the experimenter lifted the small cup to show the bird that the small cup was now empty. The platform was pushed forward to allow the bird to choose. There were three possible displacements performed: Single displacement The experimenter moved the small cup hiding the reward under one of the three cups, as described above, and swapped the reward under it. Double adjacent displacement The experimenter moved the small cup hiding the reward under two adjacent cups in succession, as described above, and left the reward under one of these cups. Double non-adjacent displacement The experimenter moved the small cup hiding the reward under the left and right cup in succession, as described above, and left the reward under one of them. A correct response was counted when the bird had chosen the baited cup. c. Rotation Three cups were placed in a row on a cardboard, which was then placed on the platform. The experimenter showed a reward to the bird, and placed it under one of the three cups while the bird was watching. Then the tray was rotated in three possible ways: www.nature.com/scientificreports/ 180° middle The reward was placed under the middle cup, and the tray was rotated 180° in clockwise or counter clockwise direction (counterbalanced). After the rotation, the reward was located at the same position as it was initially placed. 360° The reward was placed under either the left or right cup, and the tray was rotated 360° in clockwise or counter clockwise direction (counterbalanced). After the rotation, the reward was located at the same position as it was initially placed. 180° side The reward was placed under either the left or right cup (counterbalanced), and the tray was rotated 180° in clockwise or counter clockwise direction (counterbalanced). After the rotation, the reward was located on the opposite side of where it was initially placed. After the completed rotation, the bird was allowed to choose one cup. A correct response was scored when the bird chose the baited cup first. d. Transposition Three cups were placed in a row on the platform in front of the experimental compartment. The experimenter showed a reward to the bird, and afterwards placed the reward under one of the three cups while the bird was watching. Then one of three possible manipulations was performed: Single transposition The experimenter switched the position of the baited cup with one of the empty cups. The third cup was not touched. Double unbaited transposition The experimenter switched the position of the baited cup with one of the empty cups. Then the positions of the two empty cups were switched. 
Double baited transposition The experimenter switched the position of the baited cup with one of the empty cups. Then the position of the baited cup was switched again with one of the empty cups. After the transpositions were completed, the bird was allowed to choose one cup. A correct response was scored if the bird chose the baited cup first. a. Relative Numbers The experimenter placed two small rectangular cardboard pieces (10 × 10 cm) on the platform and lifted an occluder to prevent the bird from watching the baiting procedure. Then the experimenter baited the cardboard pieces with different amounts of equally sized food pieces (1/8 of a Frolic piece). The experimenter then placed the cardboard pieces in the middle on the platform, and removed the occluder so that the bird could see the amounts lying on each board. After ~ 5 s had passed and the bird had paid attention, the experimenter moved the plates simultaneously to the sides of the platform, one to the right and one to the left. The sliding table was pushed to the front, and the bird was allowed to choose and obtain all food pieces lying on the respective plate. Each bird received one trial for each of the following pairs of numbers (the order was randomized but constant among birds): 1:0, 1:2, 1:3, 1:4, 1:5, 2:3, 2:4, 2:5, 2:6, 3:4, 3:5, 3:6, 3:7, 4:6, 4:7, 4:8 (the side was counterbalanced). A correct response was scored if the bird chose the larger quantity first. b. Addition Numbers The experimenter placed two small rectangular cardboard pieces on the platform, and lifted an occluder to prevent the bird from watching the baiting procedure. Then the experimenter baited the two cardboard pieces with different amounts of reward (same as in Relative Numbers). She/he also baited a third cardboard piece, which was placed in the middle. Then the three boards were covered with cups and placed in the middle of the platform. After the occluder was removed, the experimenter lifted the cups of the two outer cardboards simultaneously. After ~ 5 s had passed, the experimenter covered the two outer plates again and uncovered the cardboard in the middle. The bird was able to view the amount lying on the middle cardboard for ~ 5 s. Then the experimenter transferred the rewards from the middle plate to one of the side cardboards. During the transfer, the bird could not see the content of the side cardboard boards because they were still covered with the cups. Then the experimenter removed the empty cardboard in the middle, and the bird was allowed to choose between the two covered cardboards on the outer sides (the order was randomized but constant among bird): 1:0 + 3:0 = 4:0; 6:1 + 0:2 = 6:3, 2:1 + 2:0 = 4:1, 4:3 + 2:0 = 6:3, 4:0 + 0:1 = 4:1, 2:1 + 0:2 = 2:3, 4:3 + 0:2 = 4:5 (the side was counterbalanced). A correct response was scored if the bird chose the larger quantity first. a. Noise The experimenter placed two cups on the platform, and lifted an occluder to prevent the bird from observing the baiting. Then the experimenter put a reward (peanut) in one of the two cups, and closed both cups with the small cardboard board already used in the Quantity task. After the occluder was removed, one of two possible manipulations were performed: Noise full The experimenter shook the baited cup three times, so that the food rattled inside, and only lifted the empty cup without shaking it. Whether the experimenter started with the baited or empty cup was randomized. 
Noise empty The experimenter shook the empty cup (which produced no sound) three times, and then lifted the baited cup without shaking it. Whether the experimenter started with the baited or empty cup was randomized. After the manipulations, the bird was allowed to choose one cup. A correct response was scored if the bird chose the baited cup first. b. Shape The experimenter placed an occluder and placed two identical items (see items below) on the platform. The experimenter showed the bird the reward (1/8 of a Frolic), and placed it underneath one of the two identical objects causing a visible inclination or bump. After this procedure, the occluder was removed, and the bird was allowed to make a choice. Board The experimenter hid the reward underneath one of two cardboard pieces (10 × 10 cm). The reward caused a visually apparent inclination as it was placed on the food (the other board remained flat on the table). Cloth The experimenter hid the reward underneath one of two pieces of white cloth (4 × 2 cm). The reward made a visible bump under the piece of cloth where it had been hidden (the other cloth remained flat on the table). A correct response was scored if the bird chose the baited board or baited cloth first. c. Tool properties The experimenter lifted an occluder and placed two different tools on the platform. One tool was functional and could be used to retrieve a reward associated with it (e.g., lying on top of it). In contrast, the second tool was non-functional, and could not be used to obtain the reward. The following manipulations were conducted: Side The experimenter put two identical pieces of white cloth (4 × 2 cm) on the platform behind an occluder, and placed a reward on top of one cloth piece. The other reward was placed directly next to the other piece of cloth (i.e., making the second tool ineffective for retrieving the food). After the occluder was removed, the bird had to choose the functional tool by either pulling the piece of cloth with the reward on top of it, or by pecking against the functional piece of cloth. Bridge The experimenter put two identical small plexiglass bridges over each of the far ends of the two identical pieces of cloth behind an occluder. One reward was then placed on top of the bridge (making the tool ineffective in retrieving the food). The other reward was placed on the cloth underneath the bridge. After removing the occluder, the bird had to choose the functional tool by either pulling the cloth with the reward placed directly on it, or by pecking against the functional piece of cloth. Ripped The experimenter put up an occluder and placed a rectangular, intact piec of cloth on one side of the table and two smaller cloth pieces on the other side. She/he arranged the small pieces of cloth in a way that there was a 1 cm gap between them. Then one reward was placed on top of the far end of the intact cloth. The other reward was placed on the out of reach piece of the two disconnected pieces (making the tool ineffective to retrieve the reward). After removing the occluder, the bird had to choose the functional tool by either pulling the cloth with the reward placed directly on it, or by pecking against the functional piece of cloth. Broken wool The experimenter put up an occluder, and placed two strings of wool on the platform. One string was cut into two pieces. Similarly to the Ripped condition (see above) both strings were arranged in a way that the gap was visible, but that both pieces showed equal length. 
A peanut was tied to the far end of the wool strings out of the bird's reach. After removing the occluder, the reward could only be retrieved by pulling the intact piece of wool. a. Comprehension The experimenter placed two cups on the testing platform behind an occluder, one on the left and the other one on the right side. The experimenter showed the bird the reward, and let the reward then disappear behind the occluder. Subsequently, the experimenter hid a reward under one of the cups, removed the occluder, and gave one of the three following social cues: Look: The experimenter sat behind the platform and alternated her/his gaze between the bird and the baited cup while calling the bird's name. After these gaze alternations, she/he continuously looked towards the cup. Point The experimenter sat behind the platform and continuously pointed to the baited cup using the extended index finger of her/his cross-lateral hand. At the beginning of the pointing, the experimenter alternated her/his gaze between the bird and the cup three times and called the bird's name. Subsequently, she/he continously looked towards the cup. Marker The experimenter held an iconic photo marker, which depicted the reward in her/his hand, and alternated the gaze three times between the photo marker and the bird while calling the bird's name. Then the experimenter placed the photo on top of the baited cup. On the other cup, the experimenter placed an empty piece of paper, which had the same size. Both pictures were placed at the same time. After providing one of these cues, the bird was allowed to choose one cup. A correct response was scored if the bird chose the baited cup first. b. Production: pointing cups Two cups served as hiding places for a food reward. These cups were placed in a distance of two meters to each other and close to the fence of the experimental compartment. The cups did, however, not touch the fence. Hence, the bird was not able to touch the cups with its beak. The second experimenter (E2) entered the testing area, placed a reward under one of the two cups while the bird was watching, and then left the testing area. Then the first experimenter (E1) entered the testing area and sat down equidistant to the two cups. She/he waited until the bird approached one cup and pointed towards it with its beak through the wire mesh. A correct response was scored, if the bird chose the correct cup first within one minute. c. Production: attentional state E2 entered the testing area and placed a reward out of reach but in front of the birds' experimental compartment on the bird's right or left side. Then E2 left the area and E1 entered the experimental compartment. She/he stood on the end of the room opposite of the reward and pretended not to see the reward on the floor. E1 stood and the four following behaviours: a. Gaze Following Baseline: As baseline condition, the experimenter sat for two minutes in front of the experimental compartment and looked at the subject. All look-ups from the bird were counted to calculate a baseline level (look-ups per min). In the experimental condition, the experimenter sat in front of the bird and handed a piece of food to the bird to attract the bird's attention. When the bird came closer and looked at the experimenter, the trial started. The gaze cue was conducted in three different ways: Head + Eyes: The experimenter called the bird's name and showed a piece of food. Then the experimenter hid the food in her/his hand, which remained in front of her/his body. 
Afterwards the experimenter looked up for ~ 10 s by lifting up the head and the eyes. Back: The experimenter sat with her/his back facing the bird. The experimenter called the bird's name and showed a piece of food to the bird. Then the food was hidden in the experimenter's hand, which remained in front of the experimenter's body. Afterwards the experimenter looked up in the air for ~ 10 s. Eyes: The experimenter called the bird's name and showed the bird a piece of food. Then the experimenter hid the food in her/his hand, which remained in front of the experimenter's body. Afterwards, the experimenter glanced up in the air for ~ 10 s without moving the head, meaning her/his face was still facing the bird as before. A correct response was scored if the bird followed the gaze of the experimenter (movement of the head to face upwards or tilting of the head resulting in one eye gazing upwards). b. Intentions E1 put an occluder on the platform and placed two cups. She/he showed the reward to the bird, and then hid it in one of the two cups. After removing the occluder, E2 manipulated the cups in one of the two following ways: Trying: E2 reached for the baited cup and tried unsuccessfully to remove the lid while looking at the cup. Reaching: A plexiglass barrier blocked E2′s access to the two cups. She/he unsuccessfully tried to gain access to the baited cup by extending the equilateral arm and simultaneously looking at the correct cup. E2 continued to give this cue until the bird made a choice. After each demonstration, E1 approached the table after ~ 3 s and pushed the platform forward so that the bird was allowed to make a choice. To count as a correct response, the bird had to choose the baited cup first. All trials were done in order, categorical by task, and using the same order as applied in the PCTB 10 . Scoring and reliability. Great apes use their hands to explore objects, while ravens manipulate objects with their beaks and feet. Thus, in contrast to the procedure of Herrmann and colleagues 10 , a choice was scored when the tested individual pointed with the beak through the wire mesh at one of the locations of the objects (cups or other material), or pecked against the cup/material. When the tested bird pointed at the correct location, it was given the opportunity to retrieve a small food reward. When it made incorrect responses (except otherwise stated), the experimenter showed the location of the hidden food after each trial, took the food away and did not give any reward to the bird. Scoring took place by both experimenters during testing (in all tasks except gaze following). In the gaze following task, a second observer coded the videotapes to assess inter-observer reliability, resulting in an 'excellent' level of agreement (Cohen's K = 0.93). Statistical analyses. To investigate how the proportions of correct responses of ravens varied with age and cognitive scale, we used a Generalized Linear Mixed Model (GLMM 78,79 ) with a logit link function. The response in this model consisted of the proportion of correct trials. In R such an analysis of proportions of binary outcomes is possible with the response being a two columns matrix consisting of the number of successes and failures per trial respectively 80 . As predictors with fixed effects, we included age and scale as our key test predictors, and sex and experimenter (two levels) as control fixed effects. 
Because we predicted a scale dependent change of the performance throughout ontogeny, we incorporated the two-way interaction between age and scale as another test predictor with fixed effect. As random effects, we included the identity of the bird and the sibling group, as well as the item and the task and also the trial ID into the model. To control for varying chance probabilities across the cognitive tests, we included chance probability (log-transformed) of the different items as an offset term into the model 79 . To keep type I error rate at the nominal level of 0.05 81,82 , we included all theoretically identifiable random slopes components (age, scale, experimenter, and their interaction within bird identity and sibling group; sex within sibling group; age, sex, and experimenter within item and scale; we manually dummy coded and then centred factors before entering them into the random slopes part of the model). Initially, we also incorporated all correlations between random intercepts and slopes. However, most of them appeared to be unidentifiable, as indicated by absolute correlation parameters being essentially one 83 . Hence, we removed them from the model. Since chance probabilities for the items in the tasks social learning, attentional state and gaze following cannot be determined (see Table 1), we excluded these from the model. To assess the overall effect of our key test predictors, we compared the fit of the full model (with interaction, fixed factors and random effects) with that of a null model 82 comprising only the control fixed effects predictors, the random effects, and the offset term using a likelihood ratio test 84 www.nature.com/scientificreports/ To assess model stability, we compared the estimates obtained from the model based on all data with those obtained from models with the levels of the random effects excluded one at a time. The results showed that the model was relatively unstable with regard to the effect of the two-way interaction. Overdispersion appeared to be no issue (dispersion parameter: 1.00). To rule out collinearity, we assessed Variance Inflation factors (VIF 85 ) for a standard linear model excluding the interaction, the random effects, and the offset term. With maximum VIF of 3.86 for age and 3.84 (squared Generalized VIF taken to the power of 1/2 × the respective degrees of freedom 86 ) for experimenter collinearity was not severe. We fitted the model in R (version 3.4.0 87 ) using the function glmer of the R package lme4 (version 1.1-13 88 ). Confidence intervals were obtained using the function bootMer of the package lme4, using 1000 parametric bootstraps and bootstrapping over the random effects, too (argument 'use.u' set to FALSE). We derived tests of the individual fixed effects using likelihood ratio tests comparing the fit of the full model with that of models lacking the terms to be tested one at a time ( 81 ; R function drop1 with argument 'test' set to "Chisq"). Prior to fitting the model, we z-transformed age to a mean of zero and a standard deviation of one. The sample size for this model was 754 tests of eight ravens in four sibling groups, tested in twelve tasks and with 26 items. To compare performance levels among species, we also used a GLMM with logistic error structure and logit link function. The response was again a matrix with the numbers of correct and incorrect responses. Into the model, we included as key test predictors with fixed effects species and its interaction with scale. 
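Before turning to the remaining details of the species-comparison model, the following is a minimal sketch, assuming a data frame d with one row per test, of how a binomial GLMM of this kind could be specified in R with lme4. The column names (n_correct, n_incorrect, cog_scale, chance_p, bird_id, etc.) are illustrative assumptions, the random-slopes structure is abbreviated, and the authors additionally dummy coded and centred factors before entering them into the random-slopes part; this is not the authors' actual script.

```r
library(lme4)

# d: one row per test; n_correct/n_incorrect form the two-column binomial
# response, chance_p is the item-specific chance probability (offset term).
full <- glmer(
  cbind(n_correct, n_incorrect) ~ age * cog_scale + sex + experimenter +
    offset(log(chance_p)) +
    (1 + age * cog_scale + experimenter || bird_id) +
    (1 + age * cog_scale + sex + experimenter || sibling_group) +
    (1 + age + sex + experimenter || item) +
    (1 | task) + (1 | trial_id),
  data = d, family = binomial)

# Null model: identical, but without the test predictors (age, cog_scale
# and their interaction) in the fixed-effects part.
null <- glmer(
  cbind(n_correct, n_incorrect) ~ sex + experimenter +
    offset(log(chance_p)) +
    (1 + age * cog_scale + experimenter || bird_id) +
    (1 + age * cog_scale + sex + experimenter || sibling_group) +
    (1 + age + sex + experimenter || item) +
    (1 | task) + (1 | trial_id),
  data = d, family = binomial)

anova(null, full)                                       # full-null likelihood ratio test
drop1(full, test = "Chisq")                             # tests of individual fixed effects
ci <- bootMer(full, fixef, nsim = 1000, use.u = FALSE)  # parametric bootstrap for CIs
```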
To control for sex effects, we included sex as an additional fixed effect (and also the main effect of scale). As random intercepts effects, we included item, task, individual, task nested in individual, and trial ID. As random slopes, we included species and sex within item and task and scale within individual (all manually dummy coded and then centred). As for the first model, we included chance probability (log-transformed; see Table 1) of the different items as an offset term into the model. The null model lacked species and its interaction with scale but was otherwise identical to the full model. The model was not overdispersed (dispersion parameter: 0.881), and collinearity was no issue either (maximum squared Generalized VIF: 1.02). We determined model stability and confidence intervals as for the first model. The sample size was a total of 4342 trials, conducted with eight individuals using 26 items and twelve tasks (with 1752 task nested in individual). Finally, we fitted an additional model for species comparison, using only those items for which the probability of a correct response was unknown (see Table 1). This model was identical to the other species comparison model, with the exceptions that it did not include an offset term, lacked the random effects of task and task nested in individual, and that the fixed effect of scale comprised only the levels Causality, Communication, Quantities, Space, and Theory of Mind. The model was somewhat underdispersed (dispersion parameter: 0.524). The sample size for this model was a total of 1611 trials conducted with eight individuals using eight items. Ethical note. All applicable national, and institutional guidelines for the care and use of animals were followed. In accordance with the German Animal Welfare Act of 25th May 1998, Section V, Article 7, the study was classified as non-animal experiment and did not require any approval from a relevant body. Results To test whether ravens' cognitive performance differed in relation to social or physical cognitive tasks (question 1), we fitted a model with an interaction between scale and age. Overall, the full-null model comparison was significant (likelihood ratio test: chi-square = 17.265, df = 9, P = 0.045), but the interaction between age and scale did not reveal significance (chi-square = 2.417, df = 4, P = 0.660; see Table S1 and S2 in the supplementary material). After removal of the non-significant interaction, we found that the performance of the birds was on average significantly higher in quantitative skills as compared to all others. In addition, spatial skills were significantly lower as compared to all others (see Fig. 3; for further details see Table S3 and S4 in the supplementary material). The same model was used to investigate the ontogeny of cognitive skills in ravens (question 2). Their performance did not vary strongly over the course of the study period (− 0.063 ± 0.062, chi-square = 1.005, df = 1, P = 0.316, see Fig. 4). With regard to the random effects, we found that these were mostly estimated to contribute very little to the probability of a correct choice. The clearly strongest random effects were the random slopes of experimenter within item, task, individual, and sibling group. Furthermore those of the interaction between age and scale space within individual and of scale communication within individual (see Table S2). The latter suggest that individuals varied in parts considerably with regard to how their scale specific performance varied with age. 
To examine whether ravens rival great apes in cognitive skills (question 3), we fitted a model with an interaction between species and scale. Into this model, we only included those tasks for which the probability of a correct response was known (all physical cognitive tasks [scales: Causality, Spatial, Quantity] and the social cognitive tasks comprehension, pointing cups (scale: Communication), and intentions (scale Theory of Mind; see Table 1 and Table 3). The model revealed clear species differences (full-null model comparison: chi-square = 32.123, df = 10, P < 0.001). Furthermore, the full model showed a significant interaction between species and scale (chisquare = 15.008, df = 8, P = 0.059). Ravens showed a lower performance than the two great ape species in spatial skills. The performance of ravens and great apes in quantitative and theory of mind skills was similar. Concerning causal and communicative skills, it was slightly below that of great apes (see Table 3, Fig. 5, and Table S5 and S7 in the SA for details; see also S8 for raw data comparison). With regard to the random effects in the species comparison model, we found that some of the random slopes of scale within individual and of species within task were estimated to contribute considerably to the response (see Table S5). This suggests that the effects of scale were in part differing considerably between individuals and that species differences were in part varying considerably between tasks. Furthermore, we also found the www.nature.com/scientificreports/ estimated random intercept effects of individual and item to be quite large. This implies that the probability of solving a given problem varied quite considerably between individuals and items. The model using only those tasks for which chance probability was unknown did not reveal a significant species difference (chi-square = 7.914, df = 6, P = 0.244). This means that the performance of our ravens and the great ape individuals did not differ considerably in social learning skills, communicative skills (Attentional State task), and theory of mind skills (Gaze following task; see Table 4 and Table S6 in the SA). Some of the random effects were estimated to contribute considerably, implying that the probability to solve a given problem varied in part strongly among individuals and items (see Table S6). Discussion Here, we provide the first quantitative, large-scale investigation of physical and social cognitive skills in a largebrained songbird species-ravens. We particularly examined the effect of development on cognitive performance, and revisited the claim that corvids rival non-human primates in their cognitive abilities 34,40 . To achieve these goals, we fine-tuned one of the most elaborate large-scale cognitive test batteries-the PCTB 10 -to raven features. The results demonstrated that our ravens showed comparable cognitive performance in the domains of social and physical cognition. The performance was highest in tests of quantitative and lowest in tasks of spatial skills. Full-blown cognitive skills were already present at the age of four months, and did not significantly change within the investigated time window. The quantitative cross-species comparison showed that, with the exception of spatial skills, the cognitive performance of our birds was on par with those of orang-utans and chimpanzees. In the following, we will discuss these findings in detail. Cognitive performance in physical and social cognitive scales. 
Overall, we found that our ravens' physical cognitive performance was very similar to their social cognitive performance, with highest performance scores in quantitative skills and lowest performance scores in spatial skills. These results are not in line with our prediction suggesting that ravens perform differently in the domains of physical and social cognition 48 . There are several possible explanations. First, differences in physical and social cognitive performance may have simply been obscured by the use of a cognitive test battery designed to tackle potential drivers of human cognitive evolution (see for similar accounts 18,89 ). For instance, task design in the PCTB is anchored in the challenges faced by humans and great apes in their daily lives: to find and locate food, use tools and cope with conspecifics. In contrast, although ravens also have to deal with the challenges of discovering and locating food and manoeuvring in a complex social world, they extensively scatter-hoard carcass meat and are non-habitual tool-users 47,90 . The test battery may therefore have not been suitable to pinpoint differences in ravens' physical and social cognitive skills. However, if this explanation is true, we would have expected to find no differences between scales which does not accord with our observations (but see for a recent study on parrots 56 ). Second, differences in physical and social cognitive performance may only develop later than 16 months of age, and were thus not detected across the four investigated time points. If this explanation were true, we would have expected to find no differences between any tested physical and social cognitive scale across the four www.nature.com/scientificreports/ different time points, but this was not the case (see Table S4). In addition, recent studies on the development of gaze following skills 77 and sensorimotor abilities of ravens 72 showed that the general developmental pace is very fast compared to that of other bird and mammal species. Third, the assumption that ravens have specialized in the social rather than the physical domain 48 is simply due to shortage of data. Indeed, due to ravens living in complex societies characterized by fission-fusion dynamics researchers have been fascinated with their social cognitive abilities (see for recent reviews 40,49 ). In addition, studies examining single cognitive aspects have provided many crucial aspects to the remarkable tool-kit of ravens' physical and social cognitive skills (e.g. 42,46,91,92 ). Furthermore, ravens are renowned for caching and hoarding food 40 , combining both sophisticated social (e.g., being highly sensitive to the presence of predators and/or conspecifics that may pilfer caches 40,47 ), and physical cognitive skills (such as remembering where and how much food was cached 40,47 ). Hence, our results reveal that ravens are both social and physical intellects, and strengthen recent suggestions that ravens cognitive skills are an expression of general rather than domain specific intelligence 36 . In addition, a recent reanalysis of the original PCTB dataset of chimpanzees and children 75 using a confirmatory factor analysis (CFA) did not support the original division of the test battery into a social and a physical cognitive domain. Instead, it identified a spatial cognition factor (see also 93 ), suggesting to move beyond the idea that social cognition might be dissociable from physical cognition and evolved separately. 
The study, thereby, also adds important fuel to the recent debate on cognitive test batteries in animal cognition research (e.g. 18,56,89 ). For instance, some scholars stress to pay more attention to overlooked task demands that may affect performance www.nature.com/scientificreports/ (e.g., tracking the movement of human experimenters 94 ), while others suggest to improve test batteries on multiple fronts such as the design of the tasks, the domains targeted and the species tested 95 . Furthermore, scholars emphasized the importance of addressing the same conceptual question by using tasks that a given species can solve 50 . In addition, Völter and colleagues 96 proposed a psychometric approach involving a threestep program consisting of (1) tasks that reveal signature limits in performance (i.e. the way individuals make mistakes), (2) assessments of the reliability of individual differences in task performance, and (3) multi-trait multi-method test batteries. The development of cognitive skills. The results showed that our ravens' cognitive performance did not change across the four investigated time points of four, eight, twelve and 16 months respectively. These findings support the prediction that ravens undergo a relatively rapid cognitive development. They further 67 showed that magpies master Piagetian Stages 4 and 5 before nutritional independence. Hoffmann and colleagues 99 investigated whether object permanence abilities are a function of the duration of development across four corvid species. Taking the hatching-to-fledging time as an indicator for development, they showed that Eurasian jays needed by far the shortest time for passing Stage 5 (6 weeks of age) and Stage 6 (7 weeks of age), with carrion crows (Stage 5: 11 weeks of age; Stage 6: 13 weeks of age) and ravens (Stage 5: 11 weeks of age; Stage 6: 14 weeks of age) following several weeks later. These results are in contrast to findings on individuals of two psittacine species (Cyanoramphus auriceps, Psittacus erithacus), which show considerably slower developmental paces and achieve Piagetian Stage 5 only after independence (between 19 weeks of age, respectively 18 weeks of age) 67 . The differences in developmental speed and the linkage to general developmental patterns may reflect a general difference in maturing executive functions and hence cognitive trajectories of corvids and parrots 99 . However, it may also be possible that rapid cognitive development has been selected for in food-storing species, which use memory to retrieve stored food and have a larger hippocampus relative to the rest of the telencephalon than do species that store little or no food 14,59 . Since ravens' survival and reproductive output relies heavily on successful cooperation and alliances 40,47 , the rapid pace of ravens' cognitive toolkit in the physical and social domain may thus also represent a selective response to manoeuvring in a world characterized by the complex challenges of an ever-changing ecological environment and governed by highly cooperative motives 46,47 . Comparison of cognitive performance of ravens and great apes. With the exception of spatial skills, the quantitative comparison of performance scores of our ravens and the great ape individuals showed considerable similarities across the two domains of physical and social cognition. 
These results are also in line with a recent study using the PCTB to test cognitive performance of two Old World monkey species with chimpanzees showing higher performance scores than macaques in tasks of spatial understanding and tool-use only 18 . Since ravens perform impressive flight acrobatics, rely heavily on caching and pilfering of food-stores 40,47 , and have been shown to master stage 6 of object permanence 68 , the relatively low performance scores in the Space scale are surprising. Similarly, a recent study using the PCTB to investigate and compare cognitive skills of four parrot species (Ara glaucogularis, Ara ambiguus, Primolius couloni, Psittacus erithacus) showed that the parrots' performance was also relatively poor in the scale Space (but also across all other scales tested). Individuals were significantly above chance only in the object permanence (Ara glaucogularis, Primolius couloni, Psittacus erithacus), and the rotation task (Ara glaucogularis 56 . Hence, our findings may echo Köhler who noted that "the success of the intelligence tests in general will be more likely endangered by the person making the experiment than by the animal" (p 265 100 ). Since, ravens' and other corvids' social life is highly competitive 101 , all aspects of their cognitive abilities have likely been shaped by the need to out-compete conspecifics in general. It thus may be possible that our ravens' performance in the scale Space-but also all other physical cognitive scales-was overshadowed by a social component with the ravens perceiving the experimenter as a competitor for the food reward. These findings may add a new aspect to proposals suggesting to integrate a competitive component into experimental designs 71,102 . Table 4 Species comparison for behaviours with unknown chance probability. Table 4 depicts species comparison for behaviours with unknown chance probability. (1) Dummy coded, the reference category was Raven. (2) Dummy coded, the reference category was scale Communication only including the task Attentional State. (3) Dummy coded with female being the reference category. (4) Only including the task Gaze Following. www.nature.com/scientificreports/ In contrast to our ravens' performance, however, the parrots tested by Krasheninnikova and colleagues 56 performed at chance level across all three physical and all three social cognitive scales. These results are in stark contrast to previous findings on parrots' remarkable cognitive capacities (see for reviews 49,103 ). They also emphasize Tinbergen's notion that the same test for a different species may therefore not be the same test 104 . Furthermore, differences in test performance between individuals of the parrot and our study may also be due to differences in socialization such as hand-raising, habituation and training procedures, and social bond strength between the birds and the experimenters (see also 77,105 ). For instance, the birds in the present study were tested by two highly familiar people who had also hand-raised them. In contrast, tests in the study of Krasheninnikova and colleagues 56 had been conducted by ten familiar experimenters, which had not hand-raised them, and four unfamiliar assistants. Hence, future studies should investigate the impact of these factors on cognitive performance in more detail to minimize possible counterproductive effects. 
In addition, analyses of why species fail in certain tests in combination with informed accounts of their ecological and social validity will aid in getting a better understanding of whether distinct tasks are too easy or too difficult for a given species to be solved 18,89,102 . Furthermore, it is certainly an issue that the test battery was constructed and administered by humans 10 , influencing cognitive performance of our ravens overall. For instance, Schloegl and colleagues 77 investigated the ontogeny of gaze following in ravens by using observations of spontaneously occurring gaze following behaviour between conspecifics and controlled experiments involving human experimenters. They found that visual co-orientation with conspecifics emerged around eight weeks of age, while gaze following behaviour to human-given cues could only be observed seven weeks later. Schloegl and colleagues 82 suggested that human models may not be capable of providing the same stimulus quality as a conspecific due to emphasizing different aspects for eliciting gaze following behaviour. In contrast, Heinrich 47 suggested that there is something unique about ravens that permits an uncanny closeness to develop with humans, thereby allowing insights in skills that could otherwise never be discovered. Taken together, the present experiments provide evidence that our ravens' experimental performance was on par with those of adult great apes in the similar tasks. They thus strengthen the idea that ravens evolved a general and flexible neural system for higher cognition 36,106 rather than being highly specialized in a few domains only 107 . Yet, we do not claim that the cognitive abilities of ravens and great apes are generally similar since similarity at the behavioural level does not need to reflect the same underlying cognitive mechanisms 50 . This may be particular true for complex cognitive abilities such as tool use, cooperation, or referential signalling that involve different cognitive building blocks 36 . For example, referential signalling may involve aspects of learning, memory, empathy, and theory of mind, but the degree to which each of the abilities are involved and has advanced may differ between species and taxonomic groups 46,108,109 . In addition, it may also be the case that the cognitive competencies in the items tested in the PCTB simply did not differ substantially 18 . Furthermore, proponents of situated cognition argue that cognition reaches beyond the brain and tackle the relation between cognitive processes, on the one hand, and their neuronal, bodily, and worldly basis, on the other (for a review see 110 ). This means that choices made via non-homologous body parts-beaks (ravens), hands (great apes), and eyes (ravens) combining panoramic sight with excellent stereoscopic vision 111 -not only involve different effectors but also different processors possibly influencing cognitive processing and output. In addition, we do not claim that the cognitive performance of our eight ravens can be generalized to the species as a whole and corvids in general. For instance, some random effects seem to have influenced task performance suggesting to pay special attention in future studies to personality, task-performance across age and thus ontogeny of test-subjects (see e.g. 112 ). Hence, the present study may pave the way to future collaborative studies and data sharing across research labs encouraging a ManyBirds project (see for related efforts 113,114 ). 
It may thus aid in 1) tackling one of the biggest obstacles in Animal Cognition research, to obtain sufficient sample sizes, and 2) improving and adapting distinct tasks of test-batteries to better implement and mimic the ecology of the respective model species (see also 115,116 ). Therefore, future studies should expand the range of investigated skills in a given test-battery beyond social interactions with humans and foraging contexts, and situate the findings within a comparative evolutionary framework (see also 95,96,116 ). Furthermore, we hope to inspire more research into the impact of ontogeny on cognitive performance, which, although constituting one of Tinbergen's four why's, is especially lagging behind in studies of Animal Cognition 117,118 . Conclusion Here, we systematically tested the physical and social cognitive skills of eight hand-raised ravens, members of the corvid family, with a special focus on development. The results enabled the first direct, quantitative comparison with the cognitive performance of individuals of two great ape species, chimpanzee and orangutans, tested across the same domains and tasks. Our results suggest that ravens are not only social intellects but have also developed sophisticated cognitive skills for dealing with the physical world. Furthermore, their cognitive development was very rapid and their cognitive performance was on par with adult great apes' cognitive performance across the same cognitive scales. Our findings thus put recent assessments of ravens' and great apes' conspicuous similarities in single cognitive paradigms on solid footing. In addition, they show that the impact of ecological challenges of species' cognitive development has, at least in the field of cognition, been severely underestimated and that socialization may influence test performance. Hence, studying cognition requires also an understanding of the dynamic of the different influences that, during ontogeny, contributes to adult cognition 118 .
Comparison of Histomorphometric Study of Chromaffin Cells in Adult Males Squirrel (Sciurus anomalus) and Hamster (Mesocricetus auratus)

*Correspondence: Histology75@yahoo.com

ABSTRACT

The adrenal glands are endocrine glands that produce a diversity of hormones, comprising adrenaline, aldosterone and cortisol. The present study aimed at investigating the histomorphometric features of chromaffin cells. There were two types of chromaffin cells. In the squirrel, the first type was columnar in shape and brownish in color, with a spherical nucleus located at the base of the cell, and represented the epinephrine secreting cells; the second type was polygonal in shape and light brownish in color, with a spherical nucleus located in the center of the cell, and represented the norepinephrine secreting cells. The adrenal medulla of the hamster consists almost entirely of columnar or polyhedral chromaffin cells forming clusters and anastomosing cords separated by sinusoids, and gave a stronger reaction with methylene blue-eosin stain than that seen in the squirrel. The statistical analysis showed that the mean diameters of epinephrine cells and norepinephrine cells in the right adrenal gland of the squirrel were significantly smaller than those of the hamster (P<0.05), whereas in the left adrenal gland the means of the squirrel were significantly greater than those of the hamster (P<0.05). In conclusion, the present findings showed that the reaction of the chromaffin cells of the hamster with methylene blue-eosin stain was stronger than with hematoxylin-eosin stain, while the opposite was true in the case of the squirrel.

INTRODUCTION

The African giant rat's adrenal medulla was found to consist of clusters of granular, mildly basophilic cells with multiple capillaries in a fine supporting stroma. The adrenal medulla made up around a quarter of the gland. The retained catecholamine granules of the adrenal medullary cells (chromaffin cells) were oxidized to a brown color when the gland had been fixed in potassium dichromate fixative (1). The medulla region was composed of ovoid groups of cells (chromaffin cells) arranged in irregular cords separated by blood sinusoids and surrounding a central vein, and there were two types of cells: the first was columnar in shape and brownish in color, representing the epinephrine secreting cells, and the second was polygonal in shape and light brownish in color with a spherical nucleus, representing the norepinephrine secreting cells when fixed in chromate salts (2). The adrenal medulla parenchyma of the porcupine was shown to consist of cells poorly arranged into clusters and strings. The chromaffin cells were columnar in form, stained darkly and had a very basophilic nucleolus. The cytoplasm stained basophilically and was distinctly demarcated from the cortex (3). The adrenal medulla constituted approximately one quarter (25.7%) of the gland area in the crested porcupine (Hystrix cristata). The chromaffin cells were found in irregular clusters and at the cortico-medullary boundary. The cells that stored epinephrine (E-cells) were found to be more abundant and smaller than those that stored norepinephrine (NE-cells). Most granules were moderately electron-dense E granules, but some were extremely electron-dense NE granules. Ganglion cells were rarely detected. Instead of a large central venule, multiple central sinusoidal vessels were found. Both cell types had a single, large spherical nucleus. Nucleoli were well defined, and up to three nuclei per cell were observed on occasion. In terms of granule size and form, E-cells are similar to NE-cells (4). The adrenal medulla was observed to consist of chromaffin cells that formed clusters and anastomosing cords. In addition, the adrenal medulla comprised two types of chromaffin cells. The norepinephrine cells had a large spherical nucleus and extremely electron-dense granules. The epinephrine cells were similar to the norepinephrine cells, but their granules were less electron-dense and there was a small empty space between the granules and the boundary membrane (5).

MATERIALS AND METHODS

In this study, 20 animals were used, 10 of which were squirrels and the other 10 hamsters. They were euthanized by inhalation of chloroform. The adrenal glands were removed and placed in Orth's solution for 24 hours, and then processed for routine staining with hematoxylin and eosin as well as methylene blue (1%) and eosin for studying the histological characteristics. The slides were then examined with the light microscope and, using an ocular lens, the diameters of the cells and nuclei of the adrenaline and noradrenaline cells were measured for both animal groups. Statistical analysis was applied using two-way ANOVA and the t-test; the means were considered statistically significant at the level of P ≤ 0.05.

Reagents. Orth's stock solution consisted of potassium dichromate (25 g) and sodium sulfate (10 g) dissolved in 1000 mL of distilled water.

Orth's working solution. Orth's working solution was made up of 50 mL of the stock solution. Just before use, 5 mL of 37% formaldehyde was added, and the samples were fixed for 24 h; after fixation the samples were incubated in a water bath. The fixed samples were stored in 70% alcohol.

RESULTS AND DISCUSSION

The medulla was composed of ovoid groups of cells (chromaffin cells) arranged in irregular cords separated by blood sinusoids and surrounding a central vein.
There were two types of chromaffin cells, the first type was columnar in shape and brownish in color contained spherical nucleus located at the base of the cell, this represents the epinephrine secreting cell, the second type was polygonal in shape and light brownish in color contained spherical nucleus located in the center of the cell, this represents the norepinephrine secreting cell ( Figure 1). These cells also appear as pale brown after fixation with potassium dichromate (Figure 3). Similar results were reported in the medulla of guinea pigs and vizcacha, in which the medulla composed of ovoid group of cells (chromaffin cells) that arranged in irregular cords separated by blood sinusoid (6,16). These cells also appeared as light blue with Methylene-Eosin staining ( Figure 4). The results obtained by the methods used by the above studies mainly depended upon the experience of the histologists. A successful preparation demonstrated well the nucleus, cytoplasm, and cell-granules. The means of diameter of the epinephrine cells and their nuclei in the right adrenal gland were 18±0.97 μm and 23.7±0.44 μm, in the left adrenal gland were 10.4±0.73 μm and 12±0.52μm, there is a significant difference between the diameter means of the epinephrine cells and nuclei in the right and left adrenal gland of squirrel at p < 0.05, in which the left adrenal gland was greater than the right one ( Table 1). The means of diameter of the norepinephrine cells and the nuclei in the right adrenal gland were 13.2±0.38 μm and 14.25±0.75 μm, while in the left adrenal gland were 7.25±0.16 μm and 8.12±0.72 μm, there is a significant difference between the means of diameter of the epinephrine cells and their nuclei in the right and left adrenal glands of squirrel at P< 0.05, in which the left gland was greater than the right one (Table 1).There is a significant difference in the diameter mean of cells of the right and left adrenal glands between squirrel and hamster depending on the activity of the gland. Also, the statistical analysis showed that the means of diameter of nuclei in the left and right adrenal glands in squirrel were significantly lower (P<0.05) than those of hamster due to variations in species and nutrition. The adrenal medulla of hamster consists almost entirely of columnar or polyhedral chromaffin cells forming clusters and anastomosing cords separated by sinusoids. The outer and inner zone of the medulla can sometimes be separated. While the outer zone composed of larger and darker stained cells, the inner zone comprised of smaller and lighter stained cells, since the reticularis projections that appear within the medulla at the junction of the cortex and medulla interdigitate. There are two types of chromaffin cells, the type of secreting epinephrine has bigger and less dense granules, and the type of secreting norepinephrine has somewhat smaller dense granules. The medulla consists of mainly modified postganglionic sympathetic neurons with heavy chromium salt stains and multiple brown granules in the cytoplasm (Figure 3). The chromaffin cells had been observed in irregular clusters. Most of them had fewer electron-dense granules with an open boundary, but others had very dense electron granules. These results are similar with results reported by (7) in African giant rats, (8) in domestic animals and (9) in Guinea pigs. These cells also look pale blue after fixation with potassium dichromate and staining with methylene blueeosin ( Figure 5), and pale brown after staining with H&E ( Figure 6). 
Aqueous or alcoholic solutions of eosin and aqueous solutions of methylene-blue require to be independently and consecutively employed for the double staining of sections. The means of diameter of the epinephrine cells and their nuclei in the right adrenal gland were 19.2±1.29 μm and 10.7±0.69 μm, in the left adrenal gland were 20.9±1.62 μm and 11.5±1.01 μm, there is a significant difference between the diameters of the epinephrine cells and the nuclei in the right and left adrenal glands of hamster at P< 0.05, in which the left gland was greater than the right one (Table 1). These results coincide with (10,11,12) in rats and with (13) in squirrel Sciurusanomalus, and with domastic animals (14). However, these results disagree with (15) in Galeaspixii.The statistical analysis showed that the means of diameter of epinephrine cells in the right adrenal gland in squirrel were significantly lesser (P<0.05) than those of hamster (Table 1), but in the left adrenal gland of squirrel were significantly greater (P<0.05) than those of hamster ( Table 1). KADHIM AB AND KHALEEL IM The means of diameter of the norepinephrine cells and their nuclei in the right adrenal gland were 13.6±0.44 μm and 7.69±0.36 μm, while in the left adrenal gland the means were 13.7±0.90 μm and 8.67±0.85 μm. There is a significant difference (P<0.05) between the mean of diameter of the epinephrine cells and their nuclei in the right and left adrenal glands of hamster, in which the left gland was greater than the right one ( Table 1). The statistical analysis showed that the means of diameter of norepinephrine cells in the right adrenal gland of squirrel were significantly (P<0.05) lesser than those of hamster (Table 1), but in the left adrenal gland of squirrel the diameters were greater (P<0.05) than those of hamster (Table 1).
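As a purely illustrative sketch of the kind of comparison reported above (here, right-adrenal epinephrine cell diameters in squirrel versus hamster), a two-sample t-test could be run in R as follows; the diameter vectors are invented placeholder values, not the study's measurements, and the two-way ANOVA line is likewise only indicative.

```r
# Placeholder diameters in micrometres (invented values for illustration only).
squirrel_right <- c(17.2, 18.4, 19.1, 17.8, 18.6, 17.5, 18.9, 18.2, 17.9, 18.5)
hamster_right  <- c(19.8, 20.5, 18.9, 19.4, 20.1, 18.7, 19.9, 20.3, 19.2, 19.6)

t.test(squirrel_right, hamster_right)   # Welch two-sample t-test, alpha = 0.05

# A two-way ANOVA over species and gland side would use long-format data:
# summary(aov(diameter ~ species * side, data = measurements))
```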
Monitoring of Lactic Fermentation with a Coupling Electronic Nose and Gas Chromatography

In this work, the performance of a dehydration-dealcoholization system based on an electronic nose coupled to gas chromatography was tested. The system was used for monitoring the volatile compounds produced during a lactic fermentation with a heterofermentative bacterium (Lactobacillus fermentum Ogi E1). The monitoring was carried out with dehydration and dealcoholization, or with dehydration only, given the low ethanol concentration produced by this bacterium. In the first case, fermentation headspace analyses showed low signals from each gas sensor, and the principal component analysis (PCA) was therefore inconclusive. With the dehydration-only configuration, however, the electronic nose was able to detect some volatile compounds during the bioprocess. The PCA showed a single distribution, allowing the conclusion that principal component 1 represented the ethanol concentration. The system is appropriate for monitoring several parameters during the fermentation process, such as the ethanol, lactate and biomass concentrations.

Introduction

Electronic noses (E-noses) have been tested and applied since the 1980s as aromatic quality sensors in the agricultural, environmental, medical, biotechnological and food domains [1-3]. They are typically composed of an array of non-specific chemical gas sensors characterized by a broad and partly overlapping selectivity to volatile compounds. This concept was inspired by the human nose and clearly shows similarity with the human brain-olfactory system [4]. The E-nose is, by contrast, a fast, reliable, cost-effective, in-line, automatic and operator-friendly system of aroma analysis [3,5]. Nevertheless, the E-nose's gas sensors provide a large and complex amount of data (i.e., sensor responses), which has to be processed by pattern recognition techniques such as principal component analysis (PCA), linear discriminant analysis (LDA) or neural networks (NN) [6,7]. Recently, several studies have proposed improving discrimination between E-nose data by first analyzing the data with PCA, in order to reduce the data dimension, and then selecting some of the most relevant principal component values as input to classification techniques such as LDA or NN [5,8]. Data processing improves the selectivity of the systems, leading to an extensive range of applications.

Sample classification [6,9], detection of adulterations or defects in aroma [10,11], quality measurement [8] and process monitoring [12-14] are the main applications of E-nose technology. Recent applications of the E-nose concern the biotechnological domain. The E-nose has been implemented to study its ability for the diagnosis, detection and screening of various stages of renal disease [15] and for monitoring industrial processes related to microorganisms [14,16,17] or cell cultures [18-20]. In the latter areas, the initial studies consisted in analyzing with the E-nose the headspace generated by various microorganisms grown on Petri dishes, and in detecting and identifying the microorganisms from the E-nose responses treated by chemometrics [21,22]. For instance, Dutta et al. [21] showed that gas sensors efficiently identified six species of bacteria responsible for eye infections, and ten clinically important microorganisms were successfully tested and identified by Moens et al. (2006) [22]. Gardner et al.
[23] successfully predicted the class and growth phase of two potentially pathogenic bacteria by analyzing samples of the cultivation headspace with six metal oxide semiconductor (MOS) gas sensors. A cultivation of Saccharomyces cerevisiae on glucose was monitored on-line (ethanol concentration and course of the cultivation) by analyzing the cultivation gas effluent with the E-nose [24]. The potential of E-nose technology was also confirmed on a production-scale CHO-cell process [18] and for the detection of the metabolic burden on a recombinant E. coli strain [16] or of bacterial infections in cell cultures [19,20]. The growth of Methanobacterium formicicum was successfully monitored using a MOS and MOSFET (metal oxide semiconductor field effect transistor) E-nose in order to detect disturbances in the microbiological process [25], and the E-nose was used to identify two different oenological Saccharomyces cerevisiae strains in alcoholic fermentation [14].

On the other hand, Lactobacillus fermentum is a heterofermentative bacterium [26] that produces ethanol at low concentrations (around 4 g/l); this concentration is nevertheless sufficient to be detected by MOS sensors. This bacterium is also able to produce other volatile compounds. Jackson et al. [27] detected more than 15 volatile compounds in pork loin tissue inoculated with Lactobacillus plantarum and Lactobacillus fermentum, the principal aroma compounds being acetone, sulfur dioxide, dichloroethane, trichloromethane, benzene and toluene.

The aim of this study was to investigate on-line a lactic fermentation with an E-nose equipped with a back-flush gas chromatograph removing alcohol and/or water from the samples before analysis.

Microorganism and Growth Conditions

Lactobacillus fermentum Ogi E1 (I-2028, CNCM, Institut Pasteur), isolated from ogi [26], was used in this study. A simplified yeast extract medium (SYAM), set up to study the physiology of L. fermentum Ogi E1 [28], was used as fermentation medium. For routine cultivation, a modified MRS medium [29] was used with potato soluble starch (Prolabo-Merck Eurolab, Lyon, France) as substrate, following the composition given by [30].

Lactic Fermentation

Fermentations at pH 5.0 with potato soluble starch as substrate were performed at 30˚C in 2 l bioreactors (Inceltech, Toulouse, France) with a 1.5 l working volume. The pH was controlled with either NaOH (5 N) or HCl (5 N). The growth medium was gently stirred (200 rpm) to maintain homogeneity. The bioreactors were inoculated (10% v/v) with 12-hour pre-cultures. To establish anaerobic conditions, as recommended by Calderon et al. [28], the fermentation medium was flushed with nitrogen while the medium was cooling just after autoclaving (120˚C, 15 min); a slight overpressure of nitrogen was then maintained within the reactor during fermentation. Fermentations were run twice.

Electronic Nose

A commercially available E-nose (FOX 4000, Alpha MOS, France) with eighteen different metal oxide semiconductor (MOS) gas sensors was used. The sensors were arranged in three temperature-controlled chambers; each chamber included six sensors, a thermometer and a humidity sensor. The sensor arrangement in each chamber is given in Table 1. A generator of purified air (Whatman, UK) with a CaCl2 post-dehydration column was used to provide clean, dry air to the electronic nose system.
The bioreactor headspace gas was continuously pumped with a membrane compressor (Fisher Bioblock Scientific, France) placed before the sampling loop. Because of the small bioreactor volume, the gas sample was reintroduced into the bioreactor in order to avoid depressurization and losses of volatile compounds. Sampling from this gas flow was performed every 30 min through a 6-port automated sampling valve, and the sample was introduced into a gas chromatograph (IGC 121C, Intersmat, Belgium) equipped with a Porapak Q column (1 m × 0.32 cm). The samples were then dehydrated and dealcoholized, or dehydrated only, by a patented back-flush technique [31]. In this technique, three multiway electrovalves were used for automatic injection into the GC, column back-flush and automatic injection into the E-nose.

Ethanol

Ethanol was analyzed on-line by gas chromatography (IGC 121C, Intersmat, Belgium) with a flame ionization detector and a dehydration-dealcoholization system. The analytical column was a 1 m Porapak Q column operated at 180˚C. Nitrogen served as carrier gas at a flow rate of 18 ml/min. The ethanol calibration was carried out using standard ethanol solutions placed in the bioreactor and analyzed in the gas chromatograph-E-nose system under the operating culture conditions. This calibration was carried out before each fermentation batch.

Biomass

The cell mass concentration was determined by an optical sensor (653/BT65 model, Wedgewood Technology Inc., CA, USA) measuring the medium turbidity. A calibration curve was established beforehand in order to convert optical density into biomass concentration (g dry matter/l). This determination was conducted in order to verify the correct performance of the lactic fermentation [28].

Data Analysis

The software provided with the E-nose system was used to acquire and store the gas sensor array signals. From each sensor signal, the fractional difference was calculated as shown in Equation (1), where S_fd corresponds to the modified signal, S_max to the maximum sensor signal value and S_baseline to the baseline sensor signal value:

S_fd = (S_max − S_baseline) / S_baseline (1)

Each sensor signal was then auto-scaled (i.e., mean-centered and divided by its standard deviation for rescaling to unit variance) to obtain S_fdN. The maximum value of S_fdN for each sensor was used for PCA, to avoid the domination of high sensor responses in the data processing; relevant information contained in low sensor responses was thus taken into account in the multivariate analysis. PCA was carried out with the chemometric toolboxes of the software Matlab 6.5 (The MathWorks Inc., MA, USA).

Results and Discussion

Ethanol, lactate and optical density were monitored (Figure 1), and the results agree with those obtained previously by Calderon et al. (2001) [28]. A decrease in the growth rate was found after eight hours, probably due to lactate accumulation or substrate starvation.

Monitoring of Emissions of Volatile Compounds with Dehydration and Dealcoholization

Owing to the design of the GC-electronic nose, the GC system could be used for sample dealcoholization and/or dehydration prior to the measurements with the E-nose. In the second case, it suffices to switch the back-flush valve immediately after the release of water and before the elution of ethyl acetate, about 20 seconds after injection.
The signals delivered by the sensors were low and, in the principal component analysis, all data points were completely overlapped and unreadable (results not shown). Under our working conditions, after removing alcohol and water, the bioreactor headspace contained no measurable information. Nevertheless, some observations could be made. The signal intensities from the sensors increased during the first four hours (Figure 2). The concentration of organic volatile compounds was low, and during the exponential phase the production of volatile compounds appeared to stop. It is important to note that this time coincides with the total consumption of starch. Starch is a complex matrix able to retain volatile compounds; this non-specific interaction between aroma compounds and polysaccharides has been reported as a reduction in aroma compound volatility [32]. Authors have reported that polysaccharides form complexes with volatile compounds, whose volatility is thereby decreased, so a possible interference of starch with the detection of volatile compounds cannot be excluded. An increase in the electronic nose signals was first detected in parallel with starch consumption; after the starch was completely consumed, the electronic nose signals remained constant. This corresponds to the beginning of the exponential phase and to the ethanol production during the fermentation process. The effect of ethanol in decreasing volatility has also been reported. It is therefore possible that the volatility of the volatile compounds during this fermentation stage was masked by the ethanol production, as demonstrated by Ragazzo-Sanchez et al. [33].

Monitoring of Emissions of Volatile Compounds with Dehydration Only

Fermentation monitoring with dehydration only was performed on the total volatile compounds produced: samples were taken from the headspace and only dehydrated before electronic nose analysis. The results showed a net production of volatile compounds from the first hours of fermentation. The global concentration of volatile compounds became more important from 3 h until 14 h (Figure 3); after that, the signals delivered by the electronic nose remained constant.

The PCA showed a clear evolution of the sensor responses during the fermentation process (Figure 4). This particular curve shape is obtained when an electronic nose analyzes different concentrations of volatile compounds; the same trend was obtained during the analysis of reference solutions containing ethanol and ethyl acetate (results not shown). It is likely that principal component 1 corresponds to the variation of the ethanol concentration during the fermentation. The meaning of component 2 is more difficult to elucidate, but it could correspond to the variation of the ethyl acetate concentration (its production or its re-consumption).
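The PCA scores discussed above are obtained from the pre-processed sensor signals described in the Data Analysis section. A minimal sketch of that chain (fractional difference of Equation (1), auto-scaling, PCA) is given below; the response matrix and baseline values are illustrative, not the authors' data.

```python
# Sketch of the pre-processing chain used for the E-nose data:
# fractional difference per sensor, auto-scaling, then PCA.
import numpy as np
from sklearn.decomposition import PCA

# responses[i, j]: maximum signal of sensor j for sample i (illustrative values)
responses = np.random.default_rng(0).uniform(0.5, 3.0, size=(40, 18))
baseline = np.full(18, 0.5)                     # baseline signal of each sensor

s_fd = (responses - baseline) / baseline        # fractional difference, Eq. (1)
s_fdn = (s_fd - s_fd.mean(axis=0)) / s_fd.std(axis=0)   # auto-scaling to unit variance

pca = PCA(n_components=2)
scores = pca.fit_transform(s_fdn)               # PC1/PC2 scores for each sample
print("explained variance ratio:", pca.explained_variance_ratio_)
```

In a real run, each row of the response matrix would correspond to one 30-min headspace sample, so the PC1 scores can be plotted against fermentation time as in Figure 4.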
A multilinear regression between the sensor responses and the ethanol and ethyl acetate concentrations was conducted. The correlation obtained for these variables presented a good regression coefficient of 0.9, which was improved to 0.95 after selection of specific sensors (numbers 1, 3, 6 and 7). For each group an array is built on the new variables; unknown samples are projected onto the discriminant subspace and compared to each group via the associated array. This method is used to predict quantitative values, based on a calibration curve, correlated with quantitative variables characteristic of the analyzed sample (Figure 5). The correlation for ethyl acetate is less linear: a correlation coefficient of 0.7 was obtained, and 0.74 after optimization.

The largest variation registered corresponds to the ethanol concentration. The overall information given by the electronic nose can therefore be treated as a direct function of this variable, which is in turn related to the other fermentation variables. On the other hand, some sensors showed a typical trend related to microbial growth and to the production of metabolites during the fermentation process (Figure 3). Some studies on MOS sensors have reported response curves that were linear with headspace concentrations [34,35]; however, their non-linearity is documented by the manufacturer [36] and has been confirmed [37]. Besides, other authors have underlined the difficulty of distinguishing between different headspace concentrations of a volatile compound with electronic noses [38,39]. There are few studies in this direction; Ragazzo et al. [40] demonstrated the non-linear dependence of the sensor signals on the volatile compound concentration by analyzing a series of solutions of the same chemical at different dilutions. These authors proposed that several sensors were able to predict the concentration of ethyl acetate and hexanol at concentrations between 100 and 150 mg/L. The regression coefficients between sensor P30/1 and the optical density at 600 nm, the ethanol concentration and the lactate concentration, using a non-linear function, were 0.95, 0.96 and 0.98 respectively (Figure 6); similar behavior was observed with other sensors (Table 2). In all cases where prediction was possible, the regression coefficient for microbial growth was lower than those for ethanol and lactate. This could be due to the specific nature of each sensor: for ethanol and lactate the sensors are specific to these molecules, whereas for biomass detection the sensors respond to the global pool of volatile compounds produced by the bacterium.

Conclusions

It was possible to monitor the lactic fermentation by sampling the total headspace. The correlation between the ethanol concentration and the sensor signals presented a typical curve, and it was possible to determine the maximal growth rate. The prediction capability, together with the versatility of the system for on-line use, places the electronic nose coupled to gas chromatography in an advantageous position compared with conventional analytical systems.

Figure 4. PCA analysis, with prior dehydration, using the maximal responses of the MOS sensors during the lactic fermentation.

Figure 5. Correlation between ethanol concentrations (g·L−1) predicted from electronic nose analysis and real concentrations.
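As an illustration of the multilinear regression between selected sensor responses and the ethanol concentration described above, here is a minimal sketch; the sensor responses are simulated, and the choice of sensors 1, 3, 6 and 7 simply follows the selection reported in the text.

```python
# Sketch of the multilinear regression between selected sensor responses and
# ethanol concentration (illustrative data; the paper reports R ~ 0.95 after
# keeping sensors 1, 3, 6 and 7).
import numpy as np

rng = np.random.default_rng(1)
ethanol = np.linspace(0.0, 4.0, 30)                        # g/L over the fermentation
sensors = np.column_stack([ethanol * k + rng.normal(0, 0.1, 30)
                           for k in (0.8, 0.5, 1.1, 0.3)])  # "sensors 1, 3, 6, 7"

X = np.column_stack([np.ones_like(ethanol), sensors])      # add intercept
coef, *_ = np.linalg.lstsq(X, ethanol, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, ethanol)[0, 1]
print(f"multiple correlation coefficient R = {r:.3f}")
```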
Montserrat Calderon-Santoyo studied Biochemistry Engineering at Instituto Tecnológico de Morelia and obtained an MSc in Biochemistry Engineering at Instituto Tecnológico de Veracruz. She received a PhD in Food Sciences at the Université des Sciences et Techniques du Languedoc at Montpellier, France, with distinction in 2001. She is now a professor/researcher at Instituto Tecnológico de Tepic. In 2002, she was appointed member of the Mexican National Researchers System (SNI). She teaches in the MSc and PhD programs in Food Sciences and is responsible for the Food Microbiology laboratory at the Laboratorio Integral de Investigación en Alimentos (LIIA).

Pedro Ulises Bautista-Rosales studied Biochemistry Engineering (2004) and obtained an MSc in Food Science (2007) at Instituto Tecnológico de Tepic. He received a PhD in Use, Management and Preservation of Natural Resources at the Centro de Investigaciones Biológicas del Noroeste at La Paz, Baja California Sur, México, with distinction in May 2013.

Guadalupe Luna-Solano studied Chemical Engineering and postgraduate Food Science at the Instituto Tecnológico de Veracruz and received her PhD in 2003. Two years later, she joined the Chemical and Biochemical Department of the Instituto Tecnológico de Orizaba, Veracruz, as a researcher professor, in the same department where she teaches food engineering. In 2006, she was appointed member of the Mexican National Researchers System (SNI). The main focus of her work is the application of drying processes (lyophilization, osmotic drying, spray drying and fluidized bed drying) to different foods and to microorganisms such as yeasts and bacteria.

Charles Ghommidh is a professor at the Food Science Department of Polytech Montpellier (France). He received a PhD in Bioengineering at the Institut National
The Impact of Austerity Measures on Government Borrowing in GIIPS

The article investigates the effects of austerity measures on government debt in Greece, Ireland, Italy, Portugal and Spain (GIIPS) by employing a panel cointegration test on data from 1998 to 2014. The empirical analysis shows that the increase in the personal income tax rate did not result in a decrease in government debt. The interest rate and the wage, used as control variables, are also positively related to government debt levels. The results therefore suggest that the impact of austerity measures on government borrowing in GIIPS was positive, despite the expectations of certain economic agents.

Introduction

The USA faced the subprime crisis in 2007, which had worldwide effects by 2008. The European Union was affected by the subprime crisis via its banks. The European crisis that erupted in 2009 is called the 'sovereign debt' crisis because the banks' high debts, mainly to the private sector, became government debt after the banks were bailed out. The European sovereign debt crisis first began in Greece, triggered by the manipulation of government budget deficit figures, and spread to other Eurozone countries, especially Italy, Ireland, Portugal and Spain. The origins of the crisis are debated by scholars: some claim that it originated in the Eurozone's structural weakness (De Grauwe et al., 2015), some suggest that it started because of the systemic risk of banking (Black et al., 2016), and still others state that the failure resulted from the lack of fiscal coordination (Chamley, 2012). In any case, the crisis increased the debt of the GIIPS countries, as the graph below shows. The GIIPS countries experienced dramatic increases in their debt ratios during the crisis. Ireland's debt ratio was quite low before the subprime crisis in 2007 but began to increase drastically afterwards. The graph shows that Greece had the highest debt/GDP ratio after the crisis. It also shows that Ireland succeeded in decreasing its debt ratio after the crisis, in 2013, whereas the other countries could not; they failed to decrease their debt ratios after the end of the financial crisis.

After the eruption of the crisis, countries were forced to take measures to recover their economies by tightening their belts. Governments began to decrease expenditures and increase taxes. These measures are called 'austerity measures' (Note 1) and were applied by almost all European governments. Some countries applied strict austerity measures whereas others applied soft ones. For example, Germany, the UK and France applied austerity measures, but those measures were soft and partial, so they affected only a part of society. The austerity measures in the GIIPS countries, however, were hard and affected all parts of society. Greece made huge cuts in wages and pension expenditures after the economic crisis, and austerity measures became the core issue of Greek politics after 2010 because of public unrest. The Greek personal income tax rate was 40% in 2009 and, after a dramatic increase in 2010, it became 49%; the change was especially drastic in Greece. Ireland also increased its tax rate from 41% to 46% in the 2008-2009 period. These examples show that the GIIPS countries made drastic changes in their tax regimes during the economic crisis.

In this article we try to find the relationship between the tax rate increase and government borrowing in the GIIPS countries. To the best knowledge of the authors, the success of austerity measures in terms of their impact on government borrowing has not been analysed empirically so far.
The rest of the paper unfolds as follows. Section 2 presents the literature review, Section 3 explains the dataset and methodology, Section 4 presents the findings and discussion, and Section 5 concludes and makes policy recommendations.

Literature Review

Several studies have been carried out on the effects of tax rates on the economy. For example, Akhmetova (2012) examines whether there is any connection between value added tax (VAT) and consumption. She concludes that the level of aggregate consumption is strongly and negatively influenced by increases in the effective VAT rate: if the effective VAT rate increases by one percent, consumption decreases by 0.81 percent, while a one percent increase in personal income tax implies a 6.8 percent decrease in household final consumption expenditure. Maşca et al. (2015) analyse fiscal policy and growth in the EU with 27 countries, seeking to define the interaction between taxes, consumption, transfers and real GDP. They conclude that both total taxes and taxes on labour have a negative impact on growth in EU countries; according to the authors, total government expenditure also has a negative impact on growth.

Fiscal adjustments are useful tools to steer economic activity. Fidrmuc et al. (2015) analyse the short-term effects of fiscal adjustment, understood as changes in the cyclically adjusted primary balance, on economic activity. Their findings demonstrate that growth responds negatively to the cyclically adjusted primary balance but positively to its lagged change. They claim that fiscal adjustments (Note 2) have a negative effect on output in the short term, and they state that spending-based adjustments lead to smaller output losses than tax-based adjustments, which implies that spending-based policies are more useful for growth. Alesina et al. (2012) analyse the effects of tax-based and expenditure-based fiscal adjustments (changes in government revenue and expenditure) on output. Their analysis suggests that fiscal adjustments based on spending cuts or tax increases decrease output, but that spending-based adjustments are much less costly in terms of output losses. They reach the conclusion that tax-based adjustments are associated with deep and long recessions while expenditure-based adjustments are not. Alesina and Ardagna (2010) state that fiscal policies based on expenditure cuts are more effective than tax cuts in handling deficits and containing the debt ratio; on the other hand, according to their analysis, fiscal stimulus based on tax cuts is more effective in fighting recession.
The usefulness of austerity measures is also debated by scholars. Krugman (2010, 2012) states that austerity is not a solution for the European Union's economic crisis: expenditure cuts do not increase financial market confidence, which would help economic recovery. He asserts that if Europe imposes austerity measures it will hurt economic growth. Krugman (2012) states that, on the contrary, as in the case of the United States, the EU should increase expenditure for economic recovery, as suggested by Keynes, and he gives the example of Spain to clarify why austerity measures are not essential (Note 3). Jordà and Taylor (2015) compare the two sides of the austerity debate by using the 'local projection' (LP) method: expansionary austerity as defended by Alesina and Ardagna (2010) and contractionary austerity as defended by Guajardo (2010). Their analysis supports Guajardo's study. In addition, Jordà and Taylor (2015) find that fiscal contraction prolongs the pain when a country's economy is weak, but that the cost is smaller when the economy is strong. De Grauwe and Ji (2013) state that austerity programs suffer from a 'fallacy of composition' problem: if every country imposes an austerity program at the same time, it will be unsuccessful and will increase the cost of the program, especially for the periphery countries. They claim that the unsustainable debt regime of the southern debtor countries will not end for years. De Grauwe (2015) tries to find the relation between the primary budget balance, austerity and the interest rate. He notes that automatic stabilizers are seen as one of the successful mechanisms during crisis times: during a crisis, GDP decreases while government spending increases automatically, so the government debt to GDP ratio increases, and this automatic process moderates the depth of the crisis. De Grauwe (2015) concludes that panic-induced austerity measures weaken the effect of the automatic stabilizers in the government budgets of the Eurozone.

As explained above, the debt ratio is a key factor of the European economic crisis, and some scholars have analysed its effects. For example, Boussard et al. (2013) simulate the multiplier effect of fiscal consolidations on debt ratios, investigating how the fiscal adjustment multiplier affects the debt ratio in EU countries. Their model concludes that the multiplier effect is larger in crisis times than in normal times. Julio et al. (2015) study the relationship between fiscal adjustments (revenue-based and expenditure-based packages) and the debt ratio, GDP, inflation and the snowball effect in the Euro area by using a Dynamic Stochastic General Equilibrium (DSGE) model. Their findings suggest that a fiscal consolidation effort in times of financial crisis, aimed at bringing the public debt to GDP ratio down, is not effective; on the contrary, it increases output losses through the rise of the risk premium in the short term. Fiscal consolidation efforts may decrease the government debt to GDP ratio, but this results in large output losses and unfavourable budgetary and economic conditions. They point out that their findings cannot be generalized to the larger economies or to the whole Euro area, because the interest rate change and trade channels could be different in larger economies.

Even though there are some studies on the effects of fiscal adjustments on the economy, the lack of empirical analysis of the impact of austerity measures on government borrowing makes the present empirical analysis indispensable.
Methodology

In this study, data for Greece, Ireland, Italy, Portugal and Spain, referred to as the GIIPS countries, are used. These countries were selected because they applied austerity measures to decrease their budget deficits and debt. The study tries to find out whether austerity measures are useful tools for decreasing the government debt ratio in the GIIPS countries.

Annual data from 1998 to 2014 are used (Note 4). We collect government debt to GDP data from the IMF, the tax rates (Note 5) of the countries from the dataset of the European Commission, interest rates from Eurostat and wages (Note 6) from the OECD. The interest rate is the long-term interest rate on 10-year bonds, also referred to as the Maastricht bond rate.

In terms of methodology, we first employ the Im-Pesaran-Shin (IPS) (Im et al., 2003) and Levin-Lin-Chu (LLC) (Levin et al., 2002) panel unit root tests to detect whether the data are stationary or not. The null hypothesis of the IPS test is a unit root, which means that if we reject the null hypothesis the data are stationary, and if we fail to reject it the data contain a unit root.

The second step of our methodology is a panel cointegration test. The most widely applied cointegration test is the Pedroni test. Pedroni (1999) proposes seven panel cointegration statistics: four panel statistics and three group panel statistics are used to test the null hypothesis of no cointegration against the alternative hypothesis of cointegration. In the case of the panel statistics, the first-order autoregressive term is assumed to be the same across all the cross-sections, while in the case of the group panel statistics this parameter is allowed to vary over the cross-sections. The heterogeneous panel cointegration test advanced by Pedroni (1999, 2004) is performed on a panel in which t is the number of observations over time and N is the number of individuals. The seven tests suggested by Pedroni (1999) are the panel v-statistic, the panel ρ-statistic, the panel t-statistic (non-parametric), the panel t-statistic (parametric), the group ρ-statistic, the group t-statistic (non-parametric) and the group t-statistic (parametric). The null hypothesis of Pedroni (1999) is no cointegration, which means that if we reject the null hypothesis we may conclude that there is cointegration between government debt, the interest rate, the tax rate and the wage. The panel cointegration test indicates whether there is a long-run relationship between the variables.

As the third step we apply Fully Modified Ordinary Least Squares (FMOLS). Pedroni (1999) proposed the FMOLS estimator suggested by Phillips and Hansen (1990) to obtain estimates for a homogeneous cointegration vector. Under the null hypothesis of FMOLS there is a common value for the cointegrating vector, while under the alternative hypothesis the cointegrating vector need not be common. We use the FMOLS test to obtain the coefficients of the panel cointegration relationship, with β_ij denoting the FMOLS estimator.
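To give an idea of the first step of this procedure, the sketch below illustrates the logic behind the IPS test: an ADF regression is run for each country and the individual t-statistics are averaged. The full IPS statistic additionally standardizes this average with simulated moments, and the debt series used here are simulated, not the actual IMF data.

```python
# Rough sketch of the idea behind the IPS panel unit root test: run an ADF
# regression for each country and average the t-statistics.  The standardization
# of the average (the actual IPS statistic) is omitted.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
years = range(1998, 2015)
debt = pd.DataFrame({c: np.cumsum(rng.normal(2, 3, len(years)))   # illustrative series
                     for c in ["GRC", "IRL", "ITA", "PRT", "ESP"]}, index=years)

t_stats = [adfuller(debt[c], regression="c", autolag="AIC")[0] for c in debt]
print("per-country ADF t-statistics:", np.round(t_stats, 2))
print("IPS t-bar (average):", np.mean(t_stats).round(2))
```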
To analyse the effects of the austerity measures, we propose a model that includes government debt, the interest rate, the personal income tax rate and the wage. The panel cointegration test allows for cross-sectional interdependence with both different individual effects and deterministic trends, and the model can be written as

Govdebt_it = α_i + δ_i·t + β_1i·Int_it + β_2i·Taxrate_it + β_3i·Wage_it + e_it,

where i = 1, …, N represents the panel member, t = 1, …, T refers to the time period, Govdebt represents the total government debt to GDP ratio, Int the long-term interest rate, Taxrate the personal income tax rate and Wage the average wage of citizens in the GIIPS countries.

Results and Discussion

In this study we employ long-term interest rates, tax rates and wages as regressors of the GIIPS countries' government debt. We use the IPS and LLC unit root tests to analyse whether these series contain unit roots or are stationary. The results of the IPS and LLC unit root tests are shown in Table 1 below. We fail to reject the null hypothesis of a unit root, since the probability values given in brackets are higher than the 5% significance level. Having established that the series contain unit roots, we can employ the cointegration test. The results imply that the interest rate and the tax rate positively and significantly affect government debt in the GIIPS countries at the 95% confidence level. In line with our expectations, we find that increases in the interest rate automatically raise the level of government debt. Our second expectation was that a tax rate increase would lower government debt; however, in the GIIPS countries we find that personal income tax increases are associated with increases in government debt. The wage does not affect government debt at the 5% significance level, but it affects government debt positively at the 10% significance level. The results concerning wages are also consistent with our expectations, since governments may finance workers' wages with debt.

The tax increase has not decreased government debt, so we can say that the tax increase did not result in a decrease in government debt. Our analysis shows that Krugman's (2012) argument about the ineffectiveness of austerity measures as a solution for the economic crisis in Europe is, to a certain extent, confirmed empirically.

Conclusion and Policy Implications

Austerity measures affected people's daily lives in Europe, yet their effects on government debt levels have not so far been analysed. We analysed the effects of a personal income tax increase, an interest rate increase and a wage increase on the government debt to GDP ratio using a panel cointegration test. The empirical findings show that there is a positive relationship between tax, interest rate and wage increases and the increase in government debt. This means that the tax increase did not decrease the government debt level, as one might have predicted.

Fiscal measures, besides monetary policy, are important policy tools for overcoming an economic crisis; they concern the tax and expenditure regime. Despite the fact that the IMF and the ECB forced the GIIPS countries to apply hard austerity measures, our empirical analysis shows that increasing taxes and decreasing wages do not affect government debt negatively.

The results of our analysis suggest that the austerity measures should be revised if the indebtedness of the GIIPS countries is to be lowered. Sometimes fiscal adjustments do not end up with the intended results; the austerity measures applied by the GIIPS countries are a good example of this, as the tax increase did not cause a decrease in government debt.
Policy makers should take into consideration that the traditional policies applied by troubled countries do not always end up with good results. Our analysis shows that the application of austerity measures ended up with high government indebtedness. The results of the empirical analysis suggest that the GIIPS countries should not have increased their personal income taxes in the context of austerity measures. It seems that increasing personal income taxes is not an appropriate decision for lowering government debt, since austerity measures cause low growth, which in turn decreases the tax collected.

Figure 1. Debt to GDP ratio of the GIIPS countries.

Table 2 shows that we reject the null hypothesis of no cointegration, because five of the seven statistics are below the 5% significance level. This means that there is cointegration between the variables defined as GOVDEBT, INT, TAXRATE and WAGE. After detecting cointegration, we employ the FMOLS test in order to obtain the coefficients. The results of the FMOLS test are shown in Table 3 below.
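As a rough illustration of the estimation step behind a table of long-run coefficients such as Table 3, the sketch below estimates the relation of the model by simple pooled OLS on simulated data. FMOLS, as used in the paper, additionally corrects for endogeneity and serial correlation of the residuals; those corrections are omitted here, and the data are not the actual GIIPS series.

```python
# Sketch of estimating the long-run relation Govdebt ~ Int + Taxrate + Wage by
# pooled OLS.  FMOLS adds endogeneity/serial-correlation corrections not shown.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5 * 17                                     # 5 countries x 17 years (1998-2014)
df = pd.DataFrame({
    "int_rate": rng.uniform(1, 12, n),         # illustrative regressors
    "taxrate":  rng.uniform(35, 50, n),
    "wage":     rng.uniform(15, 40, n),
})
df["govdebt"] = (20 + 3.0 * df.int_rate + 1.2 * df.taxrate
                 + 0.4 * df.wage + rng.normal(0, 5, n))

X = sm.add_constant(df[["int_rate", "taxrate", "wage"]])
res = sm.OLS(df["govdebt"], X).fit()
print(res.params.round(2))                     # signs of the long-run coefficients
```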
A Movement-Assisted Deployment of Collaborating Autonomous Sensors for Indoor and Outdoor Environment Monitoring Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper. Introduction Progress in hardware and networking technologies enables large-scale deployment of collaborating sensing devices and the creation of modern data acquisition systems that can greatly enhance the capability to sense and control physical environments. The potential applications contain a comprehensive surrounding monitoring, unmanned space exploration, objects tracking, surveillance systems, etc. Indeed, recently, a tremendous interest can be seen in the design and development of wireless sensor networks (WSNs), i.e., dynamically-configurable, self-organizing computer networks composed of numerous smart, embedded sensing devices that communicate wirelessly sharing the same radio channel. Rising demand for the capabilities of sensing systems, the lack of fixed network infrastructure and the limited energy and computation resources of their components provoke a broad spectrum of hardware and software engineering challenges involving high quality and secure communication, localization, optimal deployment, energy-efficiency, self-operability, scalability and performance. To meet these needs, a variety of methods is used and implemented, which results in the development of novel communication protocols, data acquisition algorithms, localization schemes and deployment techniques. WSNs utilize an ad hoc networking, a new paradigm of communication where all wireless devices (network nodes) communicate with each other in a collaborative way to achieve a common goal, usually without central management. Nodes (data sources) transmit data to devices located within their transmission range. For communication with a base station (data sink) and nodes beyond the transmission range, the relay nodes are used. The lack of fixed network infrastructure components allows creating unique topologies and enables the dynamic adjustment of sensing devices' operation to the current requirements. However, to fulfill sensing tasks, a full coverage of a region of interest (ROI) and permanent connection between sensing devices and a base station are usually required. 
A weak deployment of sensors may result in sensing holes and losses of connectivity or the redundancy of coverage, which causes redundant data in the network and usually wastes energy resources. On the other side, the obligation of providing high coverage of ROI and network connectivity leads to the necessity in dealing with the network construction costs' trade-off, as aggressive consolidation may lead to performance degradation. Consequently, an optimal deployment of sensors has received significant attention in the recent years. A variety of approaches and techniques has been proposed and presented in the literature [1,2]. In order to meet the challenging goals of novel indoor and outdoor sensing systems, the mobility of sensors is often leveraged for the deployment [3][4][5][6]. Sensors can be mounted on mobile platforms, i.e., unmanned vehicles, mobile robots or drones, and moved to desirable positions. It is obvious that mobility implies an additional complexity layer. However, the complexity of WSN design, implementation and management is compensated by a number of benefits. Endowing network nodes with mobility drastically expands WSN capabilities. Mobility allows one to detect the lack of deployment objectives, improve coverage and communication connectivity, even decreasing the number of employed sensing devices. A large number of measurement targets can be handled with a smaller number of migrating sensors. Although numerous movement-assisted deployment strategies have been investigated and are described in the literature, the development of a scalable and robust technique for sensors' placement to achieve the optimal sensing coverage and ensure satisfactory communication connectivity is still a challenging task, especially in the case of an unknown and changing region of interest. The aim of this paper is to present the application of numerical optimization, specialized heuristics and computer simulation to movement-assisted deployment of sensors in a sensing area. We consider networks with self-configuring capabilities that comprise multiple autonomous mobile devices equipped with various detectors and radio transceivers. We have developed a family of algorithms for forming adaptive and coherent wireless sensor networks for monitoring purposes. Both pre-configuring and self-configuring topologies have been considered. In general, the proposed algorithms share similar goals with many other solutions described in the literature. We focus on the development of an optimal sensing network formed by mobile sensors for outdoor and indoor environment monitoring, gas cloud detecting and surroundings, human existence detecting in the case of a disaster, rescue action supporting, etc. The aim is to determine the minimum number of sensors needed to be placed to achieve an optimal sensing coverage, maintain communication with the base station and minimize the energy usage and time for carrying sensors. Furthermore, we assume that our sensing networks should dynamically adapt to changing environments and fully cover a region of interest while new events occur, without internal sensing holes. The novelty in our approach is the simulation-based computing scheme for the calculation of mobile sensors' displacement ensuring permanent connectivity with the base station. 
It employs numerical optimization to calculate optimal positions of sensors and the concepts of an artificial potential field and a particle-based mobility to calculate collision-free traveling patterns for mobile platforms carrying sensors. Multiple simulations were performed to show the effectiveness of the proposed deployment strategies considering WSN sensing coverage and connectivity. We assessed the quality of the presented deployment strategies based on the results obtained for outdoor and indoor regions of interest. The remainder of the article is organized as follows. The related work is reviewed in Section 2. A formal statement of a problem of sensing network design and the performance measures are provided in Section 3. The proposed deployment strategies and the model for motion trajectories' calculation are described in Section 4. The results of simulation studies are presented and discussed in Section 5. Finally, we conclude the paper in Section 6. Deployment Strategies in a WSN Effective deployment of sensing devices is a fundamental issue in implementing a multi-hop wireless sensor network that enables the monitoring of a bounded region of interest and transmitting measurements to the base station. The deployment model determines the types, numbers and positions of devices in order to create a powerful system fulfilling the requirements of a given application scenario. Various models have been developed and described in the literature. Their performance can be evaluated using multiple metrics, such as coverage, connectivity, transmission quality, cost, etc. A brief overview of the deployment strategies is presented in the following subsections. The attention is focused on models for WSNs composed of mobile sensors. Classification of Deployment Strategies The deployment strategies can generally be divided into: • random dropping, • stationary deployment, • movement-assisted deployment. Random dropping of sensing devices is a widely-used deployment strategy, especially in disaster areas, wilderness, harsh fields or toxic urban areas, where sensors cannot be deployed manually. In general, due to the uneven distribution, the unknown and uncontrolled location of each device and the significant risk of unpredictable node failure, it cannot be expected that sensors are placed in a desired way. Moreover, sensor devices can fail at runtime for various reasons, such as damaging events, hardware effects, power depletion and degrading coverage. Consequently, the final network cannot satisfy the requirements for connectivity, cooperation and coverage, no matter how many devices are dropped. Dropping sensors in groups and a multi-stage strategy, where sensors are deployed iteratively taking into account the quality of previous deployments, are proposed for coverage improvement [7][8][9]. However, in general, random dropping can be used at initiation, but is not always a sufficient solution. In the case of a known, bounded sensing space, it is possible to calculate the optimal locations of sensing devices to obtain a good coverage and connectivity network [10]. All spatial positions of sensors are a priori determined. The key problems are how many sensors are sufficient for covering the region of interest, how to ensure a coverage of good compactness and the satisfied communication connectivity in the case of a sufficient number of sensors. Various solutions are presented and discussed in the literature. The most popular approach is to create regular grid topologies. 
The common idea of other methods is formulating a linear or a nonlinear optimal covering problem solved by optimization solvers. Recently, in many applications, movement-assisted sensor placement is investigated. The lack of the fixed spatial positions of sensors allows creating unique topologies (optimal at a given time) and enables the network to be dynamic and flexible. Mobile sensors can be used to cover the region of interest and/or to improve the coverage by detecting and covering holes in a network, replacing failed nodes and restoring connectivity. However, for the movement-assisted deployment, several basic issues must be solved. The most important ones are: localization in the case of a dynamically-changing network topology, maintaining communication connectivity and energy conservation. The GPS systems or specialized location schemes [11][12][13] are required to track the spatial positions of nodes. To maintain permanent communication connectivity, each sensor has to remain within a transmission range of at least one other node, regardless of the moving strategy. Moreover, mobility consumes a considerable amount of energy. The motion trajectories should ensure fast target point achievement with minimum energy consumption. Another problem to solve is how to guide mobile devices to explore entire workspaces, avoiding obstacles and reaching target positions. A comprehensive study of existing mobility models can be found in the literature [14][15][16][17][18]. Many of the algorithms for motion trajectory calculation have been introduced in mobile robotics [19,20] and next adapted to wireless mobile ad hoc networks. Overview of Movement-Assisted Deployment Strategies In general, various movement-assisted strategies have been investigated for sensor placement. They can be classified with respect to various criteria, such as the expected network configuration, operation mode and computing organization, the hardware's capabilities, nodes' properties, such as the ability for autonomous operation, etc. Surveys of existing solutions can be found in the literature [2,3,21,22]. With regard to the hardware capabilities of devices that form a network and fitting all or selected sensors with a mobility platform, we can distinguish two deployment schemes: • sensors conveyed by single or several mobile platforms, • mobile sensors. In the first one, a single or several mobile platforms (robots or unmanned vehicles) carry sensors and move around a working scene. While traveling, they deploy sensors at desired positions; usually the vertices of a geographic grid. In the second strategy, it is assumed that each sensor has locomotion. Hence, a network comprises multiple autonomous mobile devices (mobile sensors) that can place themselves by changing their geographic location usually in a collaborative way. The advantage of this solution is the high flexibility: a network topology can easily adapt to a changing environment. Sensor self-deployment can take place after initial sensor dropping or a team of mobile sensors can migrate from the base to the target location. In general, sensor migration can be accomplished in a direct or shifted manner. With regard to the manner of sensor migration, we can distinguish: • one-stage deployment, • multi-stage deployment. In one-stage deployment (direct migration), the whole motion trajectory is built, and the sensor moves toward the target position. In multi-stage deployment (shifted migration), a group of sensors changes position simultaneously. 
A multi-hop motion pattern is calculated, and each sensor shifts its position by one hop toward the target location. With regard to the calculation scheme and deployment process organization, we can distinguish: • pre-defined deployment, • self-organizing deployment, • hybrid deployment. In pre-defined deployment, all spatial positions of sensors are a priori (off-line) determined by the central dispatcher and transmitted to the mobile sensors. Next, all sensors are forced to move, avoiding obstacles, in the advisable direction with adequate speed to reach the target positions. Finally, a pre-defined network topology is formed. In the simplest case, regular grid topologies are created. Various forms of such topologies have been considered and compared in the literature. The most popular one are presented in Figure 1. A comprehensive analysis of the widely-used triangular grid deployment is presented in [1]. Another approach is to calculate the optimal spatial positions of sensors taking into account the shape and characteristics of a sensing scene and phenomenon [10,23]. The covering optimization problem is formulated and solved by optimization solvers. Self-organizing deployment is usually one possible option when the environmental dynamics precludes off-line pre-configuration, and nodes have to self-configure to establish an optimal network topology that maintains the sensing and communication coverage under energy constraints [3,24]. Hence, sensors are not assigned to fixed spatial positions, but are capable of tracking a changing environment. In this approach, all sensors that possess a certain amount of decision making autonomy move, avoiding obstacles in a workspace, while forming a multi-hop network that enables the continuous communication with the base station. Moreover, they perform cooperative simultaneous localization and communicate over the network. This strategy has important advantages over a scheme with pre-configuration, including autonomous deployment and flexibility. However, the realization of these envisaged gains depends on the communication and coordination capabilities of the network system. Finally, to combine the advantages of pre-defined and self-organizing deployment, hybrid schemes have been developed. In these strategies, the deployment of sensors is composed of two (or more) stages. In the first stage, sensors are forced to move to advisable targets: nodes of the network determined by the central dispatcher. In the second stage, sensors are switched to the self-organizing deployment to adapt the network topology to the current snapshot of the sensing scene. Considering the distribution of the decision process, we can distinguish: • deployment with coordination. In the centralized approach, the locations of all network nodes and motion trajectories are off-line calculated by the central dispatcher and transferred to sensing devices. In such an approach, the base station has to possess global knowledge about the whole working space and sensing phenomenon. It cannot be applied to monitoring processes with fast dynamics. In distributed deployment, all sensors are autonomous agents that collaborate to create a network. Each node estimates its own position based on the local data gathered from its neighbors and creates the motion trajectory. In the last approach, a two-level decision structure is implemented. The base station coordinates the sensor placement by influencing the individual sensors' decisions. 
Problem Statement The aim of our research is to construct of a multi-hop wireless sensor network that enables sensing coverage of a bounded region of interest ROI ⊂ W and that intermittently transmits measurements to the base station. It is assumed that the network can operate in a disaster area, harsh fields, etc., and the sensing space ROI can move in W and change its shape. A WSN sensing system has to evolve and dynamically adapt its topology with respect to changes in the workspace. Hence, sensors cannot be deployed manually, and the movement-assisted deployment described in Section 2.1 is the only viable option. Application Scenarios Definition Consider a network WSN comprised by a set S D of N mobile devices D i (network nodes), [25,26] where: S L denotes the set of active direct connections between each pair of devices D i and D j , i = j. Each D i is equipped with the radio transceiver and the detector (or a set of detectors) and has locomotion. Hence, it is a solid body with any shape, which can move and rotate in a spatial domain W ⊂ 3 . Its position is described by a reference point c i = [x i , y i , z i ], which is the location of its antenna, and its orientation is defined by the following Equation [19]: where Q i denotes a quaternion k 1 , k 2 and k 3 the fundamental quaternion units. We define a set S O of obstacles in the workspace W that have to be avoided by mobile sensors. All obstacles are solid bodies with any shape, as well. Two types of obstacles are distinguished, i.e., S O = S M ∪ S S . A set S M consists of mobile obstacles O m , m = 1, . . . , M, and a set S S consists of static obstacles O s , s = 1, . . . , S. Three application scenarios described in Section 2.2 are considered. 1. Pre-defined deployment: each network node is forced to move in the advisable direction to reach its target location The target location can change over time. 2. Self-organizing deployment: all devices can move and organize themselves in an as large as possible network to cover the ROI taking into account their detectors and radio ranges. 3. Hybrid deployment: the combination of pre-defined and self-organizing strategies. In the aforementioned scenarios, a satisfactory connectivity among all working sets of sensors and the base station should be provided. While forming a network for permanent monitoring and transmit measurements to the base station, the reference distancesd ij between D i and other nodes (D j , j = 1, . . . , N, j = i) and the reference distanced ig to the target point g i should encourage the creation of a well-connected network with good coverage. The performance measures for a good coverage and connectivity network are defined in the following subsection. Performance Metrics Multiple metrics can be used to measure the performance of a movement-assisted sensor placement. It is not enough to observe only coverage. Referring to the literature [1,6,23,[27][28][29] and considering the results of our research, we have provided the following performance measures: connectivity, compact coverage, quality of transmission and cost. These are mainly caused by economic or technical constraints, such as hardware cost, low battery power and limited computation capabilities. Connectivity The fundamental characteristic of a wireless sensor network is connectivity. Direct or indirect connections between all sensors and the base station are expected. In general, the measurements of all isolated nodes are useless. 
The WSN that maintains the permanent connection with the base station is comprised of a set S C D of devices that form a topology satisfying the following condition: where p denotes an identifier of a transmission path from the device D i to the base station BS formed by a set P p i of K ordered nodes, such that D and r t l denote the distance between the node l and its successor in the path p and the transmission range of D p l , respectively. Sensors without direct or indirect connection with the base station form a set of isolated nodes, Hence, the number of isolated nodes C iso in WSN is equal to |S I D |. It is obvious that the lesser the number of isolated nodes, the better the connectivity; therefore, C iso should be minimized. Compact Coverage Compact coverage is the most important requirement of the sensors' deployment. In most applications, it is expected to cover as large a part of a given region of interest as possible. Therefore, the percent of ROI covered by a WSN can be adopted as the performance metric. It is calculated as follows: where ROI denotes a region of interest, D i the i-th sensing device (network node), S D a set of whole devices, S C D a subset of S D defined in Equation (5) and cov s i the sensing coverage of the i-th device. The coverage is related to the number of sensing devices and their hardware equipment. Sometimes, particularly in large ROI and poorly-equipped devices, problems with ensuring coverage of good compactness may occur. In such a situation, a question is how much of the ROI can be sensed. In the case of poor results, the only option is to increase a number of nodes in a network. Quality of Transmission In the case of many sensor networks based on contention-based MAC protocols (e.g., CSMA (Carrier Sense Multiple Access)), the quality of transmission is often decreased by an interference. It is obvious that the smaller the number of connections with neighboring nodes, i.e., the smaller node degree, the less interference is observed (see [29]). On the other hand, an insufficient number of connections with neighboring nodes endangers the network reliability, by limiting the number of alternative paths to the sink. In order to improve the quality of transmission, either scheduled-based MAC protocols, like time-synchronized channel hopping (TSCH) [30,31], or networks with an appropriate node degree should be considered. Therefore, in our paper we propose to keep the average network node degree C dim , on the low level, not-exceeding 10 (the best results of the experiments were collected for C dim ∼ 4 − 6). In the above equation L denotes a total number of active links in a whole network WSN and dim(D i ) the degree of the node i. Cost of Deployment The obligation of providing high coverage and connectivity leads to the necessity in dealing with the network construction costs' trade-off. We restricted our consideration only to the time of deployment and energy consumption, which are commonly listed costs of deployment. Moreover, we did not take into account costs concerned with the further activity of the created system, i.e., monitoring, tracking, etc. Robots' motion consumes the most energy and time. The energy used for wireless transmission is significantly smaller and can be ignored. Therefore, the path length of all mobile platforms C len from the starting points to the target points should be minimized. where C tim denotes the total time of a given WSN deployment, t (t = 1, . . . 
Collaborative Mobile Sensing Network Design As was stated in Section 3, the aim is to create an optimal architecture of a mobile sensing network for permanent monitoring of an indoor or outdoor ROI. In systems that exploit mobile sensors, the spatial positions of all sensors in a workspace have to be constantly updated. Therefore, an important component of the deployment system is the mobility model used to compute the paths along which the platforms carrying sensors can move to a desired destination. We have developed such a model; its preliminary version is described in [26]. In this section, we present and discuss the application of our mobility model to determine the collision-free motion trajectories for mobile sensors forming sensing systems for outdoor and indoor monitoring. Consider a WSN as defined in Section 3 that operates in a workspace W. The network is expected to cover the region of interest ROI ⊂ W, whose boundary is known, and to provide a consistent, high-quality sensing service. All measurements have to be continuously transmitted to the base station using multi-hop communication. In general, the radio and sensing coverage areas of each wireless sensing device are highly irregular because of interference, obstacles, etc. However, including all of these aspects in the communication and sensing models makes them extremely complex. Therefore, we model the radio coverage of D i by a disc cov t i of radius r t i and the sensing coverage cov s i by a disc of radius r s i , both centered at the reference point c i . Virtual Force Mobility Model We have developed a novel virtual force model for sensor displacement calculation in a collaborative and coherent WSN. The virtual force technique was originally invented and used in mobile robotics [19]. It is based on the concept of an artificial potential field and an artificial potential function. The artificial potential field can be viewed as a landscape in which a mobile device moves from a high-value state to a low-value state. The value of the artificial potential function V depends on the Euclidean distance d between the considered devices or objects. It is viewed as energy and is computed as a sum of repulsive and attractive potentials: the idea is that the target point attracts the mobile device, while an obstacle repels it. The virtual force is expressed by the gradient of V. Consider a WSN formed by N sensing devices D i , i = 1, . . . , N, acting in the workspace W. In general, the total potential between D i and all other objects in W, i.e., the other nodes and all obstacles, can be expressed as the sum of the component potentials V ij , V is , V im and V ig between the sensor D i and, respectively, the j-th sensor, the s-th static obstacle, the m-th mobile obstacle and the target point g i . The main difficulty in mobility models utilizing the concept of the artificial potential field is to construct a potential function that can be used for on-line computation of a new location of a moving device and that does not introduce oscillations or many local minima not corresponding to the target of the movement. Much research derives inspiration from classical and quantum mechanics.
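Based on the verbal description above, the displayed formula for the total potential (the paper's Equation (9)) presumably takes a form along the lines of

V_i = \sum_{j \neq i} V_{ij} + \sum_{s=1}^{S} V_{is} + \sum_{m=1}^{M} V_{im} + V_{ig},

where the successive terms correspond to the other sensors, the static obstacles, the mobile obstacles and the target point g_i; this is our reconstruction rather than the original typeset equation.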
We developed a simple artificial potential function V i drawing on the Lennard-Jones potential commonly used to model the interactions between a pair of neutral atoms or molecules in liquid crystals [32]. The preliminary version was provided in [26]; the extended one, presented in this paper, is used to model the interactions between a given i-th node and all other objects in the workspace W, i.e., the other network nodes and all obstacles. In this function (Equation (11)), non-negative weighting factors associated with the pairs (i, j), (i, s), (i, m) and (i, g) determine the importance of, respectively, the device j, the obstacles s and m and the target g. d ij , d is , d im and d ig are distances calculated based on current measurements, while d̂ ij , d̂ is , d̂ im and d̂ ig are the reference distances between c i and, respectively, c j , c s , c m and g i . d̂ ig , d̂ is and d̂ im are determined by the user, whereas d̂ ij is constantly updated according to current measurements. A sample V ig and its gradient (force) F ig = ∇V ig are depicted in Figure 3a,b. In general, all functions V ij , V is and V im , the other components of V i (Equation (9)), are similar in shape. In the reference positions of all D i (i = 1, . . . , N), i.e., d ig = d̂ ig , d ij = d̂ ij (j = 1, . . . , N), d is = d̂ is (s = 1, . . . , S) and d im = d̂ im (m = 1, . . . , M), we have an optimal network topology for a given application scenario (an unstable equilibrium with V i ≈ 0, i = 1, . . . , N). However, it is obvious that in each time step collisions of D i with only a few obstacles may occur. Therefore, the definition of the potential function (Equation (11)) can be simplified. Let us assume that at a given time t k only H i nearby obstacles (static and mobile) are located on the path to the target point g i , and let b h i be the point on the edge of the h-th obstacle (h = 1, . . . , H i ) with the shortest distance to c i . The potential function (Equation (11)) can then be substituted by a simplified one, in which the reference distance to a detected obstacle is equal to v i max · ∆t and d ih denotes the distance between c i and b h i estimated in the time step t k . Finally, we can calculate the expected position of the reference point c i of the device D i and of all vertices of its envelope, p i ∈ P i , by solving the optimization problem (13) under the constraints (14) and (15). The constraints specified in Equation (14) assure that the shape of each device will be preserved after displacement (Figure 2): due to the assumption that each network node is a solid body, the distances between the points of the set P i and the point c i remain constant. The constraint (15) determines the speed range. The optimization problem (13)-(15) has to be solved repetitively, every time step t k (t k+1 − t k = ∆t), for all network nodes D i , i = 1, . . . , N. After computing the new optimal locations, all nodes are moved to these locations while avoiding all nearby obstacles.
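A sketch of the displacement calculation is given below. Two simplifications are our own: the exact functional form of the paper's Equation (11) is not reproduced in this text, so a classic 12-6 Lennard-Jones shape with its minimum at the reference distance is used as a stand-in, and the constrained optimization (13)-(15) is replaced by a plain step along the negative numerical gradient of the total potential, which conveys the idea but is not the authors' exact procedure; a single reference distance per object class is also used instead of a per-pair value.

import numpy as np

def lj_potential(d, d_ref, weight):
    """Lennard-Jones-style pair potential with its minimum at d_ref: strongly repulsive
    for d < d_ref, weakly attractive for d > d_ref (12-6 exponents as a stand-in)."""
    r = d_ref / max(d, 1e-9)
    return weight * (r ** 12 - 2.0 * r ** 6)

def total_potential(p, neighbours, obstacle_points, target, d_ref, weight):
    """Sum of pair potentials with the neighbouring nodes, the closest points of nearby
    obstacles and the target point (the composition described in the text)."""
    v = sum(lj_potential(np.linalg.norm(p - np.asarray(c_j)), d_ref["node"], weight["node"])
            for c_j in neighbours)
    v += sum(lj_potential(np.linalg.norm(p - np.asarray(b_h)), d_ref["obstacle"], weight["obstacle"])
             for b_h in obstacle_points)
    if target is not None:
        v += lj_potential(np.linalg.norm(p - np.asarray(target)), d_ref["target"], weight["target"])
    return v

def virtual_force(c_i, neighbours, obstacle_points, target, d_ref, weight, h=1e-3):
    """Virtual force as the negative numerical (central-difference) gradient of V_i."""
    c_i = np.asarray(c_i, dtype=float)
    grad = np.zeros_like(c_i)
    for k in range(c_i.size):
        dp = np.zeros_like(c_i)
        dp[k] = h
        grad[k] = (total_potential(c_i + dp, neighbours, obstacle_points, target, d_ref, weight)
                   - total_potential(c_i - dp, neighbours, obstacle_points, target, d_ref, weight)) / (2 * h)
    return -grad

def displacement_step(c_i, force, v_max, dt):
    """Move along the virtual force, clipping the step to the admissible speed v_max."""
    c_i = np.asarray(c_i, dtype=float)
    step = np.asarray(force, dtype=float) * dt
    norm = np.linalg.norm(step)
    if norm > v_max * dt:
        step *= (v_max * dt) / norm
    return c_i + step

Because each pair in the paper has its own reference distance d̂, a per-neighbour value could be passed instead of the single per-class value used in this sketch.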
Algorithm for Collision-Free Traversing Pattern Calculation A virtual force algorithm was developed to calculate a collision-free traversing pattern from one WSN configuration to another. In general, the algorithm executed autonomously by each D i is composed of three main steps executed repetitively. Step 1: Estimation of the reference internode distances d̂ ij (i = 1, . . . , N, j = 1, . . . , N, i ≠ j) based on the current sensing and transmission ranges of all D i . Step 2: Calculation of the new position of D i by solving the optimization problem (13)-(15), with the performance measure equal to the value of the artificial potential function. Step 3: Relocation of D i to the determined position while avoiding all nearby obstacles. To estimate the reference distance d̂ ij between two neighboring nodes that ensures both coverage and connectivity, the current transmission range r t i and sensing range r s i have to be taken into consideration. The sensing range is usually specified for a given detector and the transmission range r t i for a given transceiver, but in many application scenarios they should be adjusted to the communication conditions and the working scene. The current internode distances d ij and reference distances d̂ ij can be calculated based on the known geographical locations of the nodes and on current measurements. In general, the measurements can be provided by various components of each device, e.g., radio transceivers, cameras, detectors, etc. In the case studies whose results are presented in this paper, a commonly-used radio signal propagation model and signal strength measurements based on RSSI (received signal strength indicator) were applied to estimate the current and reference internode distances. Namely, we used a log-distance path loss model for the estimation of signal degradation with the distance d ij [33] (Equation (16)), in which Pow t i denotes the power used by the i-th node to transmit the signal, Pow r j the power of the signal received by the receiver (the j-th node) and PL(d ij ) (path loss) the average degradation of the signal over the distance d ij , defined in Equation (17), where q is an attenuation constant indicating the rate at which the path loss increases with distance, d 0 denotes a reference distance (d 0 = 1 m for IEEE 802.15.4 [30]) and X σ a zero-mean Gaussian distributed random variable with standard deviation σ (all in dB). In practical scenarios, the internode distances and reference distances can be estimated using the model given by Equations (16) and (17). The least-squares method and RSSI measurements can be applied to calculate all parameters of this model, and the well-known Q-function of statistics described in [33] can be used to determine the probability that the received signal level will exceed the sensitivity of the transceiver. A detailed description of the internode distance estimation can be found in [12,26]. After the reference internode distances have been estimated, each D i computes its expected location in the workspace, taking into account these distances, the target destination (if one is prescribed) and possible collisions with obstacles in W. In the time steps t 1 , t 2 , . . . , t K , each i-th device calculates its new position in W by solving the optimization problem (13)-(15) and then moves to the designated location. The tangent bug algorithm described in [19] can be applied to avoid obstacles. The calculations are repeated every time interval ∆t to account for changes in the communication conditions and the workspace. A detailed description of all steps of the algorithm is provided in [26].
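The distance estimation just described can be illustrated as follows. The numerical values (transmit power, PL(d 0 ), attenuation constant q, shadowing σ) are placeholders rather than values from the paper, and the simple least-squares fit is only one way of calibrating the model parameters from RSSI measurements, in the spirit of the description above.

import math
import random

def path_loss_db(d, pl_d0=40.0, q=2.7, d0=1.0, sigma=0.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10*q*log10(d/d0) + X_sigma (all in dB)."""
    shadowing = random.gauss(0.0, sigma) if sigma > 0 else 0.0
    return pl_d0 + 10.0 * q * math.log10(d / d0) + shadowing

def rssi_dbm(tx_power_dbm, d, **model):
    """Received power as transmitted power minus the path loss over the distance d."""
    return tx_power_dbm - path_loss_db(d, **model)

def estimate_distance(tx_power_dbm, rssi, pl_d0=40.0, q=2.7, d0=1.0):
    """Invert the mean path-loss model to estimate the distance from one RSSI sample."""
    pl = tx_power_dbm - rssi
    return d0 * 10.0 ** ((pl - pl_d0) / (10.0 * q))

def fit_model(samples):
    """Least-squares fit of (PL(d0), q) from (distance, measured path loss) pairs,
    with d0 = 1 m, so that the regressor is 10*log10(d)."""
    xs = [10.0 * math.log10(d) for d, _ in samples]
    ys = [pl for _, pl in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    q = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    pl_d0 = my - q * mx
    return pl_d0, q

For example, with the placeholder parameters, estimate_distance(0.0, -75.0) returns roughly 20 m.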
Deployment Strategies and Computing Schemes We have developed algorithms for pre-defined and self-organizing deployment, both utilizing our mobility model to calculate the traversing patterns of the platforms carrying sensors. In general, in the pre-defined deployment the optimal positions of D i , i = 1, . . . , N in W are calculated explicitly off-line, while in the self-organizing deployment the target positions are calculated on-line and dynamically updated. In our deployment strategies, we have employed two computing and decision schemes: distributed, and two-level with a supervisor. In the first one, all calculations are performed by the network devices based on locally-available information. In the second one, a central dispatcher (base station) influences the decisions of these devices; the central dispatcher computes its decisions based on data about the workspace and the current states of all sensing devices. Pre-Defined Deployment A coherent sensing system with a triangular topology has been investigated. Taking into account the sensing and transmission ranges, the maximum distance d max ij between pairs of neighbors (the i-th and j-th) in a triangular topology that ensures both sensing coverage and connectivity is given by Equation (18). The two-level computing scheme can be employed to calculate the target positions of the mobile robots carrying sensors and the traversing patterns to these points. First, the base station explicitly calculates from Equation (18) the reference internode distances and then determines the target positions of all robots by solving an optimization problem in two variants: Variant A, without an a priori available map of W (obstacles are detected on-line during the motion), and Variant B, with a known map of W (full knowledge about all obstacles is provided); both variants are solved under the constraints (14) and (15). Note that in these objective functions the weighting factors of the target terms were taken equal to 1 for all i, g = 1, . . . , N. By manipulating the values of these factors, we can generate topologies with shapes other than the triangular one. Next, all robots are forced to move to the new positions while avoiding obstacles in W. The operations are repeated until the robots reach their target positions. Obviously, in the case of a working scene with numerous obstacles, the final topology can be irregular in some subareas. Self-Organizing Deployment In self-organizing deployment, all robots carrying sensors move, avoiding obstacles in the workspace, while forming a coherent multi-hop network that enables continuous communication with the base station and covers the region of interest. In this computing scheme, the robots are autonomous devices, and no central unit or supervisor influences their decisions; all calculations are fully distributed and performed on-line by the network nodes. First, the estimates of the reference distances d̂ ij , i, j = 1, . . . , N, i ≠ j, are calculated from Equation (17) based on the current geographical locations and the measurements of radio signal strength gathered from the node's neighbors, as well as with regard to the current communication conditions. To calculate these estimates, the nodes have to perform cooperative simultaneous localization and communicate over the network. Next, every time step t k , the i-th node solves an optimization problem and determines its desired location in the time step t k+1 . Similarly to the pre-defined deployment, two variants are considered, Variant A with an unknown map of W and Variant B with a known map of W, both under the constraints (14) and (15). Next, each robot is forced to move to its new position, avoiding collisions. Similarly to the pre-defined deployment, the optimization problem is solved every ∆t; however, in this deployment scheme the reference distances have to be updated with respect to changes in the workspace. The self-organization allows one to create topologies with various characteristics, dedicated to given application scenarios. Moreover, note that the final topology depends on the values of the weighting factors of the internode terms in the objective functions (21) and (22). For unit weighting factors, i, j = 1, . . .
, N, the optimal topology should provide the maximum possible coverage of a given ROI. Manipulating with weighting factors, we can generate topologies with various densities of nodes in subregions. This strategy has important advantages over a scheme with pre-configuration, including autonomous deployment and flexibility. However, the realization of these envisaged gains depends on the communication capabilities of the network system. Moreover, it can be inefficient in the case of workspace with narrow passages and multiple densely-deployed obstacles. Hybrid Deployment This strategy combines the advantages of pre-defined and self-organizing deployment. Two variants of hybrid deployment have been developed and investigated. In both of them, the scheme for computing the optimal placement of sensing devices was divided into two stages. Hybrid 1: concentration of nodes in a given subregion and switching into self-organizing deployment. Hybrid 2: improvement of combining pre-defined deployment using the self-organizing capabilities of sensing devices. In the first variant, all robots are forced to move to a neighborhood of one or several points in a workspace determined by the supervisor. Next, they are switched to the self-organizing deployment. Such an approach can be a good solution in the case of narrow passages in a workspace. In the second variant, robots carrying sensors are forced to move to advisable targets: nodes of the grid determined by the base station. Next, they are switched to the self-organizing deployment. Thus, the regular topology adapts to the working scene. Such a deployment scheme can be especially useful in the case of multiple irregular obstacles in a workspace. Simulation Setup A series of simulation experiments was conducted to validate and demonstrate the capabilities of all presented deployment strategies. In the last few decades, numerous software tools for ad hoc network simulation have been developed. The popular commercial and publicly-released simulators (e.g., ns-3 [34], TOSSIM [35], Cooja [36]) focus on the simulation of wire and wireless transmission. Tools for mobile robot motion simulation are provided by simulation environments for mobile robotics (e.g., V-Rep [37]). However, they offer a limited number of mobility models. Therefore, for the sake of results comparability, the formation of two-dimensional mobile wireless sensor networks for indoor and outdoor monitoring was implemented in the form of simulators developed using MobAsim [38], our tool for fast prototyping and simulation of ad hoc networks. The tasks for a team of mobile robots carrying sensors were to create high coverage and connectivity wireless sensor networks and minimize the cost of these networks' construction. Results were compared with respect to the criteria defined in Section 3.2. The collision-free motion patterns for all robots were calculated due to the mobility model described in Section 4.1. Two scenarios were considered, i.e., Variant A, the option without the a priori available map of ROI, and Variant B, the option with the known map of ROI. In Variant A, all obstacles were identified on-line by robots, while in Variant B, each robot used a map of terrain and obstacles for motion trajectory calculation. Scenario 1: Indoor Environment Monitoring The first case study was to design and develop wireless sensor networks for indoor environment monitoring. 
The aim was to cover the arrival hall (90 m × 90 m) at the airport by sensing devices and ensure permanent multi-hop communication between each device and the base station. Wireless networks with several configurations, composed of 22 mobile robots equipped with sensors and moving with a speed v ∈ [0.2 m/s, 2 m/s], were created for sensing this hall. All robots were modeled by circles with the reference points c i , i = 1, . . . , 22 at their centers. The initial positions of all robots were in the bottom right corner of the hall; see Figure 4. The robots communicated wirelessly sharing the same radio channel; the transmission range of all network devices was equal to r t = 27.6 m, while the sensing range was equal to r s = 10 m. Taking into account the size of the workspace and the limitations of sensing devices and radio transceivers the, maximum possible sensing coverage C cov of the obstacle-free ROI was equal to 85%. The values of the metrics defined in Section 3.2 and calculated for all developed sensing networks are collected in Table 1 and Figure 5. The results of the simulation of the 300 s of the network formation process exploiting pre-defined, self-organizing and hybrid deployment strategies with the known and the unknown map of ROI are presented in Figure 6a-f. The first series of experiments was to create triangular grid topologies. In this scenario, the final exact positions of all sensors were calculated off-line by the central dispatcher in the base station and distributed to all robots. Next, the robots were forced to move to these positions. In all experiments, high connectivity networks were obtained; Figure 6a,b. However, in the case of the variant with the unknown map of ROI, the obstacles forced some changes in the established regular topology and influenced both the coverage and cost of the network. The final coverage was equal to 65%; the total path length was equal to 1660 m; the time of the network deployment was equal to 103 s. Providing the map of ROI seriously improved the quality of the sensing network (Figure 6b). The coverage increased to 73%, and the total path length was reduced to 1475 m. Next, the self-organizing deployment was tested. The robots carrying sensors were free to move, communicate and organize themselves. The goal was to form a high quality network. Figure 6c illustrates the network topologies obtained after 300 s of the network formation process in the case of the unknown map of ROI. Unfortunately, providing the map of ROI only slightly improved the quality of the sensing network. It can be seen that the performance metrics calculated for the self-organizing deployment are worse than the metrics obtained for the pre-defined deployment. The coverage decreased by 13% in the case of the unknown map. It was observed that deterioration in the final deployment was mainly due to the narrow passage in the workspace considered. Therefore, to improve the quality of the sensing network, the two-phase deployment (Hybrid 1) was used. The result of the application of this deployment strategy for a workspace with known obstacles is presented in Figure 6d. In the first phase, the intermediate target position was calculated by the central dispatcher and broadcast to all other devices. The robots were forced to move to reach the neighborhood of this position. Next, they freely organized themselves and formed the final network. The application of such a strategy allowed improving the final coverage to 71%. 
However, the final coverage was still worse than that obtained by employing the pre-defined scheme for network topology calculation, although the time of deployment was considerably reduced. Finally, the Hybrid 2 deployment strategy was evaluated through simulation. The sensing system was built in two phases: in the first phase, the triangular grid topology network was created using the pre-defined deployment strategy; in the second phase, the self-organizing capabilities of the network nodes were applied to adapt the topology to the ROI. The results of the simulation of 300 s of the network formation process are presented in Figure 6e,f. By adding self-organization in the final stage of the network formation process, we increased the coverage to 70% (Variant A) and 74% (Variant B). Obviously, the total path length and time of deployment increased. In the last series of experiments, the results of which are depicted in Figure 7a,b, the network topologies were created under the assumption that passages for people and goods had to be kept free from sensors. The Hybrid 2 deployment strategy was used to create the monitoring network. Scenario 2: Outdoor Environment Monitoring The aim of the second series of tests was to form a wireless sensor network for monitoring a gas pipeline system comprising underground and ground infrastructure, i.e., pipes and four metering stations. The system was located in a square area (70 m × 70 m); Figure 8. The sensing area (ROI) was divided into six subregions; Figure 9a. A set S D of 20 mobile robots equipped with sensors and moving with a speed v ∈ [0.2 m/s, 2 m/s] was used to monitor this gas pipeline system. Similarly to Scenario 1, all robots were modeled by circles with the reference points c i , i = 1, . . . , 20 at their centers. The set S D was divided into six teams S w D , S D = ∪ w=1,...,6 S w D : four consisting of three robots and two consisting of four robots, each team monitoring one subregion. The Hybrid 2 deployment strategy was employed to create the sensing system, and the network formation process was evaluated through simulation. The transmission range of each transceiver was equal to r t = 27.6 m, and the sensing range was equal to r s = 10 m. First, the centers of all sections g w , w = 1, . . . , 6 were determined by the central dispatcher based on the map of the ROI, and the reference distances d̂ iw for all robots from each team S w D were calculated according to Equation (18). Next, the target positions for all robots from all teams S w D , w = 1, . . . , 6, were computed by the central dispatcher by solving the optimization problem (23) for d̂ iw = 15 under the constraints (14) and (15); in this formulation, d iw denotes the distance between c i and g w . The solution of the optimization (23), i.e., a vector of target points, was forwarded to the robot teams. Robots from each team were forced to move to reach the neighborhood of their target positions. Next, all robots switched to the self-organizing deployment strategy, and new target positions were calculated repetitively: a further optimization problem, subject to the constraints (14) and (15), was solved autonomously by each robot from a team w. The final coverage of the pipeline system under monitoring was equal to 98%, and the average network node degree was equal to 4.24. All metering stations were under monitoring. The total path length of all robots was equal to 12,635.78 m, and the time of the network deployment was equal to 301 s.
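Both scenarios follow the same two-stage pattern: a central dispatcher computes target positions off-line (for Scenario 1 a triangular grid based on Equation (18), for Scenario 2 the team centres g w ), the robots move towards them, and then switch to self-organization. The sketch below is a schematic reading of that pattern rather than the authors' implementation: since Equation (18) is not reproduced in this text, the grid spacing uses the relation min(√3·r s , r t ) commonly quoted as sufficient for joint coverage and connectivity on a triangular lattice (about 17.3 m for r s = 10 m and r t = 27.6 m), and the switching radius, speed limit and function names are illustrative. Any force or optimization step, for example a wrapper around the virtual-force sketch given earlier, can be supplied through force_fn.

import math
import numpy as np

def triangular_grid(width, height, r_sense, r_tx):
    """Candidate target positions on a triangular lattice covering a width x height area;
    the dispatcher would assign the available robots to a subset of these points."""
    d = min(math.sqrt(3.0) * r_sense, r_tx)      # assumed spacing rule, see the note above
    row_height = d * math.sqrt(3.0) / 2.0
    points, row, y = [], 0, 0.0
    while y <= height:
        x = (d / 2.0) if row % 2 else 0.0        # every second row shifted by half the spacing
        while x <= width:
            points.append((x, y))
            x += d
        y += row_height
        row += 1
    return points

PHASE_GOTO, PHASE_SELF_ORGANIZE = "goto", "self-organize"

class HybridRobot:
    """Phase 1: head for the dispatcher-assigned target; phase 2: once within
    switch_radius of it, refine the position with a distributed force/optimization step."""

    def __init__(self, position, team_target, switch_radius=15.0, v_max=2.0):
        self.c = np.asarray(position, dtype=float)
        self.g = np.asarray(team_target, dtype=float)
        self.switch_radius = switch_radius
        self.v_max = v_max
        self.phase = PHASE_GOTO

    def step(self, dt, neighbours, obstacles, force_fn):
        if self.phase == PHASE_GOTO:
            to_target = self.g - self.c
            dist = np.linalg.norm(to_target)
            if dist > self.switch_radius:
                self.c = self.c + to_target / dist * min(self.v_max * dt, dist)
                return self.c
            self.phase = PHASE_SELF_ORGANIZE     # reached the neighbourhood: switch phase
        # Phase 2: local adjustment; collision avoidance (e.g. a tangent-bug planner)
        # is assumed to be handled by the motion layer or inside force_fn.
        force = force_fn(self.c, neighbours, obstacles, self.g)
        step = np.asarray(force, dtype=float) * dt
        norm = np.linalg.norm(step)
        if norm > self.v_max * dt:
            step *= (self.v_max * dt) / norm
        self.c = self.c + step
        return self.c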
It is worth mentioning that the case study results presented in this paper are limited to the design of coherent and compact networks for monitoring. The proposed technique for the design of a collaborating mobile network employing our mobility model can be successfully applied to other scenarios. Numerous simulation experiments starting from simple to more complex wireless ad hoc networks topologies were conducted. Some of them are described and discussed in [26]. Conclusions This paper has provided a short overview of wireless sensors' deployment and coverage to monitor indoor and outdoor environment. We outlined the main properties and criteria that should be considered while creating the optimal network topologies for mobile sensing. Moreover, the paper has summarized the results of our research concerned with the application of optimization and simulation technologies to the on-line deployment of mobile autonomous wireless sensors for monitoring purposes. A family of algorithms utilizing pre-defined and self-organizing computing schemes for designing sensing network topologies maintaining permanent connectivity with the central dispatcher were developed and compared. They employ a novel mobility model for the optimal sensors' displacement calculation. The obtained results confirmed that pre-defined deployment is the best solution for the construction of a network for known, obstacle-free workspace monitoring. In the case of more realistic scenarios, when the aim is to monitor a scene with on-line-detected obstacles and moving objects, the hybrid strategy combining both pre-defined and self-organizing deployment is recommended. Including autonomous deployment and self-organizing capabilities of network nodes can improve the quality of the sensing topology, particularly in the case of an unknown map of the workspace considered. The main advantages of self-organizing and hybrid deployments are their flexibility and robustness. The sensing network may easily adapt to a given application scenario, especially in the case of a workspace with narrow passages, numerous unknown obstacles and dynamic changes in the sensing environment. Moreover, the network topology can be quickly modified as needed. Unfortunately, distributed self-organizing deployment involves high communication and computational costs and then a high energy consumption. In many practical applications, the limited resources of real-life networks comprised of simple devices can constrain us to use low complexity deployment strategies. Although numerous deployment methods and algorithms have been proposed and described in the literature, the development of a robust and scalable technique for the optimal placement of sensors in a working space with high sensing coverage, minimal hardware cost and computational burdens is still a challenging problem. The presented strategies for mobile sensing network design have been already verified and evaluated through extensive simulations. Moreover, they will be validated based on the experiments carried out in the testbed networks. Currently, we are working on the prototype ad hoc network for supporting emergency situation awareness and rescue actions composed of mobile devices equipped with the radio transceiver nRF51822 manufactured by Nordic Semiconductors. The self-organizing and hybrid deployment strategies described and evaluated in this paper will be used to calculate the evolving positions of the devices. 
The next application considered is the implementation of a mobile monitoring system in the laboratory in cooperation with the Robotic Group at Warsaw University of Technology or in one of the experimental platforms (e.g., FIT IoT-LAB [39]). The system will be created from multiple robots carrying sensors and following motion trajectories calculated according to the algorithms described in this paper.
2016-09-21T08:51:56.807Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "ff605fb13fec14b5c130b65f79bf421bbf7e6795", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/16/9/1497/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff605fb13fec14b5c130b65f79bf421bbf7e6795", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science", "Medicine" ] }
22538251
pes2o/s2orc
v3-fos-license
Lymphatic vascular morphogenesis in development, physiology, and disease The lymphatic vasculature constitutes a highly specialized part of the vascular system that is essential for the maintenance of interstitial fluid balance, uptake of dietary fat, and immune response. Recently, there has been an increased awareness of the importance of lymphatic vessels in many common pathological conditions, such as tumor cell dissemination and chronic inflammation. Studies of embryonic development and genetically engineered animal models coupled with the discovery of mutations underlying human lymphedema syndromes have contributed to our understanding of mechanisms regulating normal and pathological lymphatic morphogenesis. It is now crucial to use this knowledge for the development of novel therapies for human diseases. Introduction The lymphatic vascular system serves key physiological func tions: it maintains fluid homeostasis by absorbing water and macromolecules from the interstitium, enables uptake of dietary lipids and vitamins in the intestine, and serves as a trafficking route for immune cells. The lymphatic vasculature consists of a highly branched network of capillaries and ducts that is present in most organs with the exception of the central nervous system and avascular tissues, such as cartilage. Unlike the blood vas culature, the lymphatic vasculature is blind ending (Fig. 1 A): its small capillaries funnel first into precollecting and larger col lecting vessels and then into the thoracic duct or the right lym phatic trunk, which drains lymph into the subclavian veins. Malfunctioning of the lymphatic vasculature results in lymphedema formation and compromises immune function. In the past decade, tremendous progress has been achieved in understanding the mechanisms regulating the morphogenesis of lymphatic vasculature, mainly accomplished by genetically modified mouse models and discovery of mutations respon sible for human lymphedema syndromes. In addition, models, such as zebrafish and frog tadpoles, are emerging as powerful tools for studying lymphatic vascular development. In this re view, we will summarize the main mechanisms underlying the development of lymphatic vasculature and present an overview of several human diseases that are associated with lymphatic vessel abnormalities. Mechanisms of lymph transport The structure of the different lymphatic vascular compartments, such as capillaries, precollecting, and collecting lymphatic ves sels, reflects its dual role in fluid absorption and lymph trans port. We will briefly present the main aspects of lymph transport, which have been documented in more detail in recent reviews (Dejana et al., 2009;Zawieja, 2009). Fluid and cell uptake by lymphatic capillaries. Lymphatic capillary endothelium has a unique junctional orga nization (Baluk et al., 2007;Dejana et al., 2009). Oak leaf-shaped endothelial cells are connected by discontinuous buttonlike junctions. Free overlapping cell edges anchored on each side by these junctions form "flap valves" (Fig. 1, B and C) through which fluid flows unidirectionally along pressure gradients from the interstitium into the capillary lumen. Actively sprouting lymphatic capillaries have continuous cell-cell junctions, sug gesting buttonlike junctions as characteristics of quiescent and functional lymphatic capillary endothelium (Baluk et al., 2007). 
Lymphatic capillaries lack mural cells and connect to the ECM via anchoring filaments (Leak and Burke, 1968), which prevent the collapse of capillaries upon the increase of interstitial pressure (Fig. 1 B). Shear stress generated by transcapillary fluid flow regulates the expression of junctional proteins, upregulates the leukocyte adhesion molecules ICAM1 and E-selectin, and promotes secretion of the chemokine CCL21, mediating dendritic cell migration (Miteva et al., 2010). Thus, mechanical stimulation may be important for the immune surveillance function of the lymphatic vasculature. Dendritic cells first squeeze through pores that punctuate the sparse basement membrane of lymphatic capillaries and, subsequently, reach the lumen through interendothelial flap valves (Fig. 1 B; Pflicke and Sixt, 2009). They are then transported toward the draining lymph nodes where they induce immune responses. [Figure 1 legend: (A) The lymphatic vasculature resorbs fluid, macromolecules, and cells from the interstitium. (B) Mechanism of lymph formation in capillaries. Interstitial components penetrate lymphatic capillaries via openings between LECs. The specialized structure of such openings prevents the return of lymph back to the interstitium. Anchoring filaments attach LECs to the ECM and prevent vessel collapse under conditions of increased interstitial pressure (black arrow). (C) Junctional organization of LECs in lymphatic capillaries and collecting vessels. Both "buttons" and "zippers" share a repertoire of adherens and tight junction-associated proteins (e.g., VE-cadherin, zonula occludens-1, occludin, and claudin-5). The main difference between them resides in their organization (Baluk et al., 2007). (D) Mechanism of lymph propulsion in collecting vessels. Coordinated opening and closure of lymphatic valves is important for efficient lymph transport. SMCs covering each lymphangion possess intrinsic contractile activity. EC, endothelial cell.] Lymph hearts located at the junctions of the lymphatic and venous systems control lymph flow (Kampmeier, 1969). Lymphatic vascular morphogenesis Lymphatic vascular development requires transdifferentiation of venous endothelial cells toward the lymphatic endothelial phenotype, separation of the blood and lymphatic vasculature, sprouting of lymphatic vessels, and lymphatic vascular maturation (Fig. 2 A). Over 20 genes orchestrate these processes in mice (Table I), and recently, lymphangiogenesis has also been examined in lower vertebrates, such as fish and frogs. Establishment of LEC identity and lymphatic sprouting. Lymphatic vessels stem from preexisting blood vessels. Elegant lineage tracing by Srinivasan et al. (2007) demonstrated the venous origin of the mammalian lymphatic vasculature as previously proposed (Sabin, 1902; Kaipainen et al., 1995).
The venous origin of LECs has been confirmed in Xenopus laevis and zebrafish as shown by realtime imaging in the latter and, therefore, appears to be evolutionary conserved (Ny et al., 2005;Yaniv et al., 2006;Hogan et al., 2009a). Lymphatic valves contain two semilunar leaflets, which are covered on both sides by a specialized endothelium an chored to the ECM core (Lauweryns and Boussauw, 1973). High lymph pressure upstream of a valve opens the valve and enables lymph flow, whereas reverse flow pushes the leaflets against each other and closes the valve (Fig. 1 D). Therefore, opening and closing of the valve depend on periodic changes in fluid pressure within collecting vessels. The number of valves per vessel segment varies depending on tissue type, being gen erally highest in organs with high hydrostatic pressure, e.g., legs in humans (Földi et al., 2006). Lymphangiogenesis Vegf-c-Vegfr-3 pathway Vegfr3 Receptor tyrosine kinase Hypoplasia, chylous ascites (+/Chy, ENU-induced mutation, loss of tyrosine kinase activity; Karkkainen et al., 2001) Interestingly, postnatal development of lymphatic vessels in or gans other than skin is Vegfc/Vegfr3 independent, and internal lymphatic capillaries regrow in mice with mutated Vegfr3 or upon Vegfc depletion (Karkkainen et al., 2001;Mäkinen et al., 2001;Kärpänen et al., 2006b). In zebrafish, the secreted protein Ccbe1 controls lymphatic sprouting from veins, and its function is conserved, as CCBE1 mu tations cause human syndrome presenting with lymphatic dysplasia (Alders et al., 2009;Hogan et al., 2009a; see Heredity lymphedema syndromes). The venous origin of LECs and conserved function of VEGFC, VEGFR3, and CCBE1 (Karkkainen et al., 2000(Karkkainen et al., , 2003Ny et al., 2005;Küchler et al., 2006;Yaniv et al., 2006;Alders et al., 2009;Hogan et al., 2009a,b) clearly underpin the common origin of the vertebrate lymphatic vasculature. Nevertheless, within this common scheme, there seem to be differences between mam malian and zebrafish LEC behavior: in mice, lymphatic sprouting occurs after veins have formed, whereas zebrafish venous sprouts and lymphatic precursors emerge from the cardinal vein simul taneously . Half of those venous sprouts connect with intersegmental vessels to form veins, whereas other sprouts disconnect from the vein and migrate toward the horizon tal myoseptum region, constituting a pool of future LECs. These cells, called parachordal lymphangioblasts, migrate along arteries either dorsally to form intersegmental lymphatic vessels or ven trally to form the thoracic duct Geudens et al., 2010). At 5 d after fertilization, a functional lymphatic sys tem has been established in the zebrafish trunk capable of taking up substances from the interstitium and of transporting lymph into the venous system (Küchler et al., 2006;Yaniv et al., 2006). Future studies will have to show whether the requirement for arteries in guiding LEC migration is a zebrafishspecific feature or whether this represents a general scheme among vertebrates: the anatomical proximity of mammalian arteries and lymphatic vessels has often been noted but commonly attributed to high arterial pressure and a need of absorbing extravasated water and proteins near arteries. Using zebrafish, a role for Notch/Dll4 signaling has been demon strated in guiding LECs along arteries , and there might be earlier roles for Notch at the level of venous sprout ing (Liao et al., 2010). 
Interestingly, loss of the arterial regulator synectin also compromises the development of zebrafish lymphat ics . Hematopoietic cells and lymphatic vascular development. In mammals, lymphatic and blood vascula tures are connected only in a few defined locations where lymph is returned back to blood circulation. Platelets are important for keeping both vascular systems apart (Table I): platelet deple tion or defective platelet aggregation leads to abnormal lympho venous connections and bloodfilled lymphatic vessels (Ichise et al., 2009;Bertozzi et al., 2010;Carramolino et al., 2010;Suzuki Inoue et al., 2010;Uhrin et al., 2010). According to the current model, platelets aggregate at sites of communication between the cardinal vein and lymph sacs and "seal off" lymphatic vessels from the vein (Fig. 2, A and D). Platelet aggregation is initiated by binding of the Oglycosylated mucoprotein podoplanin expressed on LECs to the Clec2 receptor on platelets (Bertozzi et al., 2010;Uhrin et al., 2010). Clec2 further induces intracellular signaling In mice, LECs are first specified in the anterior cardinal vein around embryonic day 9.5 (E9.5) when a subset of venous endo thelial cells expresses the transcription factor Prox1 and the lym phatic vessel hyaluronan receptor1 (LYVE1) in a polar manner (Fig. 2 B). Prox1 / mice do not develop any lymphatic structures because of failed budding and sprouting of LECs . The transcription factor Sox18 induces Prox1 ex pression, and Sox18 / mice develop edema caused by blockage of LEC development in the vein in certain genetic backgrounds (François et al., 2008). In vitro studies demonstrate SOX18 bind ing to the Prox1 promoter and show that PROX1 can confer lym phatic identity to blood endothelial cells (BECs; Hong et al., 2002;Petrova et al., 2002;François et al., 2008). Thus, Sox18 and Prox1 constitute an essential signaling axis for LEC specification. The nuclear receptor CoupTFII (Lin et al., 2010;Srinivasan et al., 2010) has an earlier developmental role as a venous identity factor, but it also directly interacts with Prox1 (Lee et al., 2009;Yamazaki et al., 2009) and regulates the expression of LECspecific genes, such as neuropilin2 (Nrp2; Lin et al., 2010). Prox1/LYVE1-positive cells bud and migrate dorsolater ally from the central veins. They subsequently form the first bona fide lymphatic structures (jugular lymph sacs) in regions where lymphangiogenic growth factor Vegfc is provided by the lateral mesoderm (Fig. 2 A; Karkkainen et al., 2003). This process occurs at several positions along the anterior-posterior axis of the early embryo and results in the formation of jugular, medial, and axial lymph sacs, which further give rise to a primary capillary plexus (Sabin, 1902). Vegfc is critical in the process: Vegfc / mice lack all lymphatic vasculature, and even Vegfc +/ displays lymphatic hypoplasia (Karkkainen et al., 2003). The sprouting response of LECs to VEGFC is mediated by the receptor tyro sine kinase VEGFR3 and its nonsignaling transmembrane co receptor Nrp2 (Fig. 2 C). Nrp2 is highly expressed in lymphatic capillaries and becomes internalized together with VEGFR3 upon stimulation of LECs with VEGFC and VEGFD (Kärpänen et al., 2006a). Intriguingly, Nrp2 is important for capillary sprout ing but dispensable for the formation of lymph sacs (Yuan et al., 2002;Xu et al., 2010). Vegfr3 is initially expressed also in BECs but becomes mostly restricted to LECs after E10.5. 
Vegfr3 sig naling depends on interaction with claudinlike protein Clp24 and receptor internalization, a process requiring ephrinB2 Wang et al., 2010). Interestingly, the combined dele tion of Vegfr3 ligands Vegfc and Vegfd in mice does not pheno copy the inactivation of Vegfr3, pointing to a ligandindependent Vegfr3 function (Haiko et al., 2008). Budding of LECs from veins requires Vegfr3 kinase activity, whereas deletion of the Vegfr3 ligandbinding domain does not alter lymph sac formation (Fig. 2, A-C; Zhang et al., 2010). Proteolytically processed VEGFC also interacts with VEGFR2, which is expressed by lymphatic endothelium. However, activation of Vegfr2 alone promotes lymphatic vessel enlargement but not sprouting (Wirzenius et al., 2007). VEGFC induces formation of VEGFR2/ VEGFR3 heterodimers at angiogenic tip cells, suggesting that heterodimerization of VEGFR3 with VEGFR2 may contribute to lymphangiogenic sprouting (Nilsson et al., 2010). Endothelial specific loss of Rho GTPase Rac1 leads to an abnormally close association of lymph sacs and cardinal veins, suggesting that it stabilization, and they also control lymphatic vascular develop ment. Mice hypomorphic for Tie1 exhibit LEC hyperplasia and abnormal remodeling of lymph sacs, whereas mice deficient in one of the Tie2 ligands, angiopoietin2, show defective lymphatic vascular remodeling and lack valves (Gale et al., 2002;Dellinger et al., 2008;D'Amico et al., 2010). Tie2 activation induces phosphoinositide (PI) 3kinase and Akt signaling in vitro, and consistent with these observations, mutations in several PI3kinase pathway components or loss of Akt1 leads to lymphaticremodeling defects (Gupta et al., 2007;MoutaBellum et al., 2009;Zhou et al., 2010). Zebrafish tie2 / undergoes normal lymphangio genesis. However, redundancy with Tie1 needs to be examined (Gjini et al., 2011). Pathological lymphatic vascular morphogenesis Given the importance of lymphatic vessels for normal body functions, it is not surprising that defects of the lymphatic vascu lature are implicated in a variety of human pathologies. Roles of lymphatic vessels in tumor metastasis and inflammation have been recently covered in several excellent reviews (Sleeman et al., 2009;Tammela and Alitalo, 2010). Here, we will concen trate on the defects of vascular morphogenesis in human lymph edema syndromes and some rare but debilitating diseases in which lymphatic vasculature is suggested to play a central role. Hereditary lymphedema syndromes. Lymphatic vessel dysfunction results in progressive accumulation of protein rich interstitial fluid and formation of nonpitting localized tis sue swelling or lymphedema (Fig. 3). It is a chronic debilitating condition associated with increased local susceptibility to infec tions and certain cancers, such as angiosarcoma. Lymphedema can be inherited (primary lymphedema) but is more com monly caused by damage incurred by collecting lymphatic vessels or lymph nodes during cancer surgery or radiation ther apy (secondary lymphedema). Pathologies of secondary lymph edema have recently been reviewed (Rockson, 2001(Rockson, , 2008Tammela and Alitalo, 2010). Hereditary lymphedema is a rare genetic disorder, which can develop in utero, neonatally, or more frequently, years or de cades after birth ( Fig. 3 and Table II). Missense mutations within the VEGFR3 tyrosine kinase domain cause Milroy disease, which is characterized by underdeveloped and dysfunctioning cutaneous lymphatic vessels (Karkkainen et al., 2000;Mellor et al., 2010). 
Recently, mutations in the recessive CCBE1 gene, shown to con trol lymphatic sprouting in zebrafish (Hogan et al., 2009a), have been identified in a subset of Hennekam syndrome patients, who develop limb lymphedema, dilated intestinal lymphatic vessels, mental retardation, and facial anomalies (Alders et al., 2009). Intestinal lymphatic capillaries are also reduced in number and abnormally patterned, suggesting that defective lymphatic capil lary function is a cause of the syndrome (Alders et al., 2009). Lossoffunction mutations in FOXC2 cause lymphedemadistichiasis syndrome (LD), which is characterized by late onset lymphedema and a double row of eyelashes (distichiasis; Fang et al., 2000). Gainoffunction mutations in FOXC2 occur in patients with lymphedema, but the association of these muta tions with distichiasis awaits further investigation (van Steensel cascades mediated by spleen tyrosine kinase (Syk), Slp76, and PLC2, which then lead to formation of the blood clot that seals off the vein from the lymph sac (Ichise et al., 2009;Bertozzi et al., 2010;SuzukiInoue et al., 2010). In addition to platelets, myeloid cells regulate lymphatic vas cular morphogenesis. Macrophagedeficient PU.1 / and Csfr1 / mice exhibit hyperplastic dermal lymphatic capillaries, suggesting that macrophages restrict proliferation of LECs (Gordon et al., 2010). Conversely, abnormal accumulation of myeloid cells, pro ducing high levels of cytokines and VEGFD, induces the forma tion of dermal lymphaticovenous shunts in Syk / mice (Böhmer et al., 2010). Similar mechanisms are likely at play in Angptl4 / mice, in which excessive macrophage activation by chylomicrons may be responsible for fusion of intestinal blood and lymphatic vessels (Bäckhed et al., 2007;Lichtenstein et al., 2010). Lymphatic vascular remodeling and maturation. Starting from E15.5, the lymphatic vasculature is reorganized into lymphatic capillaries, precollectors, and collecting lymphatic ves sels (Fig. 2 A). In mice, transient upregulation of the forkhead transcription factor Foxc2 is the first sign of formation of collecting lymphatic vessels (Norrmén et al., 2009). Lymphatic valves con tinue to express high levels of Foxc2 and Prox1 throughout devel opment and in adults. LECs in any given lymphangion decrease the expression of Prox1, Vegfr3, LYVE1, and Ccl21, secrete base ment membrane components, and acquire SMC coverage (Mäkinen et al., 2005;Norrmén et al., 2009). In the absence of Foxc2, transi tion from capillary to collecting lymphatic vessel phenotype and formation of lymphatic valves are arrested (Petrova et al., 2004;Norrmén et al., 2009). FOXC2bound enhancers in LECs are sur rounded by nuclear factor of activated T cells (NFAT) binding sites, and pharmacological inhibition of NFAT activation results in lymphatic patterning defects reminiscent of Foxc2 / phenotypes (Norrmén et al., 2009). This suggests that Foxc2 and NFAT path ways cooperate in establishing collecting lymphatic vessels. Ephrin-Eph signaling is essential for embryonic angio genesis, and targeted inactivation in mice of ephrinB2 or its re ceptor EphB4 leads to aberrant embryonic blood vessel formation (Adams and Eichmann, 2010). Reverse signaling via PDZ inter action sites of ephrinB2 is also required for the maturation of collecting lymphatic vessels (Mäkinen et al., 2005). In mice, the presence of a mutation in this PDZ interaction site of ephrinB2 prevents the formation of valves and leads to persistent LYVE1 expression in presumptive collecting vessels. 
These mutant mice also display defective sprouting of lymphatic capillaries, which acquire ectopic SMC coverage (Mäkinen et al., 2005). Integrin 9 and its ligand fibronectin (FN) containing the EIIIA domain (FNEIIIA) control later steps of lymphatic valve formation (Bazigou et al., 2009). The integrin 9-1 complex binds to FNEIIIA, tenascin, and osteopontin in vitro and regu lates the organization of FNEIIIA microfibrils. Loss of integrin 9 prevents the elongation of valve leaflets, resulting in the forma tion of ringlike constrictions, which are unable to prevent lymph backflow (Bazigou et al., 2009). Fn-EIIIA / mice have a similar phenotype, demonstrating that FNEIIIA is a physiologically relevant integrin 9 ligand (Bazigou et al., 2009). The Tie1 and Tie2 endothelial receptor tyrosine kinases are essential for blood vascular remodeling, maturation, and distant sites, where they may block lymphatic function, causing accumulation of lymph in the chest and abdominal cavity and lymphedema. The cystic destruction of the lung parenchyma over time impairs lung function, which is ultimately only rescu able through lung transplantation (Seyama et al., 2010). The kinase mammalian target of rapamycin (mTOR) plays a central role in integrating growth factor-activated signaling. Its abnormal activation is a likely cause of LAM, as patients with germline mutations of mTOR repressors tuberosis sclero sis complex1 and 2 (TSC1 and TSC2) genes develop the dis ease. Somatic biallelic loss of TSC2 occurs in sporadic LAM cases (Carsillo et al., 2000;Sato et al., 2002). In line with these findings, encouraging results were observed in patients treated with mTOR inhibitors (Glasgow et al., 2010). Given the close association of LAM cells with lymphatic vessels and the lym phatic pattern of dissemination, combining the blockage of mTOR with antilymphangiogenic therapy seems to be a reason able further step in developing better treatment for this disease. Gorham disease (GD). GD is a rare disease of unknown etiology characterized by bone resorption and local vascular pro liferation. The disease is frequently complicated by systemic dysfunction of lymphatic vessels, such as chylothorax and chy lous ascites (Radhakrishnan and Rockson, 2008). Endothelial cells in the lesions are likely of LEC origin, as they express LEC markers LYVE1 and podoplanin, and VEGFR3 is increased in 50% of vessels (Hagendoorn et al., 2006). Nonendothelial cells from GD lesions resemble immature osteoclasts; they se crete cytokines and angiogenic factors, are highly invasive, and may, thus, contribute to disease progression (Colucci et al., 2006). Moreover, GD osteoclast precursors show increased sen sitivity to humoral factors, promoting osteoclast formation and bone resorption (Hirayama et al., 2001). Overall, the clinical pic ture points to an intriguing link between LEC proliferation and activation of osteoclastmediated bone resorption; however, at present, no candidate genes for GD have been identified. Kaposi sarcoma (KS): a case of mixed identity. KS is a tumor caused by human herpes virus 8 (HHV8 or et al., 2009). Lymphatic vessel density is normal or increased in LD patients; however, lymphatic transport is inefficient because of lymph reflux, likely caused by incompetent lymphatic valves. 
LD patients also have venous reflux, suggesting a common mechanism for the morphogenesis of venous and lymphatic valves (Brice et al., 2002;Mellor et al., 2007) Dominantnegative mutations in SOX18 occur in hypo trichosis-lymphedema-telangiectasia syndrome (HLT) character ized by sparse hair, swelling of legs, and dilation of small blood vessels. Based on phenotypic similarities with mice producing a dominantnegative form of Sox18, HLT patients likely have lym phatic capillary hypoplasia (Irrthum et al., 2003;François et al., 2008). Novel causes of hereditary lymphedema include muta tions in gap junction protein GJC2 and protein tyrosine phospha tase PTPN14 (Au et al., 2010;Ferrell et al., 2010). GJC2 is highly expressed by oligodendrocytes, and recessive lossoffunction mutations in GJC2 cause hereditary Pelizaeus-Merzbacherlike disease, which is characterized by central nervous system de myelination. Given the dominant character of GJC2 mutations in lymphedema, mutant proteins might exert a dominantnegative effect either on remaining wildtype GJC2 molecules or other connexins. A subset of Ptpn14deficient mice has hyperplas tic lymphatic vasculature, and a role for PTPN14 in restricting Vegfr3 activation has been proposed (Au et al., 2010). Lymphangioleiomyomatosis (LAM). LAM is a rare lung disease affecting women of childbearing age characterized by the proliferation of smooth muscle-like cells and lymphatic vessels as well as the formation of pulmonary cysts. LAM can also occur in the axial lymphatics and is associated with a benign kidney tumor angiomyolipoma (Seyama et al., 2010). The origin of the SMCs in LAM lesions is unknown, but they respond to estrogen and express multiple chemokine receptors and lym phangiogenic growth factors VEGFC and VEGFD, which may explain the highly metastatic behavior of LAM cells and their close association with lymphatic vessels (PachecoRodriguez et al., 2009;Yu et al., 2009). LAM is a benign neoplasm. However, LAM cells frequently disseminate through lymphatic vessels to Figure 3. Causes of human hereditary lymphedemas. Lymph transport can be impaired because of a hypoplastic initial lymphatic capillary network, because of abnormal coverage of lymphatic capillaries with basement membrane components and SMCs or because of a lack of or malfunctioning lymphatic valves. Defective lymphatic drainage leads to tissue fibrosis and fat deposition caused by the abnormal local chronic inflammatory response. Genes that are mutated in human hereditary lymphedema are indicated in blue next to the processes to which they are thought to be causally related. Mechanisms of the action of GJC2, PTPN14, and IKBKG are not fully understood. production of proangiogenic and inflammatory cytokines, and un restricted replicative potential (Mesri et al., 2010). Notably, some of these molecules control endothelial cell differentiation in vitro: four KS microRNAs target the transcription factor MAF and con tribute to reprogramming of the LEC to BEC phenotype, whereas kaposinB stabilizes PROX1 mRNA, which has a key role in lymphatic endothelial identity (Hansen et al., 2010;Yoo et al., 2010). Overall, these data provide an intriguing example of virus mediated change of the endothelial cell differentiation program. Open questions and outlook Impressive progress has been achieved in the past decade in the field of lymphatic vascular biology, but many questions remain KSassociated herpes virus [KSHV]). 
The lesions are composed of spindleshaped tumor cells, leaky and highly proliferative vessels, extravasated red blood cells, and inflammatory infiltrate (Mesri et al., 2010). KS cells express markers of both blood (CD34 and CXCR4) and LEC lineages (VEGFR3, LYVE1, and podoplanin). Interestingly, KSHV infection of BECs shifts the transcriptional profile toward a LEC phenotype, whereas KSHV infection of LECs induces transcriptional reprogramming toward a more BEClike phenotype (Hong et al., 2004;Wang et al., 2004). The major latency viral transcripts expressed in KS cells include the latencyassociated nuclear antigen, viral cyclin, vFLIP, viralencoded microRNAs, and kaposinA and B. These transcripts are important for KHSVinduced cell proliferation, unresolved. Development of novel imaging techniques and analysis of signaling pathways in situ will certainly provide ad ditional insights into the mechanisms of lymphangiogenesis. Considerable phenotypic plasticity of endothelial cells is now obvious; however, the genetic and epigenetic mechanisms of LEC differentiation are far from being fully understood. Contri butions of other cell types in regulating lymphatic development and function need to be addressed under physiological and pathological conditions. Finally, organ and diseasespecific features and responses of lymphatic endothelium have not been studied in detail, although this knowledge may have a critical impact on developing better treatments for human pathologies, including lymphedema, cancer, and inflammation. We thank Caroline Heckman and Jeremiah Bernier-Latmani for discussions, and we apologize to colleagues whose work could not be cited because of space limitations. The authors' work is supported by the Swiss National Science Foundation (grant PPP0033-114898), the Medic, Telethon, and Emma Muschamp Foundations, the National Center of Competence in Research Molecular Oncology, the Oncosuisse, the Swiss Cancer League, the Association for International Cancer Research, the Royal Netherlands Academy of Arts and Sciences, and the Wageningen University and Research Centre.
2014-10-01T00:00:00.000Z
2011-05-16T00:00:00.000
{ "year": 2011, "sha1": "8218b0533bdb946988ac41d42f9bfa8c84176bba", "oa_license": "CCBYNCSA", "oa_url": "http://jcb.rupress.org/content/jcb/193/4/607.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "832a51424b20861f730e62b759a75ee034bf8c15", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
37274509
pes2o/s2orc
v3-fos-license
The effect of cycle lanes on the proximity between motor traffic and cycle traffic

An experiment collected proximity data of motor traffic overtaking cycle traffic on roads with and without cycle lanes using an instrumented bicycle. The work enhances previous research which has considered the riding position of the cyclist and whether or not the cyclist was helmeted, while controlling for vehicle type. The analysis shows that significantly wider passing distances are adopted by motorists in the condition without a 1.45 metre cycle lane on roads with posted speed limits of 40mph and 50mph and a 9.5 metre wide carriageway. These findings were not replicated for a similar width road with a posted speed limit of 30mph and a 1.3 metre cycle lane. The results suggest that in the presence of a cycle lane, drivers may be driving within the confines of their own marked lane with less recognition being given to the need to provide a comfortable passing distance to cycle traffic in the adjacent cycle lane.

Background
Cycling has environmental, social, energy and congestion benefits through reduced motor vehicle use and confers health benefits on the user. However, the perceived risk of cycling is a deterrent to its wider uptake, as discussed, for example, in Henson et al. (1997), Davies et al. (1997) and Gardner (1998). Routes for cycle traffic include every part of the highway network (apart from, for example, motorways, which are restricted to motor traffic use) and other off-highway routes which may form convenient shorter routes between parts of the highway network (such as permissive routes across, for example, …).

Cycle lanes may offer a greater degree of separation between the cyclist and the motorist. They may also: usefully direct cycle traffic to the most appropriate position within the carriageway; provide a legal means for cycle traffic to undertake motor traffic in queues approaching junctions; and provide a degree of continuity and conspicuousness of routes for cycle traffic (Lancashire County Council, 2005). However, as cycle traffic may wish to carry out a variety of manoeuvres within the carriageway, the presence of a cycle lane may adversely affect the way cyclists use the carriageway, for example by making them feel unnecessarily inhibited in moving to the right of a motor traffic lane when carrying out a right turning manoeuvre (left hand rule of the road). Motorists may also wrongly assume that the presence of a cycle lane means that the remaining parts of the carriageway will be free of cycle traffic. The experience of one of the authors in training UK traffic engineers indicates that Dutch cycle design guidance is often regarded as being the most appropriate guidance available for western countries. This may be based on the false notion that high bicycle usage in The Netherlands is entirely due to high design standards and implementation. There could be many other explanations for high levels of use in The Netherlands, including a long history of a culture of cycling.
The guidance (CROW, 1993 and 2006) does, however, helpfully differentiate between highway cross-sections as follows: 'spacious', which allow motor traffic and cycle traffic to pass each other comfortably without encroaching into oncoming traffic in the adjacent lanes of an undivided carriageway; 'tight', which are so narrow as to require cycle traffic to follow motor traffic and vice versa; and 'critical', which are some way between 'spacious' and 'tight' and may create the most risky situations because overtaking may occur, but within a carriageway which is not sufficiently wide to allow this to happen comfortably. Dutch research (CROW, 2006) shows that motorised traffic will nearly always pass cycle traffic when the bicycle to motor vehicle distance is 0.85 metres or greater. At 30mph, and where the overall width permits, the passing distance is typically around 1.05 metres. According to the Federal Highway Administration (FHWA, 1975), such a clearance produces a lateral force of under 9 Newtons (2lb). While this force may be relatively low and less than the force under normal service braking, the fact that it is induced in the cyclist by the actions of others (the passing traffic) means that, from a psychological point of view, it may exceed a level deemed comfortable. The passing dimensions are usefully depicted in cycle guidance prepared by Lancashire County Council (LCC, 2005) as shown in Figure 1 for a carriageway 8.5 metres wide.

[Insert Figure 1 here]

A wider carriageway of 9.5 metres, and assuming equal gaps between motor vehicle and bicycle and between motor vehicle and motor vehicle, suggests a passing distance of 1.38 metres. With a significant presence of Heavy Goods Vehicles of width 2.6 metres, the carriageway would need to be 10.1 metres wide. The Dutch advice has also been carried through into current United Kingdom guidance (DfT, 2008), which suggests ideal total minimum widths for overtaking on a carriageway with a speed limit of 30mph (48kph) of 4.3 metres, or 5.05 metres with significant numbers of heavy goods vehicles. The new UK guidance suggests cycle lanes should be 2 metres wide on busy roads or where traffic is travelling in excess of 40mph (64kph), but that 1.5 metre lanes may generally be acceptable on roads with a 30mph speed limit. The guidance notes that cyclists may need to move away from the kerb to avoid surface hazards and this may 'give motorists misplaced confidence to provide less clearance while overtaking than they would give in the absence of a cycle lane'. This assertion has potentially serious implications, particularly for narrower cycle lanes, and is tested by the research presented here. Most Northern European countries assume driver liability in collisions with pedestrians and cycle traffic for insurance purposes, with the burden of proof falling on the driver to prove that he or she was not liable. This is sometimes (inaccurately) referred to as 'strict liability'.

The data collected for the analysis presented here were obtained using an instrumented bicycle to measure the passing distance of vehicles relative to a cyclist along six sections of road. The sections have posted speed limits of 30mph, 40mph and 50mph and were sub-divided into sections with and without cycle lanes. Section 2 reviews previous research in the field. Section 3 describes the methodology and Section 4 presents an analysis of the results. A discussion of the results is presented in Section 5 and Section 6 draws conclusions.
Previous research
Motor traffic passing a cyclist exerts a lateral force because of the air turbulence created. The Federal Highway Administration report (FHWA, 1975) suggests a tolerance limit of 16 Newtons (3.5lb), equivalent to heavy goods vehicle traffic travelling at 50mph, 1.2 metres from the cyclist. This is a little less than the force experienced during normal service braking. Considering the physical presence and effect of traffic in a psychological way, Sorton and Walsh (1994) showed that cyclists could recognise aspects of the mental effort of cycling as being related to levels of traffic volume, motor vehicle speed and lane width. The Federal Highway Administration reviewed the operation and safety of similar sections of route with and without cycle lanes (FHWA, 1999) and found significantly higher rates of conflict between cycle traffic and motor traffic at sites with bike lanes as compared to sites without, although the rates were small compared with rates with and without cycle lanes on the approaches to junctions. In the United Kingdom, Guthrie et al. (2001) attempted to create an index of 'cyclability' on a ten point scale (1 bad for cycling, 10 very good for cycling) based on cyclists' assessment of road and traffic conditions. Fifty-one cyclists rode a 9.2 kilometre route comprising eleven links which were generally non-urban in nature, and the sample was biased towards male frequent cyclists. Lane width was included as a linear parameter in the model and contributes 1.03 times the lane width to the cyclability score, a higher proportion than estimated by Landis et al. (1997) and Harkey et al. (1998). In addition, separate 'safety', 'effort' and 'pleasure' ratings were considered and lane width was found to correlate significantly (p<0.001) with safety (-0.46) and pleasure (-0.42). In a simulated environment, Basford et al. (2002) found that the provision of cycle lanes appears to increase driver confidence and hence risky behaviour such as higher speeds and less speed reduction when a cyclist is encountered. Stone and Broughton (2003) tabulate incidence and fatality rates for cycling accidents during 1990-1999 from over 30,000 accidents reported using the United Kingdom STATS19 road accident reporting mechanism. They note with interest the much greater fatality rate for cyclists hit from the rear than from the front. As part of work for the Warrington Cycle Campaign, Owens (2005) asserts from photographic evidence alone that cycle lanes have the effect of reducing overtaking distances, suggesting a demand for further knowledge and understanding about overtaking distances amongst the cycling community. In a survey using an instrumented bicycle on roads in Bristol and Salisbury, Walker (2007) found that the further out in the carriageway the cyclist rode, the less space was received from overtaking vehicles; drivers generally pass closer to a helmeted cyclist; and drivers of buses and heavy goods vehicles pass closer than other types of vehicle. The work did not take account of available carriageway widths or the widths of the passing vehicles. He recommended that the effect of on-road cycle lanes be investigated, and this demands that proper attention is given to available road width. In order to improve on research into the perception of cycling that had hitherto only considered links, Parkin et al. (2008) used video clips of routes and junctions from the point of view of a cyclist and presented them to cycling and non-cycling commuters.
The Risk Ratings for combinations of routes and junctions which mimicked real potential journeys by the respondents were on a scale of 1 (lowest perceived risk) to 10 (highest perceived risk) and were used as the dependent variable in a logistic regression model constrained to lie within the Risk Rating range. The model did not explicitly consider width, but flow passing the cyclist was found to be significant. The literature suggests no common definition of the disutility associated with cycling: sometimes it is considered on a measure purporting to be a 'level of service', sometimes a 'compatibility' or 'cyclability' index, the components of which include issues connected with safety, effort and pleasure, or it has been considered as a risk rating. There is no commonly emerging functional form for the inclusion of passing distance as a measure of disutility, and the contribution of passing distance to the overall disutility appears to vary between studies. Design standards appear to be based on observed passing distances, but there is no correlation suggested between these observed distances and perceived comfort for the cyclist. The effect on passing distance of the presence of a cycle lane needs to be more fully understood.

Methodology
Using footage of overtaking manoeuvres collected whilst cycling on roads both with and without cycle lanes, and using the front wheel of the passing vehicle as a reference point, the proximities of the overtaking vehicles were established. The bicycle was checked to ensure that it was calibrated correctly after each period of data collection. Three sites were selected for analysis and had posted speed limits of 30mph, 40mph and 50mph (48kph, 64kph and 80kph). The characteristics of the spread in speed for the roads surveyed are provided in Table 1. Each site contained stretches of road with and without a cycle lane. The sites were all virtually straight and flat in order to eliminate horizontal and vertical geometry variables.

[Insert Table 1 here]

Site 1 (50mph) is on the A6 at Cabus, near Garstang, Lancashire, England. The width of the cycle lane is 1.45 metres with an overall road width of 9.57 metres. In the area without a cycle lane, the overall road width is 9.64 metres. Figure 3 shows the two sites. Site 2 (40mph) is on the A6 at Broughton, north of Preston, Lancashire. The average width of the cycle lane is 1.45 metres with an overall road width of 9.57 metres. In the area without a cycle lane, the overall road width is 9.37 metres. Figure 4 shows the two sites. Site 3 (30mph) is in Westgate, a suburb of Morecambe in Lancashire. The width of the cycle lane is 1.30 metres with an overall road width of 9.45 metres. In the area without a cycle lane, the overall road width is 9.49 metres. Figure 5 shows the two sites.

The significantly wider passing distance offered by motorists on the A6 at Broughton without a cycle lane is all the more noteworthy when it is realised that the carriageway without the cycle lane is 200 millimetres narrower than the carriageway with the cycle lane. The passing distance will be influenced by the available width to the motor vehicle driver, with passing distances being smaller on narrower carriageways. The available gap will also vary depending on whether or not traffic is coming towards the overtaking vehicle, a variability which we have not been able to account for in this experiment.
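The analysis that follows expresses each measured passing distance as a proportion of the space available to the driver within the traffic lane. A minimal sketch of that calculation is given here using the carriageway width quoted for the A6 at Cabus; the 800 mm and 1,819 mm figures are taken from the worked example in the next paragraph and are assumed here to represent the cyclist's offset from the kerb and the width of the overtaking car, and the function names and the 1,500 mm passing distance are illustrative only.

```python
def available_gap_mm(carriageway_width_mm: float,
                     cyclist_offset_mm: float,
                     vehicle_width_mm: float) -> float:
    """Space left in the nearside lane between cyclist and overtaking vehicle.

    Assumes the overtaking vehicle stays within its own half of the
    carriageway, as in the worked example in the text.
    """
    lane_width = carriageway_width_mm / 2
    return lane_width - cyclist_offset_mm - vehicle_width_mm


def passing_proportion(passing_distance_mm: float, gap_mm: float) -> float:
    """Measured passing distance as a proportion of the available in-lane gap."""
    return passing_distance_mm / gap_mm


if __name__ == "__main__":
    # A6 at Cabus, no cycle lane: (9570 / 2) - 800 - 1819 = 2166 mm
    gap = available_gap_mm(9570, 800, 1819)
    print(f"available gap: {gap:.0f} mm")
    # e.g. a hypothetical 1500 mm measured passing distance
    print(f"proportion of gap: {passing_proportion(1500, gap):.3f}")
```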
Considering a car overtaking the bicyclist on the A6 at Cabus in the condition without a cycle lane, it may be seen that the total gap available in the lane is (9570/2) - 800 - 1819 = 2166 mm. The proportion of this dimension which the motor vehicle driver leaves between the motor vehicle and the cyclist provides an indicator of the way that the motorist uses the available road space. Table 3 shows the mean passing distance of cars as a proportion of available space in lane.

[Insert Table 3 here]

The mean proportions are all greater than 0.5 and this implies that the motorist is leaving more than half of the available space between the motor car and the cyclist, as shown in Table 3. The difference between the proportions on the A6 at Cabus with and without a cycle lane (0.701 and 0.772) is significant (p=0.000), as is the difference on the A6 at Broughton (0.520 and 0.579, p=0.000). This is not the case at Westgate. These results simply parallel the results based on the measured passing distance, which is to be expected because the widths of neither the roads nor the motor vehicles themselves vary greatly, at least not in comparison with the variation in measured passing distances. Further analysis, which proved inconclusive, has been performed to determine the distribution of the passing distances. It was hypothesised that in the circumstance where there is a superabundance of space within the lane, the passing distance would be normally distributed, but that in cases where the cross-section is tight a skewed distribution would obtain. The data do not support such hypotheses.

Discussion
The data collected provide evidence that motor traffic passes cycle traffic at closer proximity in the presence of a cycle lane than in its absence, at least on the roads with higher speed limits. The lack of a significant difference in passing distances between the with and without cycle lane conditions on the road with a posted speed limit of 30mph may be due to drivers not making a conscious overtaking manoeuvre in the condition without a cycle lane. The data do not support a view as to what a comfortable passing distance should be and this would require further research considering objective measures of comfort, such as lateral force and noise, as well as self-reported ratings of comfort.

Conclusions
It may be concluded that in circumstances where a cycle lane is insufficiently wide for the speed of general motor traffic, drivers provide greater passing distances to cyclists on stretches of road without cycle lanes. Cycle lanes therefore do not appear to provide greater space for cyclists in all conditions. The limited data available on different vehicle types suggest that motor vehicle overtaking proximity also varies depending on vehicle type, and this confirms Walker's finding. These results should encourage further investigation into the effectiveness of cycle lanes in separating cycle traffic from motor traffic. Differences in lateral separation may affect risk of collision, but may equally affect the perception of journey ambience for cyclists, also an important consideration.
2018-04-03T04:01:06.201Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "84e216f026a3f7fda6d59a7e0b8b33aacfe0186a", "oa_license": "CCBY", "oa_url": "https://uwe-repository.worktribe.com/preview/987464/Parkin%20and%20Meyers%20full%20paper%20for%20AAP.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "d24ca0f46823fa50adf99bed6cb8889fe0cdafb4", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine", "Engineering" ] }
254641225
pes2o/s2orc
v3-fos-license
Maltreated and non-maltreated children’s truthful and dishonest reports: Linguistic and syntactic differences Introduction Adults are typically poor judges of the veracity of statements, requiring the need for alternative methods for detecting lies. One alternative method to human lie-detectors is using computer-based linguistic analysis which may present a more reliable method for detecting dishonesty. Moreover, while previous research has examined linguistic differences between typically developing children’s and adults’ truthful and dishonest reports, no study to date has examined whether maltreated children exhibit different linguistic cues to dishonesty. Thus, the current study examined maltreated and nonmaltreated children’s linguistic and syntactic cues to children’s truthful and dishonest reports. Methods Nine- to 12-year-olds, half of whom were maltreated, played a computer game with a confederate: half of the children experienced a transgression (i.e., playing a forbidden game and crashing the computer) and were coached to conceal it, and half of the children experienced no transgression (i.e., simply played a computer game). All children were then interviewed about the event. The current study utilized automated linguistic and syntactic analysis software to compare children’s truthful reports (no transgression occurred) with dishonest reports. Results and Discussion Results indicated that maltreated and non-maltreated children did not differ in their indicators of dishonesty. Dishonest reporters used more first-person plural pronouns and cognitive mechanism terms and had less syntactically complex reports compared to truthful reporters. Finally, first-personal plural pronouns, cognitive mechanism terms, and syntactic complexity accurately classified (74.2%) the veracity of children’s reports. The current findings present a new indicator of dishonesty (syntactic complexity) and suggest that indicators from typically developing populations may apply to maltreated children when coaching occurred. Introduction The ability to identify children who are dishonest about or reluctant to disclose negative experiences has important implications in forensic contexts. For example, failing to identify children who conceal maltreatment can lead to a child being left in a harmful environment. This can lead to further abuse, resulting in negative developmental outcomes including internalizing and externalizing problems (Vilariño et al., 2022). Establishing markers of dishonesty in cases where a child may be concealing some details while falsifying others may assist in providing a tool for professionals to identify cases that may require further investigation. One potential method for identifying dishonesty is assessing verbal differences in honest and dishonest reports. In fact, previous research suggests that verbal cues may be more reliable and accurate than non-verbal cues when attempting to detect children's dishonest reports, given that truth-and lie-tellers do not differ on many non-verbal cues to deception (e.g., eye movement, body language; Talwar and Lee, 2002). While progress has been made in identifying verbal markers of deception with typically developing children, no study to date has examined whether these markers are also relevant for maltreated children. Given that maltreated children often experience delays in language development (Rogosch et al., 1995;Geeraert et al., 2004) they may exhibit different verbal cues than their typically developing peers. 
Thus, the aim of the current study was to examine linguistic and syntactic cues to dishonesty (when children are coached to falsify details to conceal a transgression) in maltreated and non-maltreated children's reports of an adult interaction. Current research examining linguistic cues to dishonesty with children has primarily utilized paradigms in which children provide reports of a true event as well as a false event after being coached by a parent or researcher. These reports are then compared for linguistic cues that can be used to differentiate the veracity of the statements (Bruck et al., 2002;Evans et al., 2012;Brunet et al., 2013;Williams et al., 2014;Talwar et al., 2018). For example, Evans et al. (2012) and Saykaly et al. (2013) had children play a game with an experimenter where stickers were placed on the child's body (e.g., their arm). The children were also coached by a parent to falsely report playing an additional game they had not played. As such, these studies compared children's reports of a true experience to fully fabricated reports. However, when being dishonest children may not always completely falsify an event; they may falsify some details to conceal true aspects of the event. There may be different cues to dishonesty when children are coached to conceal only a portion of a true event by providing false information instead, such as a transgression that occurs within the event. Such reports are distinct in several important ways. First, children's dishonesty is motivated by a desire to avoid a negative consequence of a transgression, rather than providing a story about a neutral event without consequence. Second, children are only told to be dishonest regarding a portion of the event; they can reveal some details but must monitor their reports to withhold the details that must be concealed. While they are monitoring what to conceal, they must also provide the coached falsified details. The increased complexity of this task as well as the motivation behind it may lead to different linguistic or syntactic patterns. Importantly, being able to detect instances when children falsify some details to conceal a transgression would be particularly useful for interviewing children about serious events, such as maltreatment. Linguistic cues to children's dishonesty According to the Activation-Decision-Construction-Action Theory (ADCAT; Walczyk and Fargerson, 2019), telling a lie is a cognitively demanding task, making it difficult to conceal potential markers of deception. First, when a question is asked, working memory is activated to hold the truth in the mind. If the decision to lie is made, the lie-teller must inhibit the truth and construct a plausible alternative response. During the construction of the lie, theory of mind is required to understand the recipient's knowledge or belief to construct a believable lie. Finally, the action stage involves providing the constructed lie to the recipient while monitoring any verbal or non-verbal cues that might reveal the lie (Walczyk et al., 2003, 2009, 2014). Given the many cognitive abilities at work while lying, children may find it difficult to monitor verbal cues that may reveal their lie. Below we review the relevant literature on linguistic differences between children's honest and deceptive reports. One goal when lying is to distance the self from the lie, resulting in the observed reduction of first-person pronouns in adults' dishonest statements (Hauch et al., 2015).
However, studies examining linguistic cues to children's dishonesty have found children's lies tend to include more self-references (first-person pronouns) compared to truthful statements (Brunet et al., 2013;Williams et al., 2014;Talwar et al., 2018). Importantly, previous research often examines total self-references as a combination of singular (e.g., I, me) and plural (e.g., we, our) pronouns (Brunet et al., 2013;Talwar et al., 2018). Williams et al. (2014) parsed apart these findings by examining singular and plural pronouns separately and found that children who were coached to fabricate stories about events (e.g., sports, parties) used more first-person plural pronouns than those who truthfully reported; they did not find differences in the use of singular pronouns. One possible explanation for this increase in first-person plural pronouns in particular when being dishonest may be that children are attempting to disperse blame or responsibility (Talwar et al., 2004;Evans et al., 2021) for the dishonest statement or actions. This may be particularly relevant when children are coached, or a transgression has occurred. Another theoretical difference between reports of true and fabricated events is the processes used to provide the report. The Reality Monitoring approach to deception detection stipulates that there are different processes that govern reports of truly experienced events compared to fabricated ones. Specifically, truly experienced events are formed based on external experiences and information, while untrue events are internally formulated based on thoughts or cognition. Given this, reports of these events should contain information that demonstrates these processes (Johnson and Raye, 1981). Specifically, recalling true experiences should theoretically rely on external memory attributes, such as sensory and affective processes, because the description is based on real memories of places, events, and emotions (Vrij et al., 2004;Strömwall and Granhag, 2005). In contrast, reporting an untrue event may contain more internal memory attributes, such as cognitive information; thus, the language used to fabricate information may contain more cognitive and fewer sensory and affective words. In adults, using the reality monitoring criteria has been found to effectively differentiate between true and fabricated reports (Vrij et al., 2000;Granhag et al., 2001;Oberlader et al., 2016); however, some of the individual scales, such as affective information, have not been found to uniquely differentiate true and false reports (DePaulo et al., 2003;Masip et al., 2005;Gancedo et al., 2021). Despite findings that affective and cognitive information may not uniquely identify dishonest reports in adults, previous research examining children's language suggests that the presence of cognitive or affective words may differ in true and false reports. Children tend to use more cognitive terms in dishonest than truthful statements (Vrij et al., 2004;Williams et al., 2014;Talwar et al., 2018), which supports the notion that lying relies more heavily on internal memory attributes such as cognitive processes. Additionally, research with children supports the idea that reports of true memories rely on external memory attributes to describe true experiences; children tend to use more affect (emotion) words when describing true events compared to false ones (Masip et al., 2005;Williams et al., 2014). In fact, Williams et al.
(2014) found that 4-to 7-year-old children who provided false reports about typically occurring events (e.g., sports, birthday parties) used fewer positive and negative emotion words compared to children who told the truth. However, contrary evidence suggests that children may use emotion words when being dishonest, but may lack the ability to describe emotions that are relevant to the event they are lying about; for example, children's false reports about a serious injury contained more positive emotion words than truthful reports (e.g., breaking a bone; Warren et al., 2018). The final three word types of interest (tentative, exclusion, and negation terms) have either not been found to differ or have not yet been examined in studies exploring linguistic differences in children's truthful and dishonest reports. While adult lie-tellers have been shown to use fewer tentative (Hauch et al., 2015) and exclusion terms (Newman et al., 2003;Bond and Lee, 2005;Schelleman-Offermans and Merckelbach, 2010;Hauch et al., 2015), studies examining exclusion terms in children's reports have failed to find significant differences (Brunet et al., 2013;Williams et al., 2014). Tentative terms may be avoided by lie-tellers because they suggest that the lie-teller is not confident about their narrative. Similarly, exclusion words (e.g., but, except, and without) may suggest that the lie-teller is presenting conflicting information and, thus, are also avoided. Adults have been found to use more negation terms when lying compared to telling the truth (Ali and Levine, 2008;Hancock et al., 2008;Hauch et al., 2015). This may also be the case among children as they may use negation terms to ensure the interviewer that nothing bad happened, particularly when being dishonest to conceal a transgression (e.g., "Nothing bad happened" or "He did that without me"). However, negation terms have not yet been examined in children's reports. Syntactic cues to dishonesty In addition to the linguistic features of a report, the number of words and syntactic complexity (range and sophistication of the structures that make up sentences; Van Valin, 2001;Ortega, 2003) may also help identify children's dishonesty or reluctance to disclose. There is some evidence that both adult and child lie-tellers tend to keep their story simple and ambiguous to avoid leaking incriminating details (Vrij et al., 2010;Gongola et al., 2021). If they provide less information, it is easier to maintain the lie across questions or time. However, previous research has found inconsistent support for whether children's reports differ in length (word count); some studies find that lie-tellers' reports are shorter than truth-tellers' reports (Brunet et al., 2013), while others find no difference (Evans et al., 2012;Saykaly et al., 2013). Importantly, Brunet et al. (2013) asked children to provide truthful or fabricated reports of a stressful event (i.e., true or fabricated reports of being bullied), without being coached, and found that truth-tellers' reports were longer than lie-tellers' . In contrast, the studies that found no differences between truthful and fabricated reports included parental coaching. Thus, coaching may enable children to provide enough information to match the length of their report to truth-tellers, though this pattern has not yet been examined in the context of lying to conceal a transgression. 
Further investigation is required to more completely understand the influence of coaching on the length of children's dishonest reports, specifically when they have been coached to falsify and conceal details to cover a transgression. Another cue that may be influenced by the cognitive load of deception is the syntactic complexity of sentence structures. The syntactic sentence structure refers to the rules that govern the ways in which words are arranged within a sentence in a given language (Van Valin, 2001;Ortega, 2003). Previous research has yet to examine (in adults or children) whether the complexity of sentence structure within a report is an indicator of deception. As previously mentioned, lie-telling is a cognitively demanding task for young children (Walczyk et al., 2003, 2009); this complexity may require cognitive resources that limit lie-tellers' abilities to produce more complex sentence structure. This may be especially true for children as they require greater cognitive resources to employ the cognitive functions involved in lie-telling, leaving fewer resources available for syntactic complexity. Truth-tellers, by comparison, only need to focus on conveying the relevant information. Because they do not need to focus on the additional tasks of inhibiting the truth and fabricating plausible details, truth-tellers may use more complex sentence structure in their reports. Furthermore, like the overall length of the report, it is possible that coaching may reduce some of the cognitive load that children experience when dishonestly reporting on an event, and therefore lie-tellers may be able to match their syntax to that of truthful reports when coached. Maltreatment The limited research exploring linguistic cues to dishonesty has solely focused on typically developing populations. However, the linguistic cues used to identify dishonesty with typically developing children may not apply to other populations, such as maltreated children, who tend to exhibit delays in language development (Rogosch et al., 1995;Geeraert et al., 2004). Compared to their non-maltreated peers, maltreated children learn fewer words (Coster et al., 1989;Beeghly and Cicchetti, 1994) and exhibit poorer performance on measures of expressive language (for meta-analysis see Sylvestre et al., 2016). Furthermore, there is evidence beginning in early childhood that maltreated children produce less complex utterances compared to non-maltreated children (Coster et al., 1989;Beeghly and Cicchetti, 1994;Eigsti and Cicchetti, 2004). Thus, even with coaching, maltreated children may not exhibit the same linguistic patterns between truthful and dishonest reports as their peers due to delayed language development. Honesty promotion While identifying dishonesty is one method for ensuring reluctant children are identified, another method is to support children in truthfully reporting their experiences. To date, there are several honesty promotion techniques that have been shown to be useful to encourage children to provide truthful reports of transgressions, including the putative confession (Lyon et al., 2014;Rush et al., 2017;Cleveland et al., 2018;Quas et al., 2018;Evans and Lyon, 2019). The putative confession involves the interviewer telling the child that their co-transgressor has already told the interviewer everything that happened and wants the child to tell the truth.
Across numerous studies, this technique has been found to be effective in increasing honesty with children 4- to 10-years of age (Lyon et al., 2014;Rush et al., 2017;Stolzenberg et al., 2017;Quas et al., 2018;McWilliams et al., 2021). While this method encourages honesty, it may also influence the language children use within their reports. Specifically, children's cognitive load is increased by this statement because the child not only needs to provide a report, but also has to think about what their co-transgressor may have reported. This increased monitoring may be more cognitively taxing and influence the linguistic and syntactic makeup of children's reports. The current study The current study examined linguistic and syntactic cues to 9- to 12-year-old maltreated and non-maltreated children's dishonest reports to conceal a transgression, as well as the potential influence of the putative confession on those reports. Specifically, we examined whether truthful and dishonest reporters differed in linguistic and syntactic cues. Additionally, we examined whether the linguistic and syntactic cues differed based on honesty promotion technique (the putative confession vs. no honesty promotion), age, and maltreatment status. The present study used a forensically relevant paradigm where children were involved in a co-transgression with an adult and were coached to conceal it. To identify potential markers of dishonesty, truthful reporters (n = 164) were compared to dishonest reporters (who lied about the transgression; n = 84). Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2001) software was used to analyze the frequency of singular (e.g., I, me) and plural (e.g., we, our) first-person pronouns, cognitive mechanism terms (e.g., cause, know, and ought), affect terms (e.g., happy, worry, and sad), tentative terms (e.g., maybe, perhaps, and guess), exclusive terms (e.g., but, without, and exclude), and negations (e.g., no, not, and never), as well as the overall word count. Connexor Machinese Syntax Software (Samuelsson and Voutilainen, 1997) was used to analyze the syntactic structure of each sentence in children's reports. The second set of predictions examined differences in the length and complexity of truthful and dishonest reporters. First, it was predicted that dishonest reporters would provide significantly shorter reports than truthful reporters (i.e., lower word count, H4; Vrij, 2005;Brunet et al., 2013). Second, while previous research has not yet examined syntactic complexity as an indicator of dishonesty, we expected honest reports to be more complex, while dishonest reporters' reports would be less complex due to the greater cognitive load associated with lie-telling (H5; Walczyk and Fargerson, 2019). Importantly, dishonest reporters received coaching regarding details about the game they were supposed to play. Research has shown that linguistic differences tend to disappear when children receive coaching (e.g., first-person pronouns, cognitive mechanism terms; Talwar et al., 2018). However, this has not yet been examined in maltreated samples. We examined the possibility that the linguistic differences between coached dishonest reporters and truthful reporters described above may only emerge in the maltreated sample (H6). The language delays experienced by maltreated children may make it more difficult for coaching to eliminate or minimize linguistic differences between true and false reports.
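As a rough illustration of the kind of dictionary-based frequency count that LIWC performs on the word categories listed above, the sketch below tallies a few categories as a percentage of total words, broadly mirroring LIWC's output format. The mini word lists are tiny illustrative stand-ins, not LIWC's actual dictionaries, and the category names are assumptions for this example.

```python
import re

# Illustrative mini-dictionaries; LIWC's real category dictionaries are far
# larger and are not reproduced here.
CATEGORIES = {
    "first_person_singular": {"i", "me", "my", "mine"},
    "first_person_plural": {"we", "us", "our", "ours"},
    "cognitive_mechanism": {"cause", "know", "ought", "think", "because"},
    "negation": {"no", "not", "never"},
}

def category_rates(transcript: str) -> dict:
    """Return each category's frequency as a percentage of total words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = len(words) or 1
    return {
        name: 100 * sum(word in vocab for word in words) / total
        for name, vocab in CATEGORIES.items()
    }

print(category_rates("We played the game because we know we ought to."))
```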
Developmental differences We also examined developmental differences among the indications of interest, beginning with age differences. There is limited evidence that young children use more emotion words in their reports (Williams et al., 2014), thus we predicted we might also find that younger children use more emotion words than older children (H7). Furthermore, we expected that older children's reports would be longer and more syntactically complex than younger children's reports (H8). No other age differences were predicted for linguistic or syntactic differences as there has been no support for such predictions in previous findings in our participants' age range. Given that maltreatment is related to delayed language development (Rogosch et al., 1995;Geeraert et al., 2004;Sylvestre et al., 2016), we expected maltreated children would provide shorter reports and use significantly less complex syntactic structure compared to non-maltreated children (H9). Honesty promotion Tentative, exclusive, and negation terms in particular may be influenced by the putative confession. Children who believe their co-transgressor told the interviewer about the transgression may be uncertain about what details to provide. Thus, they may be more likely to use tentative and exclusive terms in their report. Additionally, they may be even more adamant that they are not to blame for the transgression and may be more likely to use more negation terms to avoid blame (Honesty Promotion predictions = H10). Materials and methods Participants A total of 321 9-to 12-year-olds (M = 10.50, SD = 1.12, 153 males) participated in the original study (Evans and Lyon, 2019). Given that the current study was interested in differences between truthful reporters (no transgression) and dishonest reports (children who lied about the transgression), the children who were in the Break condition and truthfully disclosed the transgression were excluded. Thus, a total of 248 children were included in the current study. Half of the children were maltreated (N = 124, 64 9-10-yearolds, M = 7.45, SD = 0.50, 33 males; 60 11-12-year-olds, M = 11.47, SD = 0.50, 31 males). Maltreated children were recruited from the Los Angeles County dependency court. Given that children were removed from parental custody due to substantiated cases of abuse or neglect, the Presiding Judge of Juvenile Court and the Los Angeles County Children's Law Center granted consent. Maltreated children were ineligible if they were awaiting an adjudication or contested disposition hearing on the date of testing (because they might be asked to testify) or if interpreter services were provided to their family and they were unable to communicate with the researchers in English. The sample was 56.5% Latino, 27.4% African American, 8.8% Caucasian, and 7.3% other. The non-maltreated sample was recruited from schools in mainly low-income ethnic minority neighborhoods (N = 124, 67 9-10-year-olds, M = 9.49, SD = 0.50, 33 males; 57 10-11-year-olds, M = 11.42, SD = 0.50, 25 males). Ethnic background was comparable to the maltreated sample: 58.9% Latino, 37.1% African American, 1.6% Caucasian, and 4% other. Non-maltreated children's parents provided written consent and all children provided verbal assent prior to participating. All study procedures were approved by the University of Southern California's Institutional Review Board. 
Procedure Transgression paradigm Children began by completing several tasks unrelated to the current study with a female interviewer for approximately 10 min. Following the completion of these tasks, a male confederate entered the room to complete a video game activity. The female interviewer introduced the child to the male confederate and explained that when she returned, she would ask the child some questions about the video game they played while she was gone. She then left the room. The confederate opened a laptop to play one of two games: the Ball game or the Jewel game (the game played was counterbalanced between participants). All children were randomly assigned to either the Break or No-Break Control condition. The confederate told children in the Break condition that he had played the game they were supposed to play too many times and wanted to play a different game instead. During the game, the confederate noted eight target details for the child to remember (e.g., "Check out the birds"). After 2 min, the confederate told the child to click a square that resulted in the computer crashing (a blue error screen appeared), following which the confederate explained they were not supposed to play the game because the computer crashes and the data on the computer was lost. He then explained to the child that the female interviewer was his boss and would be coming back to ask about the game they played. He asked the child to keep secret the fact that they had played the forbidden game and coached the child on details to provide during the interview. Specifically, he told them not to mention 4 details about the game they had played (e.g., "Do not say that there were birds") and provided 4 details they should mention about the game they were supposed to have played (e.g., "Say you saw blocks falling"). The confederate then closed the computer and left the room. In the No-Break Control condition, the child and confederate played a video game that did not cause the computer to crash. The confederate pointed out the same 8 target details for the game they played. After they finished the game, he said that the female interviewer would be returning to ask the child about the game. He then closed the computer and left the room. Interview Children's interviews were designed to be similar to best practice forensic interviews, with the use of rapport building and initial use of broad open-ended requests for recall, similar to the National Institute of Child Health and Human Development (NICHD) Structured Protocol, an internationally used evidence-based protocol for forensic interviews with children (Lamb et al., 2007). Rapport phase The female interviewer from the beginning of the session returned to the room. She began the interview with a 2-min rapport-building phase by asking the child to talk about the last time he or she felt really good or bad at school. Recall The recall phase began with an instruction based on one of two honesty conditions: Putative Confession or Control. In the control condition, the interview began with the following instruction: "Now that I know you a little better, [child's name], tell me everything that happened while I was out of the room from the very beginning to the very end." In the Putative Confession condition, children were told, "Now that I know you a little better, [child's name], let me tell you something.
The man, [confederate's name], who came in here, told me everything that happened and he said he wants you to tell the truth. Tell me everything that happened while I was out of the room from the very beginning to the very end. " Interviewers used facilitators (e.g., "uh-huh") and additional prompts (e.g., "What happened next?") to encourage the child to continue until they completed their initial narrative. Children were then asked what the first thing that happened was followed by a series of what happened next prompts until the child exhausted their narrative (M prompts = 2.75, SD = 2.35). The interviewer then used two follow-up open-ended prompts [e.g., "You said (action/verb). Tell me more about (action/verb). "]. Finally, children were asked to tell the interviewer everything they heard and everything they saw while the interviewer was gone (2 separate questions). Two groups of children were included in the study based on their condition and their disclosure during the interview phase. In the Break condition, only children who concealed the transgression, dishonest reporters, were included (children who disclosed were not). The second group included children who were in the No-Break Control condition. These groups were chosen to compare because children who truthfully reported the event where no transgression occurred (No-Break Control) and children who experienced the transgression but concealed it (dishonest reporters in Break Condition) provided similar reports of the event. Specifically, both describe an event during which they played a computer game, but only one group is honestly reporting that event. Thus, the truthful reporters (No-Break Control) and dishonest reporters (Break) were compared in the current study. Software analysis Each child's interview was transcribed verbatim to be analyzed by two software programs. Linguistic inquiry word count LIWC software is designed to analyze words within a transcript and code them into word categories (Pennebaker et al., 2001). Each word is compared to the words within the program's internal library and subsequently placed into the relevant word categories. The output provides a frequency with which each word category was used within the report. For the present study, we focused on 7 of these word categories [first-person singular (e.g., I, me) and plural pronouns (e.g., we, our), cognitive mechanism terms (e.g., cause, know, and), affect terms (e.g., happy, worry, and sad), tentative terms (e.g., maybe, perhaps, and guess), exclusive terms (e.g., but, without, and exclude), and negations (e.g., no, not, and never)]. Additionally, LIWC provides a count of the total words within the transcript. The reliability of the word categories used in the current study range from α = 0.43-0.67 (note: evaluating behavior, such as language, is distinct from evaluating psychological measurement; acceptable internal consistency for word types is lower given that repetition typical of psychological measures is not present in verbal behaviors; Boyd et al., 2022). Connexor machinese syntax software Connexor software was used to analyze the syntactic complexity of children's reports. It also produces a syntax tree to represent the complexity of the sentence structure itself, which is what is used in the current study to determine the syntactic complexity of children's reports. The software output provides the number of layers in each sentence within the transcript, which represent the number of noun and verb phrases in each sentence. 
Connexor's syntactic accuracy is 93.5% (Samuelsson and Voutilainen, 1997). Each transcript was analyzed using the Connexor program to obtain the number of layers per sentence for each child's report. We then calculated the mean number of layers used per sentence across the report for each child. This mean was used in the analyses to represent syntactic complexity, such that a higher score indicated that the child's sentences were more complex.

Results
All analyses were conducted using SPSS (v28). First, to ensure univariate normality and remove extreme outliers, we performed a square-root transformation on all dependent variables. We assessed multivariate normality by calculating Mahalanobis distance for each participant's scores and comparing the highest value to the critical chi-square value (Pallant, 2007). With nine dependent variables, values above 27.88 are considered outliers. Two participants in our dataset were above this value (max value = 28.70); however, given that these participants were above the critical value by less than 1, we decided to retain these data points as has been done in previous research (e.g., Hashemian et al., 2012). Differences between groups on word types and syntactic complexity were assessed using a 4 (Age: 9, 10, 11, 12) by 2 (Honesty: Truthful Reporters vs. Dishonest Reporters) by 2 (Maltreatment Status: Maltreated vs. Non-Maltreated) by 2 (Honesty Promotion: Putative Confession vs. Control) MANOVA. The outcomes of interest were square-root transformed first-person pronouns, cognitive mechanism, affect, tentative, exclusive, and negation terms, as well as word count and complexity (average number of layers in children's sentences).

The Interaction. The main effects of honesty and age on the use of first-person plural pronouns were qualified by a significant 4-way interaction (Honesty x Age x Maltreatment Status x Honesty Promotion), F(27, 630) = 1.52, p = 0.046, ηp² = 0.061. To examine the effect of the interaction on first-person plural pronouns, follow-up univariate ANOVAs were conducted. First, the effects of Honesty, Maltreatment, and Age were examined separately for each Honesty Promotion condition. In the control condition, there was a significant main effect of Honesty, F(1, 121) = 6.45, p = 0.012, ηp² = 0.051, such that dishonest reporters (M = 1.47, SD = 0.53) used more first-person plural pronouns than truthful reporters (M = 1.25, SD = 0.45). No other effects were significant in the control condition. In the Putative Confession condition, there was a significant main effect of Age, F(3, 95) = 3.48, p = 0.019, ηp² = 0.099, which was subsumed by a significant 3-way interaction, F(3, 95) = 2.76, p = 0.046, ηp² = 0.080. Follow-up ANOVAs were conducted to further examine this interaction; however, when further split to examine significant effects of Age, Honesty, and Maltreatment, these ANOVAs revealed no significant differences.

Predicting veracity. The final analysis involved using a binary logistic regression to predict dishonest and truthful reporters using the linguistic and syntactic variables on which they significantly differed. Specifically, first-person plural pronouns, cognitive mechanism terms, and syntactic complexity were entered as predictors with Honesty as the dependent variable (0 = truth-tellers, 1 = dishonest reporters). The overall model was significant in predicting truth-tellers and dishonest reporters, χ²(3, N = 248) = 61.43, Nagelkerke R² = 0.30, p < 0.001, with 74.2% of children being correctly classified.
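For concreteness, a minimal sketch of how a logistic regression of this kind could be fit is shown below using statsmodels. The data file and column names (plural_pronouns, cog_mech, complexity, dishonest) are illustrative assumptions rather than the study's actual variable names, and the square-root transformation mirrors the one described above; this is not the authors' analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per child, with the three predictors reported
# to differ between groups and a 0/1 honesty outcome
# (0 = truthful reporter, 1 = dishonest reporter).
df = pd.read_csv("reports.csv")

# Square-root transform the count-based predictors, as described in the text.
for col in ["plural_pronouns", "cog_mech"]:
    df[col] = np.sqrt(df[col])

X = sm.add_constant(df[["plural_pronouns", "cog_mech", "complexity"]])
y = df["dishonest"]

model = sm.Logit(y, X).fit()
print(model.summary())  # coefficients, Wald tests, pseudo R-squared

# Classification accuracy at a 0.5 cut-off, analogous to the percentage
# of children correctly classified.
predicted = (model.predict(X) >= 0.5).astype(int)
print("accuracy:", (predicted == y).mean())
```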
Interestingly, only syntactic complexity emerged as a significant predictor above and beyond the common contribution of all other variables, such that as syntactic complexity decreased children were 8 times more likely to be dishonest, B = −2.09, Wald = 37.29, p < 0.001, OR = 8.33. The use of cognitive mechanism terms, B = 0.06, Wald = 1.65, p = 0.199, OR = 1.06, and of first-personal plural pronouns, B = 0.109, Wald = 0.96, p = 0.328, OR = 1.12, did not uniquely predict group membership. Discussion The current study examined linguistic and syntactic differences in maltreated and non-maltreated children's truthful and dishonest coached reports of an interaction with an adult. Children's dishonest reports included significantly more firstperson plural pronouns and cognitive mechanism terms and were significantly less syntactically complex compared to truthful reports. Importantly, only syntactic complexity significantly differentiated truthful and dishonest reporters above and beyond the common contribution of all other variables in a logistic regression. The remaining linguistic cues examined did not differ between truthful and dishonest reporters, but some differences emerged based on age, maltreatment status, and honesty promotion. Linguistic cues to dishonesty The overarching goal of the current research was to examine how linguistic cues differed between truthful and dishonest reporters. Several important findings emerged. First, it was predicted that lie-tellers would use more first-person pronouns than truth-tellers, as has been found in previous research examining children's dishonest reports (Brunet et al., 2013;Williams et al., 2014;Talwar et al., 2018). Given that children were discussing an event in which they co-transgressed with an adult, both plural and singular first-person pronouns were examined separately. Interestingly, consistent with previous findings (Williams et al., 2014) dishonest reporters used more first-person plural pronouns than truthful reporters, but no differences were found for singular pronouns. The increased use of first-person plural pronouns may be particularly relevant when children are coached to dishonestly conceal a co-transgression. In the present study, children were coached to dishonestly report an event during which they played games and transgressed with a confederate. Thus, children likely referred to both themself and the confederate when providing their report due to the nature of the paradigm. Additionally, they may have preferred plural pronouns in case the transgression was discovered; including the confederate in their report ensured the interviewer would know that both individuals participated and thus the child could not be solely blamed for the transgression. Future studies in which a child is solely responsible for a transgression and no coaching occurred are necessary to more completely understand the role of first-person singular pronouns. It was also predicted that, due to differences in perceptual experiences, dishonest reporters would use more cognitive mechanism terms and fewer affect terms than truth-tellers. This prediction was only supported for cognitive mechanisms: dishonest reporters used more cognitive mechanism terms than truthful reporters. 
Previous research on linguistic cues suggests that lie-telling relies on cognitive processes to fabricate events that were not experienced, rather than sensory or affective processes that would be used to recall true events (Vrij et al., 2004;Evans et al., 2012;Williams et al., 2014). These processes are thought to be reflected in the language used; while this was supported in the current study in children's use of cognitive mechanism terms, we did not find differences in the use of affect terms. This finding aligns with previous research on the Reality Monitoring approach suggesting that these cognitive and affective processes are not uniquely able to differentiate between truth and lie-tellers (Gancedo et al., 2021). This may be due to the event being reported; both the truth-tellers and dishonest reporters experienced the same event during which they played a game; thus, both groups would rely on the sensory and affective processes used for true memory recall and would not differ between groups. The dishonest reporters, however, (1) omitted an aspect of the event (the transgression) and (2) provided the coached details. Omission would not require a change in words used as they simply did not mention the transgression. However, providing the coached details may have led to the increased cognitive mechanism terms (e.g., cause, know, and ought) as they had to provide details that had not been experienced. Given this pattern of findings, it is important to continue to examine instances of dishonesty in which a child is coached to conceal an aspect of an event and provide false details. For example, when children are interviewed about transgressions like sexual abuse, they may be coached by their abuser to conceal the abuse while still honestly reporting some information about what happened while they were together. Contrary to predictions, we failed to find differences in the use of tentative and negation terms. In the only previous study to examine tentative terms with children, consistent with our findings, no significant differences were found between truth-and lie-tellers (Brunet et al., 2013), suggesting that tentative terms may not be a helpful cue in examining the veracity of children's reports. Negation terms have been shown to be used more by adults in false reports (Ali and Levine, 2008;Hauch et al., 2015), but have not been examined in children's reports. It was expected that perhaps children would use more negation terms to ensure the experimenter knew that they were not involved in the transgression ("I did not touch the button). This, however, was not the case; it appears that children may use language besides negation terms to accomplish this goal. For example, perhaps they blame others rather than emphasizing that they were not involved (Evans et al., 2021). Syntactic complexity and word count The current study is the first to examine syntactic complexity as an indicator of dishonesty. Consistent with predictions, dishonest reporters used simpler sentence structure than truthful reporters. Given that lie-telling is a cognitively demanding task for children, they may devote cognitive resources to their report by monitoring what details they provide and ensuring they do not reveal the transgression. This may result in children using more simple statements, as these may be easier for them to monitor and ensure they conceal the relevant details. 
Future studies could test this explanation by increasing children's cognitive load while they report on an event and examining whether the added load results in even simpler sentence structure. It should be noted that there were also developmental findings; older children's and non-maltreated children's statements were more complex than younger and maltreated children's statements, respectively. Given these developmental findings, complexity may be a less reliable indicator of dishonesty; understanding how complex a child's report should be given their age would be important for examining whether their report is too simplistic to be truthful. Thus, future research should continue to examine syntactic complexity as an indicator of children's dishonesty to understand how this may be useful in a practical context. Unlike complexity, word count did not differ between truthful and dishonest reporters. Some studies have found that dishonest reports are shorter than truthful ones (Brunet et al., 2013), and some approaches, such as CBCA, use report length as an indicator of dishonesty (Vrij, 2005). However, word count differences have typically been found in studies where children fabricate the full event without being coached (Brunet et al., 2013). When children are coached to fabricate their full report, word count differences have not emerged (Evans et al., 2012; Saykaly et al., 2013; Williams et al., 2014; Talwar et al., 2018). In the present study, children (1) experienced the event and thus had the same amount of information as truthful reporters and (2) were coached on which details to provide and conceal. The coaching they received likely allowed them to provide a similar amount of information to the truthful reporters, leading their reports to be of similar length. This is an important finding given that when children are interviewed about events, it is unlikely that they will fabricate an entire event. Additionally, if they fabricate parts of an event and conceal some details, it is likely that they will have been coached by an adult to do so, particularly in cases of maltreatment. Previous research and the current study suggest that in these cases, word count is not a reliable indicator of dishonesty; when children receive some support to fabricate a cover story, they will be able to provide the same amount of information as a child who tells the truth. Predicting dishonest vs. truthful reports Given the differences found between truthful and dishonest reporters, we examined the extent to which the indicators that differed between the two groups could be used to predict group membership (cognitive mechanism terms, syntactic complexity, and exclusive terms). We found a higher rate of accuracy than is typically found in human lie detection research (~50%; Gongola et al., 2017). Interestingly, only syntactic complexity emerged as a significant predictor; as complexity decreased, children were 8 times more likely to be classified as dishonest reporters. This finding suggests that syntactic complexity may be a new, effective method for detecting deception in children. While the model predicted about 74% of children's group membership accurately, it could be that finding other linguistic indicators of dishonesty in this type of paradigm would improve the model's ability to predict deception. Future research should focus on a broader range of linguistic indicators to explore how to improve the model's ability to distinguish truth- and lie-tellers.
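To make the classification analysis described above concrete, the following is a minimal, illustrative sketch (not the authors' code) of fitting a logistic regression that predicts report veracity from linguistic features and then checking in-sample classification accuracy. The feature names, data frame, and placeholder data are hypothetical assumptions introduced only for illustration.

```python
# Illustrative sketch: logistic regression predicting dishonest vs. truthful reports
# from linguistic features (hypothetical placeholder data, not the study's data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
reports = pd.DataFrame({
    "dishonest": rng.binomial(1, 0.5, 200),                # 1 = coached/dishonest report
    "cognitive_terms": rng.poisson(5, 200),                # e.g., "think", "know", "because"
    "first_person_plural": rng.poisson(3, 200),            # "we", "us", "our"
    "syntactic_complexity": rng.normal(0, 1, 200),         # standardized complexity score
})

X = sm.add_constant(reports[["cognitive_terms", "first_person_plural", "syntactic_complexity"]])
model = sm.Logit(reports["dishonest"], X).fit(disp=False)
print(model.summary())                                      # coefficients, Wald z, p-values; OR = exp(coef)

# Classification accuracy at a 0.5 threshold (with random placeholder data this will be
# near chance; the study reports about 74% with the real features).
predicted = (model.predict(X) >= 0.5).astype(int)
print("in-sample accuracy:", (predicted == reports["dishonest"]).mean())
```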
Maltreatment Interestingly, we did not find any differences in indicators of dishonesty between the maltreated and non-maltreated samples. The lack of differences is somewhat surprising given that maltreated children's language development often differs significantly from that of non-maltreated children, both in terms of the scope of words learned and the complexity of their speech (Coster et al., 1989; Eigsti and Cicchetti, 2004; Sylvestre et al., 2016). Despite this, it is important to acknowledge that this finding is positive; maltreated children do not differ significantly in the types of words that are used when providing dishonest reports, and thus the indicators that have been found in previous research are likely also evident in maltreated children. However, it may be the case that we did not find differences because of coaching; coaching may have supported maltreated children in producing statements similar to those of non-maltreated children. Future research should examine whether this is the case by comparing maltreated children's reports with and without coaching. It is important to note that linguistic or syntactic patterns that identify when children are being dishonest are also useful for identifying when children are being honest. Identifying methods for differentiating truth- and lie-tellers is useful for identifying instances of false allegations, honest or credible reports of abuse, as well as children who are lying to conceal abuse. Identifying children experiencing maltreatment, both by knowing when they are concealing and when they are honestly reporting, is vital for ensuring children are protected when necessary. These cues, specifically the use of first-person plural pronouns, cognitive mechanism terms, and syntactic complexity, may aid in identifying these cases. Limitations There are several limitations of the current study to note. Children's language proficiency was not assessed. Children with poorer language development (regardless of maltreatment) may have had less complex reports overall. Future studies should aim to account for children's language proficiency. Similarly, the results likely do not generalize to other languages. The rules governing the syntactic structure of sentences vary across languages; thus, syntactic complexity may look different depending on the language. Another important limitation lies in the laboratory design (simulated transgression paradigm). These paradigms are useful in that the ground truth is known, so researchers can know with certainty which children are being truthful and which are being dishonest. However, these designs may lack external validity, particularly when being applied to reports of maltreatment, given the difference in the nature of the experience. Additionally, children may adjust their behavior in an experimental setting and not report on an event in the same manner they would during a forensic interview. Furthermore, the current study used an interview protocol based on the NICHD Structured Protocol, an interview which emphasizes the use of broad open-ended requests for recall. It is possible that the linguistic structure of children's honest and dishonest reports may vary based on the interview protocol used. Thus, in the future, researchers should examine whether the current study's findings replicate with other interview protocols. Conclusion The present investigation found support for children's use of first-person plural pronouns and cognitive mechanism terms as indicators of dishonesty.
The current study also identified a novel indicator of dishonesty, syntactic complexity, which was highly accurate in classifying truthful and dishonest reports. This finding suggests an additional cue to examine when detecting deception in children, although further research is needed to establish a threshold of complexity that reliably distinguishes truth- and lie-tellers. Furthermore, the current findings suggest that, for the cues examined, linguistic cues to dishonesty may not differ for maltreated and non-maltreated children, providing the first evidence that previous research using linguistic cues is useful for both populations. Data availability statement The original contributions presented in the study are included in the article/supplementary materials; further inquiries can be directed to the corresponding author. Ethics statement The studies involving human participants were reviewed and approved by the University of Southern California Institutional Review Board. Written informed consent to participate in this study was provided by the parent/legal guardian or by the court/attorneys (for maltreated children, consent was given by the court).
GWAS of retinal vessel tortuosity identifies 85 novel loci recovering variants associated with disease

Fundus pictures of the eye allow for non-invasive inspection of the microvasculature system of the retina, which is informative on cardiovascular health. Automated image processing enables the extraction of morphometric properties of this system as quantitative features that can be used for modelling disease risks. Here we report the results of the largest genome-wide association study (GWAS) of retinal vessel tortuosity conducted to date using data from the UK Biobank (N=63,899). We identified 87 loci associated with this trait (85 of which are novel). The heritability of the trait was h2=0.23 (0.02). We carried out a replication study on a small independent population-based cohort, SKIPOGH (N=436). While the power of this study was too small to replicate individual hits, the effect size estimates correlated significantly between the two studies (Pearson correlation r=0.55, p=4.6E-6). Using LD score regression, we showed that the alleles associated with retinal vessel tortuosity point to a common genetic architecture of this trait with CVD and related traits. Our results shed new light on the genetics of cardiovascular risk factors and disease.

Contents: Results (Automated processing of retina images defines retinal tortuosity phenotype; Retinal vessel tortuosity GWAS identifies 85 novel loci; Trend of effect sizes replicates in the SKIPOGH cohort; Tortuosity variants are associated with numerous diseases; Genetic signal is shared with hypertension and CVD); Materials and Methods (Definition of tortuosity; UK Biobank phenotypes; Data extraction; UK Biobank genotype data; The SKIPOGH study; Genome-wide association analysis; Heritability; Shared genetic architecture with disease).

Introduction The fundus of the eye is covered by blood vessels which are essential for bringing oxygen and nutrients to the various tissues of the retina. Fundus photography allows easy and non-invasive inspection of this retinal microvasculature, and it is well known that there are disease-related changes in the morphometric properties of the vessels. The fundus is of interest beyond the field of ophthalmology, since pathological changes in the retinal vessels often reflect those in the microvasculature of other organs of major importance. Indeed, based on the homology between the microvasculature in the retina and that found in other organs, retinal analysis has the potential of becoming a powerful screening tool for diseases elsewhere in the body, notably the brain 1,2,3,4,5, kidney 6,7 and ear 8. The retinal microvasculature can therefore provide signs of systemic disease, including increased risk of diabetes 9,10,11,12, obesity 13 and cardiovascular disease (CVD) 14,15,16,17,18, specifically stroke 16,19,20,21, coronary heart disease 22, coronary artery disease 23, hypertension 11,20,24,25,26,27,28,29,30,31,32,33,34, atherosclerosis 19,20,35 and myocardial infarction 36,37. It is also informative of specific eye conditions such as Plus disease in the case of retinopathy of prematurity 38,39. In recent years, measuring retinal features in large genotyped cohorts has paved the way for studying the genetic underpinning of these phenotypes, and Genome Wide Association Studies (GWAS) have already identified a number of loci 40,41 associated with retinal vessel size 42,43,44, optic disc morphology 45,46 and vessel tortuosity 23.
In this study we carried out the largest GWAS on median retinal vessel tortuosity to date, confirming two known variants and discovering 85 new ones. Our discovery cohort was the UK Biobank, which provides a large collection of retinal images suitable for automatic analysis of morphometric properties of the vasculature of the human eye 47. We used data from the much smaller, yet independent, population-based cohort SKIPOGH 48,49 for replication analysis. Many of the variants we identified with our GWAS have previously been associated with other traits, specifically CVD and some of its risk factors. Automated processing of retina images defines retinal tortuosity phenotype We applied the ARIA 50 software for automated processing of 175,821 images from 63,899 individuals available in the UK Biobank. We modified ARIA to operate in batch mode, annotating the blood vessels in each image by extracting a list of points along the midline of each vessel. Using these data, we measured the tortuosity for each vessel (or annotated segment thereof) in terms of the so-called Distance Factor, i.e. the ratio between the path length along the vessel and the distance between its start and end point, as was first suggested in 51. We used the median retinal vessel tortuosity over all annotated vessels (averaged over multiple images if available) as the trait (see Methods for more details). Retinal vessel tortuosity GWAS identifies 85 novel loci Applying linear regression of quantile-normalized median retinal vessel tortuosity on the genotypes of the matching subjects, imputed to a panel of 15M genetic variants, we identified 6481 significantly associated SNPs (see Supplementary File 1). Applying LD pruning with a threshold of R 2 < 0.01 within a window of 500K bases to define independence, we obtained a list of 87 independent lead SNPs (the top 10 are listed in table 1, ordered by statistical significance, and a full list can be found in Supplementary File 2). Two of the 6481 significant variants, namely rs1808382 and rs7991229, had previously been associated with retinal vessel tortuosity 23, while the remaining 85 SNPs represent novel loci associated with this trait (see figure 1 for a Manhattan plot of our genome-wide signals). According to the GWAS Catalogue, there is a third known locus for retinal vessel tortuosity, namely rs73157566. Yet, the association signal of this SNP was only marginally significant and it did not replicate in the replication cohort of the study which reported it 23. We also did not replicate this finding (for details about these three variants and the respective genomic regions, see Supplementary Material on known associations with retinal vessel tortuosity). Trend of effect sizes replicates in the SKIPOGH cohort We attempted replication of our lead SNPs from the UKBB analysis in the SKIPOGH cohort 48,49 (436 individuals, multiple images per eye, for a total of 1,352 images). 60 out of the 87 lead SNPs were available for comparison. Given the limited sample size of the replication cohort, we lacked power to replicate the individual associations found in the discovery cohort, as none of them survived Bonferroni correction (p = 0.05/60 = 8.3E-4). Nevertheless, the effect size estimates using SKIPOGH data showed good concordance with those from the UKBB (see Supplementary File 3). First, 42 of 60 lead SNPs had the same sign of their effect size estimate in both studies (binomial test p = 5.3E-4).
Second, we observed a Pearson correlation of r = 0.55 (p = 4.6E-6) across these estimates (see figure 2). Both results stay significant when removing outliers (see Supplementary Material, replication of effect sizes without outliers). Tortuosity variants are associated with numerous diseases A shared genetic basis of retinal tortuosity and coronary artery disease had already been noted for locus rs1808382 (mapped to the ACTN4/CAPN12 genes), underlining the usefulness of retinal vascular traits as biomarkers for cardiovascular diseases 23. We replicated this finding and asked to what extent it also applies to the large panel of new variants we associated with retinal vessel tortuosity. Querying the GWAS Catalogue 52 for our hits revealed 9 loci linked to genes that had been reported as genome-wide significant in associations with other diseases including coronary heart disease, myocardial infarction, arterial hypertension, type 2 diabetes, chronic lymphocytic leukemia, Alzheimer's disease, diverticular disease, glaucoma and myopia (see table 2). Besides these 9 loci, we also uncovered 26 additional SNPs with pleiotropic effects on various diseases which could not be confidently mapped to a specific gene (see the full list in Supplementary Material, variants associated with disease outcome). We next expanded our query to include phenotypes known to confer a disease risk. We report a list of 12 loci linked to genes influencing both tortuosity and disease risk factors (see table 3). Furthermore, we uncovered another 9 SNPs showing similar pleiotropic properties, which could not be confidently mapped to a specific gene (see Supplementary Material). SNP-level statistics were aggregated to obtain gene-wise association scores using the tool Pascal 53. The results of our gene-wise association are summarized in figure 3: red squares mark disease genes, reported in table 2. Similarly, green squares indicate risk factor genes from table 3. Genetic signal is shared with hypertension and CVD We examined how many of the variants identified by our retinal tortuosity GWAS had previously been reported as being associated with any of the large range of complex traits in the GWAS Catalogue 52. A number of traits stand out. First, both diastolic (49) and systolic blood pressure (46) have been associated with many of our newly identified loci. Second, pulse pressure and BMI each share 19 associated loci. We note that both elevated blood pressure and BMI are well-known CVD risk factors, which we purposefully did not use as covariates in our GWAS, so as to be able to study the overlap in signal, even though correcting for these traits would have left the association signals largely unchanged (see Supplementary Material on Analysis of potential confounders). Furthermore, we observe a sizable number of tortuosity-associated variants overlapping with coronary artery disease variants, in line with what has recently been reported by a smaller scale GWAS on retinal vessel tortuosity 23. For two additional phenotypes with a sizable overlap of trait-associated variants, namely blood protein levels and bone mineral density, the relationship to tortuosity is less obvious, but might point to common pathogenic mechanisms.
Similarly, for some of the other traits sharing several associated variants, notably Type I and Type II diabetes, colorectal cancer, cholesterol, lung function, skin and eye pigmentation, and autoimmune diseases, there could be joint genetically modulated pathways, but some of these common associations may also just be spurious (see Supplementary File 4 for the full list of phenotypes and references to publications). GWAS Analysis Adapting the retina image processing tool ARIA to run on a cluster facilitated the extraction of median retinal vessel tortuosity estimates for close to 64 thousand subjects of the UK Biobank, enabling a GWAS for this trait with substantially increased power compared to previous studies. This gain in power resulted in the identification of 85 novel loci and the replication of 2 out of 3 associations known from previous studies, providing a substantially improved picture of the genetic architecture of this trait. We detected pleiotropic effects of 10 tortuosity variants associated with disease, specifically CVD-related diseases (coronary artery disease, coronary heart disease, myocardial infarction, hypertension), systemic diseases (diabetes, chronic lymphocytic leukemia, Alzheimer's disease) and ophthalmological conditions (myopia, glaucoma). Our results only link these diseases to retinal vessel tortuosity through common associated genetic variants. While one might speculate that this trait may reflect pathological developments of blood vessels that are causally upstream of some of these diseases, establishing such causal links will require more work, including the application of Mendelian randomization 54,55. This study was subject to several limitations. First, our tortuosity measurements combine those of arteries and veins, while most ophthalmological studies distinguish between arterial and venular tortuosity. This compromise was made because we could not fully automate vessel classification. Indeed, this is a difficult problem that still seems to require some expert input at least for some images or vessels, which would have prevented us from analysing such a large set of retinal images. Yet, the noise introduced by mixing arteries and veins was apparently outweighed by the gain in sensitivity we achieved, as evidenced by the large number of associated loci. Subsequent studies using vessel annotations distinguishing their type may test whether these loci obtain different effect sizes for arterial and venular tortuosity. The second limitation of our study was that the software tool we used estimated tortuosity by the Distance Factor 51, a global measure which may not be ideal to capture vessel pathology (which may be better described by more local measures such as curvature 56, based on which potentially more disease-relevant measures have been proposed 57,58). We provide statistics about the distributions of vessel lengths in our dataset and repeated our GWAS using only a subset of relatively short vessel segments on which we measured Distance Factor tortuosity, but observed no dramatic change in the observed effect size estimates of our top hits (see Supplementary Material on Tortuosity of short vessels).
Also, we did not adjust for spherical equivalent refractive error, which might have confounded our measurements to some degree. The third limitation of our study was the small size of the replication cohort, which prevented us from replicating any individual hits. Nevertheless, the effect sizes in the replication study correlated strongly with those in the discovery cohort, providing independent evidence that they were not driven by any artifact specific to UKBB. In conclusion, our highly powered GWAS on median retinal vessel tortuosity identified 85 novel loci, pointing to the maintenance of the microvasculature system, or failure thereof, as a precursor or symptom of complex diseases. Definition of tortuosity Our study assessed tortuosity using the measurements provided by the ARIA software 50. We estimated tortuosity as the total vessel length divided by the euclidean distance between the vessel segment endpoints. A number of measures have been designed to estimate vascular tortuosity. The measure adopted by the ARIA tool, in particular, is reported in a recent review as the AOC measure (Arc Over Chord ratio) 56. In an earlier work on retinal vascular tortuosity, this measure was referred to as Tau 1 59. UK Biobank phenotypes 175,821 fundus eye images (87,562 images of left eyes and 88,259 images of right eyes) were available at the time of data extraction. We processed all images, including those from the reassessment time point. Other phenotypes were used to correct biases in the genetic associations (age, sex, PCs of genotypes) or to study correlation with disease and lifestyle (all other phenotypes). Health statistics of the cohort are of interest to interpret the medical implications of our analysis. Data extraction The tortuosity phenotype measure was extracted using a modified version of the software ARIA by Peter Bankhead 50. We modified this tool to run in batch mode, dumping vessel statistics to disk in the process and processing images without the need for human interaction. The ARIA parameters used for vessel extraction were the defaults applied by the software for tests on images from the REVIEW database 60. We now describe the phenotype extraction quality control procedure. The ARIA tool was used to perform segmentation of blood vessels and measurement of both their tortuosity and diameters. The software is designed to perform vessel diameter measurements at regular intervals along the centerline of each vessel, so that the number of measured diameters could be used as a proxy for the total length of the vascular system depicted in one image. Images for which fewer than 11 thousand equally spaced diameters could be measured were discarded. This threshold was (conservatively) set by visual inspection to discriminate lower quality images that were too dark, too light or out of focus. Roughly two out of three images passed this quality control, with a total of 120,363 images being sent forward in the pipeline. Postprocessing of the data consisted of averaging the values derived from the left and right eye of each participant (for the resulting distribution, refer to Supplementary Material on Tortuosity estimates in UKBB). The data extraction pipeline was written in python and bash and was run on a cluster using SLURM.
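As a concrete illustration of the Distance Factor defined above (path length along the vessel midline divided by the straight-line distance between its endpoints), the following is a minimal sketch. It assumes ARIA-style lists of midline points per vessel; it is illustrative only and not ARIA's own implementation.

```python
# Minimal sketch of the Distance Factor (arc-over-chord) tortuosity measure:
# path length along the vessel midline divided by the straight start-to-end distance.
import numpy as np

def distance_factor(midline_points):
    """midline_points: (N, 2) array of (x, y) coordinates along one vessel segment."""
    pts = np.asarray(midline_points, dtype=float)
    steps = np.diff(pts, axis=0)
    arc_length = np.linalg.norm(steps, axis=1).sum()   # path length along the vessel
    chord_length = np.linalg.norm(pts[-1] - pts[0])    # straight-line distance, start to end
    return arc_length / chord_length                   # 1.0 for a perfectly straight vessel

def median_image_tortuosity(vessels):
    """Median Distance Factor over all annotated vessels of one image."""
    return float(np.median([distance_factor(v) for v in vessels]))

# Example: a gently curved vessel gives a Distance Factor slightly above 1.
curved = [(x, 0.05 * x ** 2) for x in np.linspace(0, 10, 50)]
print(distance_factor(curved))
```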
UK Biobank genotype data Around 488,000 participants were genotyped on Axiom arrays for a total of 805,426 markers. From this, about 96 million genotypes were imputed using a combined reference panel from the 1000 Genomes and UK10K projects 61. The annotation used to report variant positions is the Genome Reference Consortium Human genome build 37 (known as GRCh37 or hg19). We subset the genotypes using the software BGENIX, shrinking the list of investigated variants to those that have been assigned an rsID (around 15 million SNPs). An additional quality control was performed via a postprocessing step on the GWAS output. We filtered out SNPs with MAF < 5E-4 (which, given the sample size of 63,899 subjects, translates to an average of over 30 individuals carrying at least one minor allele). We filtered out SNPs with imputation quality < 0.3, as used in Ref. 62. The SKIPOGH study We performed replication of the GWAS results in the SKIPOGH cohort 48,49 (Swiss Kidney Project On Genes in Hypertension), a Swiss family-based, population-based cohort that includes 1,042 participants (493 males and 549 females), aged between 18 and 96 years, who have been extensively phenotyped at baseline and in a 3-year follow-up. Participants were recruited from three different locations in Switzerland, namely Bern, Geneva and Lausanne. The genotyping was performed with the Illumina 2.5 omni chip, followed by an imputation based on the HRC v1.1 panel using Minimac3. The annotation used to report variant positions is the Genome Reference Consortium Human genome build 37 (GRCh37). Genome-wide association analysis The raw tortuosity measures extracted from the image data were transformed in order to correct for confounding effects that would bias the genetic association analysis (for an overview of the confounder analysis that we performed, please refer to Supplementary Material on Analysis of potential confounders). Only variables that showed a statistically significant correlation to tortuosity were corrected for. Specifically, we applied the linear model: tortuosity ~ age + sex + genetic PCs. A rank-based inverse normal transformation was applied to the residuals of this linear model and the GWAS was run on the output as a univariate linear regression without confounders (refer to Supplementary Material to inspect the resulting distributions). The genetic association study was run using the software BGENIE 63. The (unpruned) output consisted of 6481 significant SNPs (see Supplementary File 1). The list of independent SNPs was calculated by performing LD pruning using the LDpair function of the R package LDlinkR 64, selecting as a reference panel "GBR" (Great Britain) from the 1k genome project. Two SNPs were considered as independent if they had LD R 2 < 0.01 or were more than 500K bases apart (see Supplementary File 2). This resulted in 87 independent lead SNPs. LD pruning was repeated with an alternative LD threshold of R 2 < 0.1 (for comparison with other GWAS studies, where such a value is used), resulting in a list of 124 significant SNPs (see Supplementary File 5).
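The phenotype preparation and association steps described above can be sketched as follows. This is an illustrative outline under stated assumptions (hypothetical column names, ordinary least squares standing in for BGENIE), not the pipeline actually used.

```python
# Sketch of the trait preparation and per-SNP association described above:
# regress tortuosity on age, sex and genotype PCs, apply a rank-based inverse normal
# transform to the residuals, then regress the transformed trait on each SNP dosage.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def rank_inverse_normal(x, c=0.5):
    """Rank-based inverse normal transform (rankit form, (r - 0.5) / n)."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

def prepare_trait(df, pcs):
    covars = sm.add_constant(df[["age", "sex"] + pcs])
    resid = sm.OLS(df["tortuosity"], covars).fit().resid
    return rank_inverse_normal(resid)

def snp_association(trait, dosages):
    """Univariate linear regression of the transformed trait on one SNP dosage vector."""
    fit = sm.OLS(trait, sm.add_constant(np.asarray(dosages, dtype=float))).fit()
    return fit.params[1], fit.pvalues[1]    # effect size estimate and p-value

# Usage (hypothetical phenotype DataFrame `pheno` and dosage matrix `G`, one column per SNP):
# trait = prepare_trait(pheno, pcs=["PC1", "PC2", "PC5"])
# results = [snp_association(trait, G[:, j]) for j in range(G.shape[1])]
```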
The association analysis at replication stage for the SKIPOGH cohort was performed using the EMMAX function of the EPACTS software in order to account for family structure by using the kinship matrix in the model. Additionally, the recruitment center was included as a covariate. Summary plots were generated using the R packages qqman 65 and GWASTools 66. Heritability We carried out LD score regression using the software LDHub 67. The portion of phenotypic variance cumulatively explained by the SNPs was h 2 = 0.2293 (0.0229). The measure of inflation was lambda_GC = 1.1364; lambda GC measures the effect of confounding and polygenicity acting on the trait. The mean chi-square statistic was mean_X 2 = 1.2941. The LD Score regression intercept was 1.0135 (0.0103); an intercept close to 1 indicates little influence of confounders (mostly of population stratification). The ratio of the proportion of the inflation in the mean X 2 that is not due to polygenicity was 0.0459 (0.0352); a ratio close to 0 is desirable as it indicates low inflation from population stratification. Shared genetic architecture with disease The overlap of SNP variants with disease phenotypes (same rsID) was analysed using the EBI's GWAS Catalogue 52. We report independent SNPs in the tortuosity GWAS that are part of the GWAS Catalogue because they are associated with disease (or with a disease-related phenotype). This analysis was extended to genes and pathways using FUMA 68. We list independent SNPs in the tortuosity GWAS that were in LD with SNPs that had already been reported in the GWAS Catalogue (see Supplementary File 4 and Supplementary File 6). Supplementary Material Please refer to the Supplementary Material file. Figures Figure 1: Manhattan plot of the genome-wide association study for retinal vessel tortuosity corrected for phenotypic variables that showed a statistically significant association, i.e. age, sex, and a subset of principal components of genotypes (PCs: 1,2,5,6,7,8,16,17,18). Refer to Supplementary Material for the analysis of correlation with potential confounders. The red line is the genome-wide significance level (P = 5E-8). For a zoom of the genomic location, refer to Supplementary Material on known associations with retinal vessel tortuosity. Figure 2: Statistically significant correlation between the measured effect sizes in the discovery cohort (UKBB, N=63,899) and the replication cohort (SKIPOGH, N=436). We considered all lead (independent) SNPs in the UKBB. Of the 87, we could find 60 with matching rsIDs in SKIPOGH. The resulting correlation has Pearson r = 0.55 and P = 4.6E-6. Table 2: List of variants identified by the UKBB tortuosity GWAS which were independently found to be associated with disease outcome in an independent study. We report only exact variants (same rsID in both tortuosity and disease GWAS).
We report only variants which we could confidently map to a gene. Gene p-values were computed using two tools, and both results are given for comparison: the first value was computed by Pascal 53 and the second (in parentheses) by MAGMA 78. Variants associated with more than one disease are marked with a star. Table 3: List of variants identified by the UKBB tortuosity GWAS which were independently found to be associated with disease risk factors in an independent study. We report only exact variants (same rsID in both tortuosity and disease GWAS). We report only variants which we could confidently map to a gene. Variants associated with more than one disease are marked with a star. (Table 3 columns: SHARED SNP, ref, RISK FACTOR (diseases), GENE, GENE P-VALUE.) Author contributions MT, MuB and SB designed the study. MT performed tortuosity measurements of the raw image data from the UKBB and the SKIPOGH cohort and post-processed them. MT carried out the GWAS and subsequent bioinformatics analyses, with the guidance of SB, NM and EP. TC performed the replication analysis in SKIPOGH. HAZ provided medical and ophthalmological expertise. MiB helped in quality controlling the retina images. MT and SB wrote the manuscript.
Inferring Demography from Runs of Homozygosity in Whole-Genome Sequence, with Correction for Sequence Errors Whole-genome sequence is potentially the richest source of genetic data for inferring ancestral demography. However, full sequence also presents significant challenges to fully utilize such large data sets and to ensure that sequencing errors do not introduce bias into the inferred demography. Using whole-genome sequence data from two Holstein cattle, we demonstrate a new method to correct for bias caused by hidden errors and then infer stepwise changes in ancestral demography up to present. There was a strong upward bias in estimates of recent effective population size (Ne) if the correction method was not applied to the data, both for our method and the Li and Durbin (Inference of human population history from individual whole-genome sequences. Nature 475:493–496) pairwise sequentially Markovian coalescent method. To infer demography, we use an analytical predictor of multiloci linkage disequilibrium (LD) based on a simple coalescent model that allows for changes in Ne. The LD statistic summarizes the distribution of runs of homozygosity for any given demography. We infer a best fit demography as one that predicts a match with the observed distribution of runs of homozygosity in the corrected sequence data. We use multiloci LD because it potentially holds more information about ancestral demography than pairwise LD. The inferred demography indicates a strong reduction in the Ne around 170,000 years ago, possibly related to the divergence of African and European Bos taurus cattle. This is followed by a further reduction coinciding with the period of cattle domestication, with Ne of between 3,500 and 6,000. The most recent reduction of Ne to approximately 100 in the Holstein breed agrees well with estimates from pedigrees. Our approach can be applied to whole-genome sequence from any diploid species and can be scaled up to use sequence from multiple individuals. Introduction In diploid populations, the strength and patterns of linkage disequilibrium (LD) between loci are strongly influenced by effective population size (Hill 1975). Consequently, the extent of LD in a population can be used to estimate past effective population size (N e ) (Hill 1981). Although knowledge of the ancestral demography is of interest in itself, it is also of importance, for example, when studying patterns of LD for evidence of selection (Grossman et al. 2010). The null hypothesis of no selection requires an accurate demographic model because variation in N e can result in LD patterns that mimic selection (Pritchard and Przeworski 2001). Although LD measured between pairs of loci, such as r 2 , has been used to infer complex demography (Schaffner et al. 2005;Voight et al. 2005), multiloci measures of LD potentially capture more population genetic information and therefore have also been used to infer ancestral demography (Hayes et al. 2003;Meuwissen and Goddard 2007;Lohmueller et al. 2009;MacLeod et al. 2009). LD arises as a result of individuals in a finite population sharing chromosome segments inherited identical by descent (IBD) from a common ancestor. Longer segments arise as a result of more recent coalescent events while very short IBD segments are more likely to date back to very distant coalescent events. Therefore, the pattern of multiloci LD can be described by the distribution of the lengths of chromosome segments that are IBD, which in turn can be used to infer demography (Hayes et al. 2003). 
In practice, if a pair of chromosome segments carry the same alleles at all positions we observe a pairwise "run of homozygosity" (RoH), which is identical by state (IBS) but not always entirely IBD from a single common ancestor because recombination can be masked. Multiloci LD can therefore be summarized by the observed distribution of lengths of pairwise RoH separated by heterozygous sites. MacLeod et al. (2009) developed an analytical method to calculate the probability of observing pairwise RoH of n or more loci using simplified coalescent theory, which accounts for stepwise changes in N e . They call this summary statistic "haplotype homozygosity" or HH n and demonstrated that the analytical method could be exploited to infer a demography that was consistent with the observed distribution of RoH in empirical single-nucleotide polymorphism (SNP) array data. This method avoids the computational burden of Approximate Bayesian Computing approaches that simulate many replicates of genetic data (Beaumont et al. 2002). This study is the first application of the MacLeod et al. (2009) method to whole-genome sequence data. The genome sequence of even a single individual in an outbred population provides vast numbers of pairwise RoH (up to millions), each with its own coalescent history, from which we can summarize the genome wide pattern of multiloci LD. Until recently, demographic inference studies have used either subsets of genome wide loci known to be polymorphic (such as SNP arrays) or polymorphic loci in samples of relatively short resequenced genome segments (Gronau et al. 2011). One previous study used individual wholegenome sequences from several humans to reconstruct demography, although they first condense nonoverlapping windows of 100 bp into a single "locus" defined as homozygous or heterozygous (Li and Durbin 2011). These authors apply a pairwise sequentially Markovian coalescent (PSMC) model that relies on the distribution of heterozygous sites within an individual sequence to infer historical N e (Li and Durbin 2011). Their method appears to work well for human ancestral demographic inference, although they found estimates of human N e in recent times were not reliable (from 800 generations ago). We compare our inferred demography with that from the PSMC model (Li and Durbin 2011). Importantly, in this study we introduce a new method to first correct for false-positive heterozygous errors in the sequence prior to inferring demography. These errors have a disproportionate impact on the longer RoH and even after careful quality control, a low level of false positives remains in sequence data. We demonstrate that even a low error rate can cause a serious bias in estimates of more recent N e because the longer RoH that inform these N e estimates are those most likely to be broken up by the false positives. Our results indicate that introducing the error correction method for false positives also considerably improves the accuracy of the PSMC estimates of recent N e in cattle. Using whole-genome sequence from two Holstein bulls, we applied stringent filters to remove common sequencing errors and then applied our correction method to account for residual false-positive heterozygous errors. We then inferred a stepwise pattern of changing ancestral N e , which predicts a distribution of RoH matching that observed in the corrected sequence data. The sequenced bulls represent two key ancestors in the Holstein breed (Larkin et al. 
2012): Walkway Chief Mark ("Mark") and Pawnee Farm Arlinda Chief ("Chief," the sire of Mark). We inferred the stepwise pattern of ancestral N e using Mark's sequence, and then cross-validated the demographic model in Chief's sequence data. We also used simulated sequence data to further test the methodology (supplementary information, Supplementary Material online). Our method would be useful for a range of outbred diploid species and can be readily scaled up to use sequence data from multiple individuals, for example, to estimate the change in N e over time for wild animal populations. Results The results from the data analysis (fig. 1) are presented in three sections: error estimation and correction, observed summary statistics (HH n) calculated from RoH, and demographic inference. Sequence Error Correction The sequence of both bulls was independently subjected to stringent filtering to reduce common sequencing errors, referred to henceforth as "filtered sequence." Residual error rates in the filtered sequence were then calculated for each bull: that is, remaining false-positive heterozygous SNP and false-negative missed heterozygous SNP. The numbers of heterozygous SNP detected in the filtered sequence (table 1) provided more than 1 million RoH from which to estimate HH n in Mark. The proportion of missed heterozygous SNP positions (false negatives), estimated by comparing sequence positions matching independent 50,000 SNP array genotypes (SNP50), is higher in Chief's sequence compared with Mark's (table 1) due to the difference in average read depth (~7× for Chief and ~13× for Mark). The false-negative rate was relatively high because we used stringent filters to minimize false-positive heterozygous SNP that can cause a serious bias in the distribution of longer RoH. It was important to estimate the false-negative rate because this also affects the observed distribution of RoH, and this was corrected for by scaling the mutation rate (Materials and Methods). The false-positive error rate in filtered sequence estimated from comparison with the SNP50 genotypes is relatively low for both bulls (table 1) and not very different from the potential error rate of 0.1% in SNP50 data (http://res.illumina.com/documents/products/datasheets/datasheet_bovine_snp5o.pdf, last accessed July 23, 2013). Therefore, these estimates of false-positive rate (table 1) may be inflated by the assumption that SNP50 data is error free, and may also be imprecise because of the relatively low number of loci available for validation with SNP50 genotypes. We therefore computed a more robust estimate of the false-positive error rate using observed average sequence heterozygosity in regions matching long RoH in the SNP50 data (>10 Mb) (i.e., long runs of adjacent homozygous genotypes that were likely to be IBD regions). In filtered sequence, we found an average of one heterozygous sequence SNP per 55,118 bp in Mark and one per 87,464 bp in Chief in regions corresponding to long SNP50 RoH. This was in stark contrast with the remainder of the genome, which had an average of one heterozygous SNP per 1,957 bp for Mark and one per 3,214 bp for Chief. Therefore, matching the long SNP50 RoH regions to filtered sequence provided estimates of the average residual false-positive error rate of 1.8 × 10^-5 in Mark and 1.1 × 10^-5 in Chief.
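The false-positive estimate above amounts to counting heterozygous calls that fall inside long SNP50 RoH and dividing by the total length of those regions. A small sketch of that calculation is given below; region coordinates and call positions are hypothetical placeholders.

```python
# Sketch of the residual false-positive rate estimate described above, under the stated
# assumption that heterozygous calls inside long (>10 Mb) SNP50 RoH are almost all errors.
def false_positive_rate(het_positions, ibd_regions):
    """het_positions: genomic coordinates of heterozygous calls.
    ibd_regions: list of (start, end) intervals matching long SNP50 RoH."""
    total_bp = sum(end - start for start, end in ibd_regions)
    n_het = sum(1 for p in het_positions
                if any(start <= p < end for start, end in ibd_regions))
    return n_het / total_bp

# The paper reports roughly one heterozygous call per 55,118 bp inside such regions for
# Mark and one per 87,464 bp for Chief, i.e. rates of about 1.8e-5 and 1.1e-5 per bp:
print(1 / 55118, 1 / 87464)
```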
Our method of stochastically correcting for residual false-positive errors remaining in filtered sequence (see Materials and Methods) had a significant impact on restoring the longer RoH and therefore moving the distribution of RoH closer to the true distribution. Figure 2 illustrates how the long RoH are much more likely to be disrupted by errors than very short RoH, by comparing "corrected sequence" and filtered sequence in a chromosome region with a long RoH (~33 Mb) in Mark's SNP50 data. In the corrected sequence, several very long RoH are evident (fig. 2A) compared with filtered sequence (fig. 2B), which displays >100 shorter homozygous segments in this same region. In figure 2C, it is clear that correction of the sequence data without controlling for uniform distribution of false positives (random deletion of SNP without using nonoverlapping windows) does not uncover the long RoH. The correction method is therefore not randomly creating long homozygous runs, but rather is unmasking those that were previously hidden by a low level of false-positive errors. (Figure 1 caption: The computational workflow first identified heterozygous positions within each genome sequence of the two bulls (A). Heterozygous positions were then validated across independent SNP50 genotypes (B). After filtering to remove heterozygous errors, the residual false-positive rate in sequence was estimated and corrected for (C), and the summary LD statistic (HH n) was calculated (D). The ancestral demography was inferred using an analytical model (E) and validated using the sequence of the second bull.) In the regions flanking the SNP50 RoH, there is little discernible difference in corrected compared with filtered sequence because these very much shorter RoH are rarely affected by the low error rates in filtered sequence (fig. 2). Similar patterns to those seen in figure 2 were observed across the genome for both bulls for all regions with long RoH in SNP50 data (Mark's chromosomes 1 to 10 shown in supplementary figs. S5, S6, and S7, Supplementary Material online). After correction for false-positive errors there remained 1,198,677 RoH for Mark and 728,059 for Chief, referred to henceforth as "corrected sequence." Table 2 gives estimates of single base heterozygosity for each bull. The observed single base heterozygosity rates are very different because Chief was sequenced at about half the read depth of Mark, resulting in divergent false-negative error rates. However, the estimates of true single base heterozygosity were very similar for both bulls. This is important because it lends credibility to the independently estimated error rates and data correction for each bull. Estimated true heterozygosity indicates that on average we would expect to find one heterozygous base pair every 1,070 bp: that is, approximately 2.47 million heterozygous SNP across all autosomes, assuming a total length of 2,545,896,661 bp based on the Btau 4.0 assembly. (Figure 2 caption, continued: In (C), the same proportion of heterozygous errors was randomly deleted from the filtered data as for the corrected data (A), but without enforcing uniform deletion from nonoverlapping windows. This demonstrates that our correction method effectively unmasks long RoH in the sequence data (A). Very much shorter RoH are observed in the regions flanking the SNP50 RoH even in the corrected data, because these are rarely affected by the low level of residual errors.)
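A hedged sketch of a stochastic correction of this general kind is given below: remove the expected number of spurious heterozygous calls, forcing the removals to be spread uniformly over nonoverlapping windows rather than clustered. The window size, per-window rounding, and example chromosome length are illustrative assumptions, not the exact procedure from the paper's Materials and Methods.

```python
# Illustrative sketch of a stochastic false-positive correction with uniform deletion
# from nonoverlapping windows (assumptions noted above; not the paper's exact method).
import random

def correct_false_positives(het_positions, chrom_length, fp_rate, window=1_000_000, seed=1):
    """Return heterozygous positions with ~fp_rate * chrom_length calls removed,
    deleting an approximately equal number from each nonoverlapping window."""
    rng = random.Random(seed)
    expected_per_window = round(fp_rate * window)
    kept = []
    for start in range(0, chrom_length, window):
        in_window = [p for p in het_positions if start <= p < start + window]
        n_remove = min(len(in_window), expected_per_window)
        drop = set(rng.sample(in_window, n_remove)) if n_remove else set()
        kept.extend(p for p in in_window if p not in drop)
    return kept

# Usage with the residual rate estimated for Mark (~1.8e-5 per bp), on one chromosome:
# corrected = correct_false_positives(het_positions, chrom_length=158_000_000, fp_rate=1.8e-5)
```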
Observed summary statistics (HH n) calculated from RoH HH n is the probability that two chromosomes are observed IBS for at least n base pairs to the right of a site chosen uniformly at random. The HH n summary statistic was independently estimated in each bull from the observed RoH in filtered sequence and also in corrected sequence. To test the precision of our stochastic correction method, used to restore the distribution of RoH closer to the true distribution, we replicated the data correction 25 times for each bull sequence and calculated HH n in each replicate. The coefficient of variation of HH n across the 25 replicated data sets increased slightly with the size of segment (n), but was never greater than 0.3% in Chief and 0.2% in Mark. Also, we used our goodness of fit measure, Q (adapted from eq. 1 in Materials and Methods), to estimate the pairwise deviation between Mark's corrected replicates. The maximum value of Q for between-replicate HH n in corrected sequence did not exceed 1 × 10^-4. This was important because we also use the Q value for demographic inference, to measure the goodness of fit between observed HH n and an analytical prediction of HH n for a specified demography. Our experience with real and simulated data indicated that an appropriate threshold for Q was 0.001. The low variation between replicates of stochastically corrected data demonstrated that the correction method is robust and indicated that a single stochastically corrected sequence would have been adequate for our estimate of HH n . However, we used the averaged HH n across replicates for our demographic inference. The observed HH n for filtered and corrected sequence in figure 3 demonstrates that although there was only a very low level of residual false-positive heterozygous errors in filtered sequence, these errors still create a significant bias. False positives particularly affect the probability of observing longer RoH (fig. 3) and therefore may bias the estimates of more recent N e . For example, the probability of observing RoH of 200 kb or longer is approximately 10% in the corrected data for both bulls but is only around 5% for the filtered data. Lengths of more than 0.5 Mb are almost never observed in the filtered data, whereas after correction there is a 6% probability of observing them (fig. 3). Demographic Inference Our method of inferring stepwise N e searches for a demographic model that analytically predicts an HH n in close concordance with observed HH n for a range of segment sizes up to 1 Mb, with "goodness of fit" Q (eq. 1, Materials and Methods) assessed by a threshold parameter (Q ≤ 0.001). The reliability of the HH n analytical predictions was demonstrated by comparing these with empirical HH n in sequence data simulated using our inferred demography (supplementary fig. S2, Supplementary Material online). Importantly, the goodness of fit test (Q) between predicted and empirical HH n in each of the simulated data sets also met our threshold parameter (Q ≤ 0.001). We used Mark's corrected sequence to infer demography and calculated the upper and lower range of N e within a given Phase (time period) that met our goodness of fit threshold between predicted and observed HH n (fig. 4A). The upper and lower limits for N e were estimated while fixing the original N e estimate in all other time periods. This was considered a reasonable approach because estimates of N e in any given time period are most dependent on the distribution of particular lengths of RoH.
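For illustration, an empirical HH n estimator and a goodness-of-fit measure in the spirit of Q can be sketched as follows. The particular estimator and the form of the deviation below are assumptions, since eq. 1 itself is given only in the paper's Materials and Methods and is not reproduced in this excerpt.

```python
# Sketch: empirical HH_n (probability that, from a uniformly chosen site, the run of
# homozygosity continues for at least n bp to the right) and a relative deviation between
# observed and predicted HH_n curves. Both formulas are illustrative assumptions.
import numpy as np

def empirical_hh(roh_lengths, n_values, genome_length):
    """roh_lengths: lengths (bp) of all observed runs of homozygosity."""
    roh = np.asarray(roh_lengths, dtype=float)
    return np.array([np.clip(roh - n, 0, None).sum() / genome_length for n in n_values])

def q_deviation(hh_obs, hh_pred):
    """Mean squared relative deviation between two HH_n curves (illustrative form)."""
    hh_obs, hh_pred = np.asarray(hh_obs), np.asarray(hh_pred)
    return float(np.mean(((hh_obs - hh_pred) / hh_obs) ** 2))

# Usage: compare observed HH_n with an analytical prediction for a candidate demography,
# accepting the demography if the deviation falls below the paper's threshold of 0.001.
# n_grid = np.logspace(2, 6, 30)          # segment sizes from 100 bp to 1 Mb
# q = q_deviation(empirical_hh(roh_mark, n_grid, 2.55e9), hh_predicted)
```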
In several very short stepwise changes, where N e ≫ the number of generations in that time period, it was only possible to give a lower limit but not an upper limit. This occurs because generally LD patterns are not affected by relatively brief surges in population size or bottlenecks that occur over time periods that are very brief relative to N e (Nordborg and Tavare 2002). On reaching the best fit model in figure 4A, we then adjusted two or more neighboring Phases of differing N e to a single N e value, provided HH n remained within our threshold goodness of fit. We then re-estimated the predicted upper and lower N e limits in this revised demography (fig. 4B). This illustrates that it is difficult to determine exact time boundaries for changes in N e , although the overall demographic pattern remains similar in figure 4A and B. The inferred demography found to closely predict the observed HH n for Mark's corrected data (fig. 4A) reduces from the predefined large ancestral N e (~62,000) to very small (~90) in the present day ("present" being 1978, Mark's birth year). We have converted cattle generations into years before present using an average generation interval of 5 years: a reasonable estimate based on average generation intervals in modern cattle (Gutierrez et al. 2003; Mc Parland et al. 2007) as well as likely reproductive behavior in their wild ancestors. However, even if the estimated generation interval is increased to 7 years, this would not much affect our conclusions. Our results suggest that some 166,000 years ago (33,200 generations ago) there was a sharp reduction in N e to ~17,000 and this remained stable until around 12,000 years ago. Over the following 3,000 years there was a further steep decline in N e to ~3,500. From that time the reduction in N e became more gradual, but some 120 years ago the N e dropped rapidly again from around 1,500 to the current estimate of around 90. (Figure 3 caption: The observed pattern of HH n in the filtered and corrected sequence for Mark and Chief indicates that even when a low level of false-positive heterozygous errors remains in the sequence (filtered data), the observed pattern of HH n is biased downwards, particularly in segments > 10 kb. The HH n curves differ between bulls because Chief was sequenced at half the read depth of Mark, and the observed corrected data does not account for missed heterozygous SNP (false negatives).) In figure 5, we contrast the inferred demography using corrected sequence with that inferred from filtered sequence (i.e., not corrected for residual false positives), and also compare our results with those using the PSMC demographic inference method of Li and Durbin (2011). Parameter settings used for the PSMC method are given in the supplementary information, section 7, Supplementary Material online. Clearly the residual false positives in the filtered data have a major effect on estimates of N e in recent time for both methods: in contrast to the sharp reduction in recent N e inferred from corrected data, both models infer a rapid expansion around 1,000 years ago to a present day N e of around 20,000. Notably, the PSMC bootstrapping method (Li and Durbin 2011) gave a highly variable N e estimate across the most recent 1,000 years. However, in the PSMC method the final 1,000 years was modeled as a single fixed time period, so there was no scope for more recent changes in N e .
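Purely for illustration, one way a stepwise-N e demography could be tuned against an observed HH n curve is sketched below. It assumes a hypothetical analytical predictor predict_hh(ne_steps, n_grid) standing in for the coalescent model, and it is not the search strategy actually used in the paper.

```python
# Illustrative greedy/grid search over stepwise Ne values, one epoch at a time, keeping
# whichever candidate most reduces the Q deviation. predict_hh is a hypothetical
# placeholder for the analytical HH_n predictor; q_deviation is the measure sketched above.
import numpy as np

def fit_stepwise_ne(ne_steps, hh_obs, n_grid, predict_hh, q_deviation,
                    factors=(0.5, 0.8, 1.0, 1.25, 2.0), sweeps=5):
    ne = list(ne_steps)                              # one Ne per predefined time Phase
    for _ in range(sweeps):
        for i in range(len(ne)):
            candidates = [max(10, ne[i] * f) for f in factors]
            scores = []
            for c in candidates:
                trial = ne.copy()
                trial[i] = c
                scores.append(q_deviation(hh_obs, predict_hh(trial, n_grid)))
            ne[i] = candidates[int(np.argmin(scores))]
    return ne

# A demography would be accepted once q_deviation(...) <= 0.001, the threshold used above.
```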
Li and Durbin (2011) found that PSMC estimates of N e in humans were not reliable more ancestrally than 120,000 generations ago and our estimates of N e are similar at this time point (~600,000 years ago) but then diverge (fig. 5). We constrained our most ancestral estimate of N e to a time period of approximately 1 million generations ago, because the limited proportion of extremely short RoH limits the reliability of more ancestral N e estimates. (Figure 4 caption: Upper (red) and lower (blue) limits for predicted N e within each stepwise estimate. The ranges shown for N e estimates are those for which the predicted HH n summary statistic meets our goodness of fit threshold (Q ≤ 0.001). (A) is the initially inferred demography, while in (B) the adjacent time Phases of N e were adjusted to a single N e value where possible, while still ensuring the demography met the goodness of fit criteria. Note that in short time periods where N e considerably exceeds the number of generations for that time period, it is not possible to define a clear upper limit because sharp surges in N e over few generations leave no signature on the distribution of RoH.) A cross-validation of the inferred demography from Mark's corrected sequence was provided by using the inferred N e parameters in the analytical model to predict the expected pattern of HH n in Chief's corrected data, but re-scaling the mutation rate to match Chief's observed single base heterozygosity (table 2). The recombination rate (r) was fixed at 1 × 10^-8 between base pairs for both bulls, while the rescaled mutation rates per site per generation were 4.1 × 10^-9 in Mark and 2.15 × 10^-9 in Chief. These mutation rates were lower than the assumed true mutation rate of 1 × 10^-8 (Kumar and Subramanian 2002; Roach et al. 2010; Campbell et al. 2012) because they are scaled to account for the false-negative error rates in the sequence. The prediction of Chief's HH n using Mark's inferred demography (fig. 6) deviated by Q < 0.001 when compared with observed HH n across the range of segment sizes up to 0.01 Morgan. The pattern of observed and predicted HH n for each bull in figure 6 demonstrates the closeness of fit for the inferred demographic model HH n . We also used Chief's corrected sequence to independently infer a demography, fixing the most ancestral N e to 62,000 (fig. 7). The inferred demography, although broadly similar to Mark's, shows departures in the estimated time boundaries for changes in N e and does not always fall within the upper and lower limits of N e estimated with Mark's data. However, Chief's sequence has a very high false-negative rate (~70%, due to lower coverage than Mark) and we would therefore expect that this inferred demography is less accurate than Mark's. With such a high false-negative error rate, the large proportion of missing heterozygous positions is likely to have a more variable effect on the distribution of observed RoH. We further explored the accuracy of the method to infer demography using simulated data with added false-negative errors (supplementary information, section 5, Supplementary Material online). Finally, we accounted for the false-negative error rate in corrected sequence by scaling up the mutation rate. We calibrated the analytical model to predict a match with the estimated true single base heterozygosity, resulting in a mutation rate for error-free sequence data of 9.4 × 10^-9 per base per generation (close to our assumed true mutation rate of 1 × 10^-8).
Although this correction has no influence on the estimates of N_e, it is critical because it provides the final mutation rate required to produce a simulation of error-free sequence data. Supplementary figure S1, Supplementary Material online, compares the predicted HH_n for Mark's corrected sequence with the predicted true HH_n if all heterozygous SNP had been detected, given our inferred demography. The predicted true HH_n curve for Chief is the same as for Mark (not shown) because the estimated true heterozygosity for each bull differs by only 5 × 10⁻⁶ (table 2).
[Figure legend (partial): ... the Li and Durbin (2011) PSMC method (bold pink line). Also shown is the inferred demography from Mark's filtered sequence (with residual false-positive errors) using both our method (blue) and the PSMC method (maroon). There is a sharp contrast in the recent-time N_e estimate between corrected and filtered sequence because of bias due to false-positive errors.]

Discussion

In this study, we use whole-genome sequence to infer a stepwise demographic model by matching an empirical and an analytically predicted summary statistic of the RoH distribution (HH_n). Importantly, our study shows that even a low level of residual errors in sequence data (after stringent filtering) can lead to a considerable bias in estimates of the more recent N_e. We therefore developed a simple but robust method to correct for false-positive heterozygous errors. Furthermore, we demonstrate that this error correction method considerably improved the accuracy of the Li and Durbin (2011) PSMC method of demographic inference.

Comparison with Previous Methods

Two recent studies have estimated ancestral patterns of demography using genome sequence from several humans with diverse ethnic backgrounds (Gronau et al. 2011; Li and Durbin 2011). Gronau et al. (2011) modified the Rannala and Yang (2003) method, which implements a Bayesian Markov chain Monte Carlo (MCMC) approach, sampling from many possible genealogies to determine the likelihood that a given set of demographic parameters could have given rise to the observed properties of the sequence data. However, because this approach is too computationally demanding to implement with entire genome sequence, the authors selected 37,500 "neutral loci" from the sequence data, each of 1 kb length (~1.5% of each genome). They exploit the pattern of mutations to infer demographic history under the assumption that the "neutral loci" are in linkage equilibrium and that intra-locus recombination can be ignored. Importantly, therefore, their method is not able to capture additional demographic information from LD, and they emphasize that their primary focus is to estimate divergence times and migration rates between diverse human populations. Their model only allows for changes in N_e at the time of divergence, but otherwise N_e remains constant. Li and Durbin (2011) inferred ancestral demography from sequence by assessing the distribution of heterozygosity along the genome sequence of a single individual. Using a hidden Markov model, they infer the time to the most recent common ancestor and use the distribution of coalescent times to estimate a stepwise demographic pattern (PSMC). This shares some similarities with our approach; however, they first condense the sequence data by redefining nonoverlapping windows of 100 bp as a "single locus" that is either heterozygous or homozygous.
We use all heterozygous sites in our observed sequence to measure RoH so that no data are "lost." The Li and Durbin binning of 100 bp into a single locus may incur some loss of data where there are stretches of very high heterozygosity, because these are potentially a result of very distant ancestral coalescent events. Generally their N_e estimates for human data showed reasonable accuracy between 800 and 120,000 generations ago but were unreliable for more distant or more recent time periods. Recent N_e in human populations may be more difficult to estimate reliably because the expanding N_e results in relatively few recent coalescent events being present in the sequence (Li and Durbin 2011). Our relatively narrow estimates for recent-time N_e in Holstein cattle were potentially aided by the rapidly reducing N_e in Holsteins, which results in many more recent coalescent events compared with human populations. Li and Durbin (2011) do not correct their sequence data for residual false-positive errors, although using simulations they show that these errors result in an upward bias in their more recent N_e estimates. Importantly, we confirm that our method of correcting for residual false positives had a strong impact on improving the precision of the PSMC method to estimate recent-time N_e (supplementary information and figs. S10 and S11, Supplementary Material online). There was reasonable agreement between the inferred bovine demography using PSMC and our method between 200 and 120,000 generations ago. However, using simulated data we show that time boundaries for significant changes in N_e are not very precise with either our method or PSMC (supplementary information, section 5, and fig. S4A and B, Supplementary Material online). Our method can also be extended to estimate population divergence times in a similar way to the Li and Durbin (2011) approach, by combining data from male X chromosomes of two individuals from different populations. At the time of divergence, we would also expect to find a sudden increase in the estimated N_e; therefore, we believe it would be important to first correct for false positives. We believe that a strength of our approach is the modeling of multiloci LD across a wide range of segment sizes, rather than pairwise LD. Schaffner et al. (2005) calibrated a human demographic model to match pairwise LD and empirical allele frequencies for SNP across a wide range of genomic segments. Recently, Pool et al. (2010) used this human demography to simulate data and compared the distribution of long RoH in simulated data with empirical HapMap data. They found that the simulations considerably underestimated the proportion of longer RoH observed in empirical human data. Pool et al. (2010) argue that more detailed population genetic information for recent times may be gleaned by considering patterns of multiloci LD. Hayes et al. (2003) demonstrated that their multiloci measure of LD could be used for estimating past N_e, and had a lower coefficient of variation compared with the pairwise r² LD measure. However, their methodology is computationally more demanding than ours, and their analytical model does not include mutation.

Correcting for Errors and Potential Biases

Our results demonstrate the importance of correcting for even very low levels of false-positive errors. This is likely to be most important in species, such as cattle, where there have been recent sharp reductions in N_e or substantial recent bottlenecks.
We estimated error rates and corrected the sequence data of each bull independently, allowing us to test the effectiveness of our stochastic false-positive error correction, as well as our demographic model. Although read depth in Chief was low (~7×) and approximately half that in Mark, our inferred demography from Mark's HH_n predicted a close match with Chief's observed corrected HH_n. When we tested our false-positive correction method in simulated sequence data, the true distribution of RoH was not completely restored (supplementary information, section 6, and fig. S8, Supplementary Material online). In contrast, error correction in Mark's data appears to restore the length of RoH closer to the true length (assuming true length = SNP50 RoH) than for simulated data (supplementary information, section 6, Supplementary Material online). We believe that our correction method worked better in real sequence data because, rather than a very uniform spread of errors, there was some marked clustering of heterozygous SNP in small subregions within several sequence regions that matched a long SNP50 RoH. These clusters of heterozygotes may have arisen for several reasons: they may be real heterozygotes or false positives due to mapping errors of, for example, segmental duplications (SD; supplementary figs. S5 and S7, Supplementary Material online). If some clusters of heterozygosity were true heterozygotes, the residual false-positive error rate may have been slightly overestimated, and this could have resulted in some RoH lengths being overestimated. Although this would not have had a major impact on the distribution of the longer RoH in our data, it may have resulted in a small downward bias in more recent N_e estimates. Conversely, if error rates were correctly estimated but we have only partially restored the true distribution of RoH to that of error-free sequence, then our more recent N_e estimates will tend to have an upward bias. Although we found some evidence of clusters of heterozygotes in SD > 1 kb regions (SD as identified by Liu et al. 2009), the filtering of sequence at least partially addressed likely errors in more highly repeated SD regions by excluding SNP with excessively high coverage. This removed 43% (Mark) and 37% (Chief) of heterozygous SNP within SD > 1 kb. Furthermore, these regions appear to account for only approximately 3% of the bovine genome (Liu et al. 2009), so overall these errors are unlikely to have had a very significant impact on our estimates of N_e. In studies with larger SNP arrays, higher coverage, and/or more individuals sequenced, it is possible to use more sophisticated methods to minimize false-positive heterozygous errors, but an estimate of residual errors would still be required. A range of error detection strategies (such as machine learning and hidden Markov models) may be applied depending on the data available (Lynch 2008; Hoberman et al. 2009). Validation with individual SNP on commercial arrays should be used with caution, because these SNP are generally chosen as reliably "well behaved" and may therefore result in a downward bias in estimated false-positive rates, because they are also less likely to produce errors in sequence data (due to other variants in close proximity, for example). In theory, false negatives (missing heterozygotes) should not affect the demographic estimates except as a timescale effect, so they can be corrected for by scaling the mutation rate (Li and Durbin 2011).
To test this theory, we simulated sequence data with 50% false-negative errors and then used these data to infer the demography. Having inferred the demographic model from the simulated data with 50% false-negative errors, we were then able to analytically predict a close match to observed HH_n in error-free simulated sequence by scaling up the mutation rate (supplementary information, section 5, Supplementary Material online). However, the higher the false-negative error rate in sequence, the more difficult it will be to adequately interpret and account for the effect of large amounts of missing heterozygous data. Although the independently inferred demography using Chief's sequence with a 70% false-negative error rate was similar to Mark's, we advise caution in using sequence data with any more than 50% false negatives. Our analytical method of predicting HH_n and inferring demography assumes a known and constant genome-wide recombination and mutation rate. If our rescaled average mutation rate is lower or higher than the true value, this could result in some timescale changes but would still be expected to predict a pattern of demography reducing in size to the present day (supplementary fig. S3, Supplementary Material online). Variable recombination rates due to hotspots could arguably increase the proportion of some lengths of RoH, resulting in more variable estimates of N_e across time. Evidence to date indicates that although hotspots occur more in intergenic regions, their density and intensity vary across the genome and they appear to be evolving quite rapidly (International HapMap Consortium 2007). Therefore, overall we expect variable recombination rate to have a minimal impact on the overall demographic model, because we are using whole-genome sequence and a summary statistic of LD. We suggest a similar argument holds for variable mutation rates across the genome, but this could be further tested through simulations. A limitation of our study is that we used data from two bulls only, one for inference and one for validation. Our estimate of more ancestral demography is unlikely to be affected, because there are hundreds of thousands of small homozygous segments in the sequence of these animals that are inherited independently from much more distant ancestors. Our estimates of very recent effective population size (N_e) could be biased because, by chance, these bulls could be more or less inbred than other cattle of this breed. Very long RoH arise from one or a combination of: small recent N_e, inbreeding, and recent intense selection. However, both bulls have made a major contribution to the North American Holstein population, so they should be representative of the current population (Young and Seykora 1996). Furthermore, our recent-time N_e estimate is close to that expected from major pedigree analysis of the Holstein breed (Weigel 2001; Stachowicz et al. 2011). If multiple genome sequences are available for a population (with the same coverage and false-negative rates), then a combined estimate of the distribution of RoH would be preferable to improve the recent-time N_e estimates.

Inferred Bovine Demography

From approximately 1,500 generations ago to the present, our predicted demography broadly follows a similar pattern of decreasing N_e to that found in several other studies based on pairwise or multiloci LD in marker data (Hayes et al. 2003; Gautier et al. 2007; Kim and Kirkpatrick 2009; MacLeod et al. 2009; Villa-Angulo et al. 2009).
However, our full sequence data should provide better estimates of LD across very short segments compared with the less dense marker data used in these previous studies. This should allow more certainty of parameter estimates in the more distant past, because the very short homozygous segments trace back to very distant ancestors. MacLeod et al. (2009) used RoH in SNP50 data to estimate bovine demography and found very low sensitivity for estimates of N_e beyond 3,600 years ago (720 generations), because their data contained no SNP in very close proximity. We inferred a sharp reduction in N_e approximately 170,000 years ago, possibly marking the period of divergence between African and European Bos taurus cattle, estimated to have taken place between 26,000 and 250,000 years ago (Bradley et al. 1996; MacHugh et al. 1997; Troy et al. 2001). Alternatively, it may be linked to the divergence between B. indicus and B. taurus cattle, estimated to have occurred between 100,000 and 500,000 years ago (Ritz et al. 2000; Ho et al. 2008; Murray et al. 2010). To test whether this was simply a reflection of our most ancestral N_e being over-estimated, we also attempted to infer a bovine demography beginning with a fixed most ancestral N_e of 15,000, rather than 62,000. However, the demography could not be inferred without the estimated true mutation rate rising above 5 × 10⁻⁸, which is most unlikely. In figure 4A, we also infer a steep decline in N_e (9,000 to 17,000 years ago) around the time of cattle domestication, estimated to have taken place in Neolithic times some 10,000 years before present in the Near East (Perkins 1969; Bruford et al. 2003). It is possible that this bottleneck begins a little earlier in our demography due to wild B. primigenius populations being affected by post-glacial warming and the expansion of human populations that were taking place at this time (Soares et al. 2010). A severe population bottleneck around the time of domestication might be expected due to the difficulties involved in the initial capture and taming of wild cattle, and also because evidence suggests that B. taurus domestication was confined to one or two very local regions (Bruford et al. 2003). A recent genetic study of cattle domestication in the Near East, using mitochondrial DNA (mtDNA) samples from ancient and modern domestic cattle from the Near East only, estimated the domestication founder female N_e to be around 80, with confidence limits of 23 to 452 (Bollongino et al. 2012). Although this is a very severe bottleneck, it may not be in conflict with our results, because this is a mitochondrial DNA estimate of founding female N_e. Bollongino et al. (2012) assumed a single domestication event, and mitochondrial DNA samples were restricted to ancient and modern cattle sampled from close to the original site of domestication. It is likely that their estimate is therefore for a more localized period than would be possible using our method. Our method would not detect a relatively brief bottleneck, nor can it estimate female-only N_e, and it is likely that this female founder population was quite rapidly increased by the keeping and rearing of female offspring. It seems highly plausible that the females were more easily managed than males, but it would have been difficult, if not impossible, to prevent tamed females and their female offspring from breeding at random with wild males.
It is possible, given that our N_e estimate around the period of domestication was approximately 3,500, that for some time the female N_e could have been as small as 1,200 while the male N_e would then be 3,200 (based on the approximation N_e(total) = 4 N_e(female) N_e(male) / [N_e(female) + N_e(male)]). Only after many generations of breeding for tameness would it have become easier to manage and retain males in a domestic setting. Thus, for quite some time during the domestication process the male N_e was likely larger than the female N_e: a situation that would become gradually reversed up to modern-day cattle breeding, where the very small elite male population now limits the overall N_e. In apparent conflict with our results, Murray et al. (2010) detected no domestication bottleneck for taurine cattle, although they did detect a severe bottleneck around 30,000 to 50,000 years ago. There are a range of difficulties in determining the accurate timing of bottleneck events, including knowledge of mutation rates for the loci studied. The study by Murray et al. (2010) may have lacked power because they used gene regions which may have been subject to selection, and their methodology inferred demography from the summary statistic of the "site frequency spectrum" (SFS) only, which exploits information regarding mutation but not recombination. Furthermore, they combined genomic data across a range of B. taurus breeds from Europe and Africa, which may have concealed any domestication bottleneck, because there is some molecular evidence suggesting different domestication origins for European and African taurine cattle (Bradley et al. 1996; Beja-Pereira et al. 2006). The wide variation in past estimates of the timing of bovine bottlenecks highlights the need for further studies to confirm the accuracy of our estimates. Following the domestication period, our estimates of a further gradual decline in N_e are potentially due to increasing genetic isolation from the wild population, limited numbers of domestic cattle being taken to northern Europe, and the start of breed formation (Beja-Pereira et al. 2006). In the last 100 years, the further decrease in N_e is likely a result of breed registration rules requiring that animals are purebred, as well as the high selection intensity in modern breeding programs through very extensive use of artificial insemination (Goddard 1992; Young and Seykora 1996; Stachowicz et al. 2011). Although Finlay et al. (2007) report a sharp increase in cattle population size from domestication to the present day, their study is based on mtDNA and is therefore indicative of the expanding female population size, with no adjustment made for the male effective population size, which was likely decreasing to a very small value. They also combined mtDNA from a number of different cattle breeds and analyzed this as one population, which therefore reflects the present-day population size of domestic cattle generally, such that in this context an expanding population is not surprising. Similarly, Murray et al. (2010) estimated a large present-day N_e for domestic cattle, but they also combined genomic data from a range of breeds to define a "taurine" population. Our estimate of present-day population size of between 80 and 220 is close to several independent estimates for this breed using both LD methods (Hayes et al. 2003; de Roos et al. 2008; Kim and Kirkpatrick 2009; MacLeod et al. 2009) as well as extensive pedigree records (Weigel 2001; Stachowicz et al. 2011).
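As a quick check of the sex-ratio figures quoted above (a worked example added here, using only the numbers cited in the text), the standard unequal-sex-ratio approximation gives

\[
N_e^{\mathrm{total}} = \frac{4\,N_e^{\mathrm{female}}\,N_e^{\mathrm{male}}}{N_e^{\mathrm{female}} + N_e^{\mathrm{male}}} = \frac{4 \times 1{,}200 \times 3{,}200}{1{,}200 + 3{,}200} \approx 3{,}490,
\]

which is consistent with the overall N_e of roughly 3,500 inferred around the domestication period.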
This current-day N_e indicates the importance of taking steps to monitor and minimize inbreeding in Holstein cattle, to avoid potential negative effects of inbreeding depression on economic traits. Two traits of particular concern in modern Holstein dairy cattle breeding are fertility and longevity (VanRaden 2004), both of which could potentially be affected by inbreeding depression as a result of low N_e.

Conclusions

Our study demonstrates a computationally efficient method of inferring ancestral demography from empirical observations of the distribution of RoH in whole-genome sequence. We also demonstrate a method to correct for the potentially serious bias of residual false-positive errors on recent estimates of N_e. Our inference method can be applied to any outbred diploid species, for one or multiple individuals, without the need to phase the data into haplotypes. That is, HH_n can be summarized from all RoH measured within individuals, provided their sequences have similar false-negative error rates. If sequence haplotypes are available from unrelated individuals, the RoH can be measured both within and between pairs of individuals. Our method provides a demographic model that can be used to simulate sequence data generated under a null hypothesis with realistic multiloci LD patterns, to calibrate significance tests for evidence of selection or for a range of other genetic studies.

Materials and Methods

For this study, our definition of RoH in sequence data refers to an observed unbroken run of homozygous (i.e., IBS) base pairs along a pair of homologous chromosome segments within a diploid individual. We define our HH_n summary statistic in sequence data as the probability that a pair of homologous chromosomes are observed IBS for at least n base pairs to the right of a site chosen uniformly at random. Three key steps in our methodology for demographic prediction are as follows: 1) detection of heterozygous SNP and error rates in whole-genome sequence, 2) correction of false positives and calculation of the summary statistic, HH_n, from the distribution of RoH for a range of segment sizes (n), and 3) inference of a demographic model that predicts HH_n matching that observed in corrected sequence. A summary of the workflow is given in figure 1.

Whole-Genome Sequence

Whole-genome sequence was generated for two Holstein-Friesian bulls using a 454 FLX-Titanium platform (Larkin et al. 2012). The bulls were Pawnee Farm Arlinda Chief ("Chief") and one of his offspring, Walkway Chief Mark ("Mark"). Mark was sequenced at approximately 13× coverage, while Chief was sequenced at approximately 7× coverage. Details of sequencing, alignment, mapping, and SNP discovery are published in Larkin et al. (2012). For our study, we extracted all information on heterozygous SNP within each animal's autosomal genome sequence and applied stringent quality control filters to remove likely errors (Larkin et al. 2012, supplement). Filtered heterozygous sequence SNP were then used to measure the base pair (bp) length of all intervening RoH within each animal's sequence.

Genotyping with SNP50 BeadChip

DNA from Chief, Mark, and 92 of Mark's offspring was used to generate approximately 50,000 SNP genotypes each, using the Infinium BovineSNP50 BeadChip (http://www.illumina.com/products/bovine_snp50_whole-genome_genotyping_kits.ilmn, last accessed July 23, 2013) (described in Larkin et al. 2012).
The "SNP50" genotypes, generated independently of the sequence data, were used to validate the corresponding sequence SNP genotypes, but were first subject to strict quality control. SNP50 genotypes from Mark's offspring were used to identify and remove inconsistencies in Mark's SNP50 genotypes, and checks were also made for any discordant genotypes between Chief and Mark. Additionally, based on preliminary checks of concordance between SNP50 and sequence SNP, SNP50 genotypes were discarded if the Illumina "GenTrain" and "GenCall" scores were less than 0.8 (http://www.illumina.com/Documents/products/technotes/technote_gencall_data_analysis_software.pdf, last accessed July 23, 2013). All SNP50 genotypes with unknown reference genome position were also eliminated. This left 38,956 and 39,026 SNP50 genotypes for Mark and Chief, respectively. The lengths of all RoH between SNP50 heterozygous positions were recorded within each animal.

Correcting Sequence for Errors

Two types of error in the sequence data must be quantified: "false-negative" errors (heterozygous positions missed in the sequence data and called as homozygous) and "false-positive" errors (homozygous positions wrongly called as heterozygous in the sequence data). The pattern of heterozygous sites (clustered or not, dense to sparse) occurring along a pair of sequences has a direct impact on the distribution of observed lengths of RoH. In a neutral model, the pattern of heterozygous sites across the genome can be affected by changes in past N_e, such as bottlenecks, and by the recombination rate. If false-positive errors arise randomly across the genome, they will break up long RoH into a number of shorter RoH, whereas in regions with more dense true heterozygous sites the false-positive errors will have much less impact on the shorter RoH. Thus false positives may particularly bias the estimation of more recent N_e, because the longer RoH coalesce to more recent ancestors. The distribution of false negatives (missed heterozygous sites) should follow the same distribution as discovered true heterozygous sites, assuming no bias in the sequencing method or SNP discovery. These errors will therefore affect shorter and longer RoH more similarly and thus will have little impact on our estimates of ancestral N_e. However, the number of heterozygous sites present in the genome under a given demography is directly proportional to the mutation rate per site per generation, and false-negative errors reduce the number of observed heterozygous sites. These missed heterozygous sites represent an independent thinning of the frequency of true heterozygous sites, which is equivalent to a reduced mutation rate. It should therefore be possible to correct for false negatives by simply rescaling the true mutation rate (μ_T) before inferring the demography. The scaled mutation rate (μ_R) should be of the order μ_R = μ_T(1 − q), where the false-negative error rate (q) was estimated by validation with SNP50 data (assumed error-free), so that 1 − q is the proportion of sequence SNP observed as heterozygous given that the SNP50 position was heterozygous. We assumed μ_T ≈ 1.0 × 10⁻⁸ per base per generation, based on recent estimates of mammalian mutation rates (Kumar and Subramanian 2002; 1000 Genomes Project Consortium 2010; Roach et al. 2010; Campbell et al. 2012).
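The false-negative rescaling above amounts to a few lines of code. The sketch below is a minimal illustration written for this text (not code from the study), assuming heterozygous calls are available as genomic positions; the function and variable names are ours.

```python
MU_T = 1.0e-8  # assumed true mammalian mutation rate per base per generation

def false_negative_rate(snp50_het_positions, sequence_het_positions):
    """q: fraction of SNP50 heterozygous positions missed by the sequence calls
    (SNP50 genotypes are treated as error-free, as in the text)."""
    seq_het = set(sequence_het_positions)
    found = sum(1 for pos in snp50_het_positions if pos in seq_het)
    return 1.0 - found / len(snp50_het_positions)

def rescaled_mutation_rate(q, mu_true=MU_T):
    """mu_R = mu_T * (1 - q): thinning of heterozygous sites by false negatives is
    treated as an effectively lower mutation rate when inferring the demography."""
    return mu_true * (1.0 - q)

# For example, q of about 0.59 would give mu_R of about 4.1e-9, of the same order
# as the rescaled rate reported for Mark above.
print(rescaled_mutation_rate(0.59))
```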
Similarly, we estimated the true single-base heterozygosity (H_T) in the sequence of each bull by scaling the observed heterozygosity (H_O) for the false-negative rate: H_T = H_O/(1 − q). The residual false-positive error rate in our filtered sequence was expected to be very low, and therefore the accuracy of the error rate estimated by direct validation with SNP50 data was limited by the small number of SNP50 heterozygous positions (table 1). There is also potential bias due to unidentified errors in SNP50 data and the nonrandom choice of "better behaved" SNP on commercial SNP arrays. We therefore developed a simple method to quantify residual false-positive error rates in filtered sequence data, using long RoH in the SNP50 data. An RoH in our SNP50 data was defined as a run of adjacent homozygous SNP genotypes within an individual. We identified several SNP50 RoH spanning >10 Mb: five in Chief and four in Mark. There is a high probability that these long SNP50 RoH identify IBD chromosome segments; that is, these regions are also very likely to be homozygous in sequence data. This assumption is justified because within the pedigree of each bull there are several inbreeding loops to recent common ancestors three to six generations ago (https://www.holstein.ca/, last accessed July 23, 2013). In cattle, one Morgan is approximately equal to 1 × 10⁸ base pairs (Arias et al. 2009). Although mutations may occur in the generations since inheriting chromosome segments IBD from these common ancestors, this is likely to account for only a single heterozygous position per 10 Mb every 5 generations, assuming a mutation rate of 1.0 × 10⁻⁸ per base per generation. The average single-base heterozygosity across these long SNP50 RoH in the filtered sequence data therefore provided an estimate of the residual false-positive error rate. We excluded the outer 0.15 Mb of each SNP50 RoH region when estimating the error rate, to avoid the possibility of including a region beyond the end of the sequence RoH that by chance appeared as part of the long SNP50 RoH, given that the average distance between SNP50 markers is 0.07 Mb. To remove bias due to false-positive heterozygous errors in the sequence, we aimed to restore the distribution of RoH rather than to identify the actual false positives remaining. Filtered sequence was "corrected" by removing the expected proportion of residual heterozygous errors, assuming a uniform distribution across the genome. Thus, in nonoverlapping windows of three times the average length expected to contain one false heterozygous error, we randomly deleted three heterozygous SNP per window (or fewer if fewer existed in a window). We chose this window length having first tested the method with a range of window sizes (1 to 5) and compared the resulting RoH pattern across the entire genome of Mark and Chief with that of the RoH in the SNP50 data (e.g., supplementary figs. S5, S6, and S7, Supplementary Material online). We also compared our "3 error window" correction method with randomly deleting the same proportion of heterozygous SNP without implementing uniform deletion from nonoverlapping windows: that is, removing the restriction that false-positive errors arise with equal probability across the genome. We tested the variability in the resulting RoH distribution after applying the 3 error window correction method by replicating the data correction 25 times, resulting in 25 sets of "corrected sequence" for each bull. Henceforth, these data sets are referred to collectively as "corrected sequence" data.
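A schematic sketch of the "3 error window" correction, as we read the description above, is given below. This is an assumption-laden illustration rather than the authors' implementation: het_positions, genome_length, and fp_rate_per_bp are hypothetical inputs, and false positives are assumed to fall uniformly along the genome.

```python
import random

def correct_false_positives(het_positions, genome_length, fp_rate_per_bp, seed=None):
    """Remove the expected number of residual false-positive heterozygous calls:
    in each nonoverlapping window of 3x the average length expected to hold one
    false heterozygous call, delete three heterozygous SNP at random (or fewer
    if fewer are present in that window)."""
    rng = random.Random(seed)
    window = int(3.0 / fp_rate_per_bp)   # 3 x (average bp per expected false positive)
    positions = sorted(het_positions)
    kept, i, start = [], 0, 0
    while start < genome_length:
        end = start + window
        in_window = []
        while i < len(positions) and positions[i] < end:
            in_window.append(positions[i])
            i += 1
        n_drop = min(3, len(in_window))
        dropped = set(rng.sample(in_window, n_drop)) if n_drop else set()
        kept.extend(p for p in in_window if p not in dropped)
        start = end
    return kept
```

Repeating the deletion with different random seeds would give replicate "corrected sequence" sets of the kind described above.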
Further validations of the correction methods for false negatives and false positives were carried out using simulated data (supplementary information, section 6, Supplementary Material online).

Observed HH_n Summary Statistic in Sequence

We use the HH_n summary statistic (MacLeod et al. 2009) to describe the distribution of observed RoH in diploid whole-genome sequence. For any given value of n, HH_n is calculated as the proportion of sites in the diploid genome for which at least n bases to the right are observed homozygous, expressed relative to the total possible number of such sites if the entire genome were homozygous. Take a trivial example of calculating HH_5 (i.e., n is 5) in one individual with a single chromosome only 10 bp long, in which we observe only one RoH of 6 bp. For this individual, HH_5 = 2/6, because moving left to right across each base pair on this chromosome there will be only two sites at which we will observe at least 5 homozygous base pairs to the right, and the maximum possible number of such sites if the entire 10 bp had been homozygous is six. We calculated HH_n in both filtered and corrected sequence data, for a range of segment sizes between 1 and 1,000,000 bp. This maximum segment length was chosen because it should be informative up to recent times, given that LD on a segment is most influenced by N_e approximately 1/(2c) generations ago, assuming a linearly changing N_e, where c is the segment length in Morgans (Hayes et al. 2003). Also, in cattle the RoH distribution becomes relatively flat at this length because, although there are some rare much longer RoH, the majority of RoH are shorter. HH_n was calculated for each of the 25 replicates of "corrected sequence" and was then averaged across replicates for each bull. To evaluate the robustness of the correction method, we measured the variability between replicates of the summary statistic, HH_n, across a range of segment sizes (n) up to 1 Mb, because HH_n provides the basis for inferring the demography. Variability was assessed as the coefficient of variation across the 25 replicates.

Analytical HH_n Prediction

For demographic inference, we compared the observed HH_n in sequence with an analytically predicted HH_n. The analytical HH_n prediction is based on a simplified coalescence method that accommodates stepwise changes in historical N_e, with constant mutation and recombination rates (MacLeod et al. 2009). We implemented a small modification to the original method to increase the computational speed of the calculation without compromising the accuracy of prediction (details are given in supplementary information, section 1, Supplementary Material online). With the modified method, we predicted HH_n for n = 1, 2, 3, ... to 1,000 bp, and then for 1,000, 2,000, 3,000, ... to 1,000,000 bp. This allowed us to rapidly test a range of demographic models to search for one predicting a good match to observed HH_n.

Demographic Inference

The inference approach is similar to those where demographic parameters are sampled from a grid of prior parameters to determine those most likely to have given rise to summary statistics in observed data (Beaumont 2004). However, rather than simulating data with each new set of N_e parameters sampled, we use the analytical model to predict the HH_n summary statistic for any sampled set of demographic parameters and determine the goodness of fit with the observed HH_n across a range of segment lengths (MacLeod et al. 2009).
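The observed HH_n calculation described in the preceding subsection is simple enough to show directly. The sketch below is illustrative only (not the study's code) and, for a single chromosome, reproduces the trivial HH_5 = 2/6 example; for a whole genome the numerators and denominators would be summed over chromosomes.

```python
def hhn(roh_lengths_bp, chrom_length_bp, n):
    """Observed HH_n for one chromosome pair: the number of sites from which a
    homozygous run of at least n bp extends to the right, divided by the maximum
    possible number of such sites if the whole chromosome were homozygous."""
    max_sites = chrom_length_bp - n + 1
    observed = sum(max(0, length - n + 1) for length in roh_lengths_bp)
    return observed / max_sites

# Trivial example from the text: a 10 bp chromosome with a single 6 bp RoH.
assert abs(hhn([6], 10, 5) - 2 / 6) < 1e-12
```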
We used the averaged HH_n from the 25 replicates of Mark's corrected sequence to infer the demography. Parameters in the model include: effective population size over variable time periods, "Phases" (N_e Phase i), with time measured in generations (G Phase i), as well as the mutation rate (μ) and recombination rate (r). The model assumes a single population with no selection or migration. Coalescent time scales are dependent on N_e, μ, and r; we therefore assume both μ and r are constant across time and across the genome. We fixed r between any base pair as 1 × 10⁻⁸, so that 1 Morgan was assumed to be approximately equal to 1 × 10⁸ base pairs (Arias et al. 2009). We assumed the true mutation rate (μ_T) to be of the order of 1.0 × 10⁻⁸ per single base per generation, based on recent mammalian estimates (Kumar and Subramanian 2002; Roach et al. 2010; Campbell et al. 2012). Therefore, the scaled mutation rate accounting for false negatives in the inferred demography was expected to be of the order μ_R = μ_T(1 − q). The accuracy of this scaling depends on how close the estimated false-negative rate, q, is to the true value. The analytical model cannot infer a starting value for N_e; therefore we ran some preliminary checks with a range of simple models with the most ancestral N_e between 50,000 and 100,000 and compared single-locus heterozygosity with that observed in the sequence. We used this N_e range because the ancestral N_e of domestic cattle has been estimated to be between 50,000 and 100,000, based on evidence from several independent studies (de Roos et al. 2008; MacEachern et al. 2009), around the time of Bos species divergence from Bubalus (buffalo) 1 to 5 Ma (Ritz et al. 2000). Based on these preliminary checks, our starting assumption was that the most ancestral N_e = 62,000. The method is not expected to predict N_e further back than around 1 to 1.5 million generations ago, given the rough rule of thumb that LD on segment lengths of c Morgans will inform the N_e estimates 1/(2c) generations ago. It was therefore assumed that the most ancestral population had reached a drift-recombination-mutation equilibrium, over a time period fixed as >10 N_e generations. Our demographic inference makes no prior assumption of the maximum number of time intervals (Phases) or the specific boundary of time intervals at which there could be an instantaneous change in N_e. Rather, we begin with constant N_e and exploit the theory that LD over shorter distances reflects more ancestral population parameters than LD at larger distances. We employed the iterative approach of MacLeod et al. (2009) to search the parameter space for the best-fit demography, with variable N_e:
1. Use the analytical model with the starting parameters to predict HH_n for segments of length n, where n was 1-1,000 bp, and then 1, 2, ..., 1,000 kb.
2. Test the match between the predicted HH_n and the observed HH_n in the sequence, across the range of segment lengths. The HH_n summary statistic is a continuous variable for each segment length tested; therefore a "match" was defined as meeting a threshold goodness-of-fit test for each HH_n in the range of segment lengths tested: Q_HHn = |observed HH_n − predicted HH_n| ≤ ε, where ε is a stringent predetermined threshold that we set to 0.001. This threshold choice was based on our prior experience with the model using simulated data. It was also confirmed as reasonable because HH_n in replicated simulations using our inferred demography generally differed by Q ≤ ε for all segment lengths up to 0.01 Morgan.
3. If the threshold was not met at any one or more HH_n for a given segment size n, the N_e was resampled over one or more time periods (Phases) since the ancestral population. It is expected that LD on a segment size of c Morgans is most affected by the population size approximately 1/(2c) generations in the past (Hayes et al. 2003). We therefore conditioned the re-sampling of N_e over a variable time period that corresponded to the range of segment lengths where there was a mismatch in observed and predicted HH_n. Therefore, the time boundaries for changes in N_e were not predetermined, but rather were estimated as the approximate time periods corresponding to the segment lengths n where HH_n mismatched. For example, if HH_n was under- (over-) predicted compared with the observed HH_n for segments of lengths of 1.0 × 10⁻⁵ to 0.01 Morgans, we assumed that from approximately the present day to 50,000 generations ago [1/(2c)] the N_e should be reduced (increased) in size from the current value to N*_e Phase i. However, if the reduced (increased) N*_e was close in value to N_e Phase i+1 or N_e Phase i−1, then we first tried to match this, to minimize the number of additional Phases. If HH_n mismatched across one or more time periods between others that matched, the N_e was first adjusted in the poorest-fitting, most ancestral time period, followed by a return to Step 2.
Steps 2 and 3 were repeated until a demographic model was found that met the goodness-of-fit criteria for HH_n across all segment lengths tested. If required, the mutation rate was adjusted slightly to ensure a close match (±5.0 × 10⁻⁵) with the single-locus heterozygosity observed in the corrected sequence. Steps 2 and 3 could also be implemented with systematic sampling from a grid of prior N_e values over a pre-defined number of Phases (each of G generations) with fixed time boundaries. The goodness-of-fit parameter (Q) can be summed across the range of HH_n for each tested demography (i.e., the sum of Q_HHn over n = 1 to k, where k is the total number of HH_n values tested) and minimized to provide a means of ranking each demographic model tested. Having successfully inferred a demography from Mark's data that met our threshold ε, we estimated upper and lower limits for each stepwise change in N_e based on the maximum and minimum N_e value possible where the threshold of 0.001 was still met. All other stepwise values of N_e were held constant when estimating these upper and lower bounds. Slight adjustment was made to the mutation rate if necessary to ensure a match with the observed single-locus heterozygosity. For computational efficiency, we tested intervals of ±~10% of each stepwise N_e, or less if the first increase/reduction resulted in Q > 0.001. After inferring a good-fit demography, we then attempted to combine two or more adjacent Phases of differing N_e to a single N_e value, to test the resolution of determining time boundaries for changes in N_e. Finally, using our inferred demographic model, we predicted the expected HH_n for the corrected sequence data had there been no false-negative errors (missing heterozygous SNP). We did this by scaling up the mutation rate to match the estimated true single-base heterozygosity in Mark's sequence. This step has no effect on the N_e estimates but is of considerable importance for estimating the true mutation rate required to simulate sequence that mimics error-free sequence data, where all true heterozygous sites are discovered.
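The two quantities driving this loop, the per-length goodness-of-fit test and the 1/(2c) mapping from segment length to the time period whose N_e is revised, can be sketched as follows. This is an illustrative reading of the procedure, not the authors' code; observed_hhn and predicted_hhn would come from the corrected sequence and the analytical model, respectively, and Q is taken here to be the absolute deviation.

```python
EPSILON = 0.001          # goodness-of-fit threshold used in step 2
BP_PER_MORGAN = 1e8      # 1 Morgan assumed to be ~1e8 bp in cattle (Arias et al. 2009)

def q_deviation(observed_hhn, predicted_hhn):
    """Q_HHn per segment length n: deviation between observed and predicted HH_n."""
    return {n: abs(observed_hhn[n] - predicted_hhn[n]) for n in observed_hhn}

def mismatched_lengths(q, epsilon=EPSILON):
    """Segment lengths (bp) that fail the fit test; their 1/(2c) time periods are
    the Phases whose N_e is resampled in step 3."""
    return sorted(n for n, dev in q.items() if dev > epsilon)

def generations_ago(n_bp):
    """Rule of thumb: LD on a segment of c Morgans mostly reflects N_e about
    1/(2c) generations ago (Hayes et al. 2003)."""
    c = n_bp / BP_PER_MORGAN
    return 1.0 / (2.0 * c)

# Example from step 3: misfits over segments of 1e-5 to 0.01 Morgan point at the
# period from about 50 to about 50,000 generations ago.
print(generations_ago(0.01 * BP_PER_MORGAN))   # 50.0
print(generations_ago(1e-5 * BP_PER_MORGAN))   # 50000.0
```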
Validation of the Inferred Demographic Model

We cross-validated the inferred demography, using the analytical model to predict the expected HH_n in Chief's independently corrected sequence. Assuming we have accurately corrected for the different false-positive error rates in the two bull sequences, the difference between the distribution of RoH in Mark's and Chief's corrected sequence is due only to a higher false-negative rate in Chief's sequence (lower coverage). If false-negative errors are equivalent to a lowered mutation rate, then Mark's demographic model should predict Chief's HH_n by simply rescaling the mutation rate to account for the higher proportion of false negatives in Chief's sequence. We therefore rescaled the mutation rate to match Chief's observed single-base heterozygosity, predicted HH_n, and estimated the goodness-of-fit parameter (Q) between predicted and observed HH_n. We also used Chief's corrected sequence HH_n to independently infer a demography, although acknowledging that these data would be less reliable than Mark's because the lower coverage resulted in a high false-negative error rate (~70%). However, we also inferred a demography using simulated data (based on Mark's inferred demography) in which we had randomly changed heterozygous SNP to homozygous to mimic the false-negative rate in Mark's corrected sequence (supplementary information, section 5, Supplementary Material online).
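For the cross-validation, the mutation-rate rescaling can be approximated as shown below. This sketch is ours and rests on an extra assumption not stated in the text: to first order, the model's expected single-base heterozygosity scales roughly linearly with μ, so matching Chief's observed heterozygosity amounts to scaling Mark's rescaled rate by the ratio of observed heterozygosities (in practice the calibration was done within the analytical model itself).

```python
def rescale_mu_for_validation(mu_mark, het_obs_mark, het_obs_chief):
    """Approximate mutation rate to use when predicting Chief's HH_n from Mark's
    inferred demography: scale Mark's rescaled rate by the ratio of observed
    single-base heterozygosities (assumes heterozygosity roughly proportional to mu)."""
    return mu_mark * (het_obs_chief / het_obs_mark)
```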
Tadalafil Promotes the Recovery of Peripheral Neuropathy in Type II Diabetic Mice

We previously demonstrated that treatment of diabetic peripheral neuropathy with the short half-life (4 hours) phosphodiesterase 5 (PDE5) inhibitor sildenafil improved functional outcome in diabetic db/db mice. To further examine the effect of PDE5 inhibition on diabetic peripheral neuropathy, we investigated the effect of another potent PDE5 inhibitor, tadalafil, on diabetic peripheral neuropathy. Tadalafil is pharmacokinetically distinct from sildenafil and has a longer half-life (17+ hours) than sildenafil. Diabetic mice (BKS.Cg-m+/+Leprdb/J, db/db) at age 20 weeks were treated with tadalafil every 48 hours for 8 consecutive weeks. Compared with diabetic mice treated with saline, tadalafil treatment significantly improved motor and sensory conduction velocities in the sciatic nerve and peripheral thermal sensitivity. Tadalafil treatment also markedly increased local blood flow and the density of FITC-dextran perfused vessels in the sciatic nerve, concomitantly with increased intraepidermal nerve fiber density. Moreover, tadalafil reversed the diabetes-induced reductions of axon diameter and myelin thickness and reversed the diabetes-induced increase in g-ratio in the sciatic nerve. Furthermore, tadalafil enhanced the diabetes-reduced nerve growth factor (NGF) and platelet-derived growth factor-C (PDGF-C) protein levels in diabetic sciatic nerve tissue. The present study demonstrates that tadalafil increases regional blood flow in the sciatic nerve tissue, which may contribute to the improvement of peripheral nerve function and the amelioration of diabetic peripheral neuropathy.

Introduction

Phosphodiesterase-5 (PDE5) is highly specific for the hydrolysis of cyclic guanosine monophosphate (cGMP). PDE5 inhibitors are primarily used as pharmacological agents for the treatment of erectile dysfunction (ED) and pulmonary hypertension [1][2][3]. Peripheral neuropathy is a chronic complication of diabetes. Human and experimental studies have demonstrated that the PDE5 inhibitor sildenafil reduces symptoms of peripheral neuropathy [4,5]. Our previous studies showed that hyperglycemia upregulated PDE5 expression, and that suppression of PDE5 by sildenafil increased cGMP levels and significantly improved neurovascular function and neurological outcome in diabetic mice, indicating that sildenafil has a beneficial effect on the treatment of diabetic peripheral neuropathy [6,7]. To verify that this therapeutic effect on diabetic peripheral neuropathy is not specific to sildenafil, we employed another PDE5 inhibitor, tadalafil, which is structurally and pharmacokinetically distinct from sildenafil. Tadalafil has a half-life of over 17 hours and its effect can last for up to 36 hours, while sildenafil has a half-life of 4 hours [8]. Diabetic peripheral neuropathy is a chronic disease, and a short-acting treatment may not be an optimal therapeutic approach. The considerably longer duration of action of tadalafil may permit less frequent dosing and could potentially reduce adverse effects associated with treatment. For example, the efficacy of sildenafil is affected by certain foods [9]. However, the absorption and activity of tadalafil are unaffected by food ingestion, age, diabetes, or mild to moderate hepatic insufficiency [8,10]. Tadalafil is safe and effective for the treatment of ED. Moreover, tadalafil is less expensive than sildenafil.
Therefore, tadalafil warrants study as a potential treatment for diabetic peripheral neuropathy. Tadalafil increases blood flow in the ischemic brain and improves neurological outcome in ischemic rats [11]. Patients with ED treated with tadalafil have reduced diabetic peripheral neuropathy symptoms [12]. However, the role of tadalafil in diabetic neuropathy has not been fully investigated. In this study, we investigated whether tadalafil is effective and safe for the treatment of diabetic neuropathy.

Ethics statement

All experimental procedures were carried out in accordance with NIH Guidelines for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Henry Ford Hospital (IACUC#1398). Male BKS.Cg-m+/+Leprdb/J (db/db) mice (Jackson Laboratories, Bar Harbor, Maine) aged 20 weeks were used. Age-matched heterozygote mice (db/m), a non-penetrant genotype (Jackson Laboratories), were used as the control animals.

Tadalafil treatment

db/db mice at the age of 20 weeks were treated with tadalafil at a dose of 10 mg/kg (orally administered, p.o.; Lilly USA, LLC), every other day for 8 weeks (n = 15/group). db/db mice (n = 15/group) at the same age treated with the same volume of saline were used as a control group. Age-matched non-diabetic db/m mice treated with the same volume of saline (n = 15/group) were used as an additional control group. All mice were sacrificed 8 weeks after treatment. The dose and frequency of tadalafil were selected based on our published studies [11]. Levels of blood glucose, triglyceride, and A1C were measured using an instant check meter (Roche Diagnostics, Indianapolis, IN), a CardioChek PA Analyzer with Triglyceride Test Strips (Polymer Technology Systems), and the A1C Now+ MULTI-TEST A1C SYSTEM, respectively, according to the manufacturers' instructions. Blood glucose levels, body weight, and functional tests were measured before the treatment as a baseline and then every 2 weeks until sacrifice. Triglyceride and A1C levels were measured prior to the treatment and at the end of the experiment (8 weeks after the initial treatment). Electrophysiological measurements were performed before the treatment and then every 4 weeks until sacrifice. All procedures and analyses were performed by investigators who were blinded to the treatment administered.

Neurophysiological measurements

Sciatic nerve conduction velocity was assessed with orthodromic recording techniques, as previously described [6,13]. Briefly, mice were anesthetized with ketamine/xylazine (i.p., 100/10 mg/kg). The stimulating electrodes were placed at the knee and sciatic notch. Single square-wave current pulses were delivered using an isolated pulse stimulator (Model 2100, A-M Systems, Everett, WA). Electromyograms were recorded simultaneously by two sterilized electrodes placed in the dorsum of the foot with a Grass amplifier (Model P5, Grass Instruments, Quincy, MA). During the measurements, animal rectal temperature was maintained at 37 ± 1.0°C using a feedback-controlled water bath. Motor nerve conduction velocity (MCV) and sensory nerve conduction velocity (SCV) were calculated according to a published study [13].

Measurement of thermal sensitivity

To examine sensitivity to noxious heat, plantar and tail-flick tests were performed using a thermal stimulation meter (IITC model 336 TG combination tail-flick and paw algesia meter; IITC Life Science) according to published methods [14].
Briefly, mice were placed within a plexiglass chamber on a transparent glass surface and allowed to acclimate for at least 20 min. For the plantar test, the meter was activated after placing the stimulator directly beneath the plantar surface of the hind paw. The paw-withdrawal latency in response to the radiant heat (15% intensity, cut-off time 30 sec) was recorded. For the tail-flick test, the meter was set at 40% heating intensity with a cut-off at 10 sec. For both tests, at least five readings per animal were taken at 15 min intervals, and the average was calculated [15].

Measurement of regional blood flow by laser Doppler flowmetry

Regional blood flow in the sciatic nerve was measured at the end of the experiments (8 weeks after the treatment) using laser Doppler flowmetry (LDF PeriFlux PF4, Perimed AB, Järfälla, Sweden) [15,16]. Briefly, under anesthesia (ketamine/xylazine, i.p., 100/10 mg/kg; JHP Pharmaceuticals LLC, MI; LLOYD Inc., Iowa), the mouse was mounted on a Kopf stereotaxic apparatus. The left sciatic nerve was exposed in the mid-thigh region, and animal rectal temperature was maintained at 37 ± 1.0°C during the measurement period using a feedback-controlled water bath. Using a micromanipulator, an LDF probe was placed at the surface of the sciatic nerve, and relative flow values expressed as perfusion units were recorded every 5 minutes for a total of 5 records. Regional blood flow values from non-diabetic mice were used as baseline values, and data are presented as a percentage of baseline values.

Staining myelin sheaths

The sciatic nerves were fixed in 2.5% glutaraldehyde and 0.5% sucrose (Sigma) in PBS buffer for 6-8 hours, and then immersed in 2% osmium tetroxide (Sigma) for 2 hours. The specimens were then dehydrated through successive alcohol washes and embedded in paraffin [17]. Semi-thin transverse sections (2-μm thick) were cut and stained with 1% toluidine blue, and three semi-thin sections per mouse were analyzed. This method has been demonstrated to be a reliable way to measure myelin sheaths [7,18].

Immunohistochemistry

The sciatic nerve tissues were fixed in 4% paraformaldehyde for immunohistochemistry and then embedded in paraffin according to a published protocol [6]. Three cross sections (6-μm-thick) or three longitudinal sections (6-μm-thick) at 60 μm apart per animal were used [6]. Footpads were fixed in Zamboni's fixative for 2 hours, washed in PBS, and then kept in 30% sucrose/PBS overnight at 4°C. The samples were embedded in OCT compound and stored at −80°C. Three longitudinal 20-μm-thick footpad sections from each mouse were prepared.

Image acquisition and quantification

To examine microvascular perfusion in the sciatic nerve, fluorescein isothiocyanate (FITC)-dextran (2 × 10⁶ molecular weight, Sigma; 0.2 mL of 50 mg/mL) was administered intravenously to the mice 10 min before sacrifice [15,19]. The sciatic nerve tissue was rapidly removed and placed in 2% paraformaldehyde for 2 hours. The nerve tissue was whole-mounted and imaged under a 10x objective using a laser-scanning confocal microscope (Zeiss LSM 510 NLO, Carl Zeiss, Germany) [15,19]. Thereafter, the nerves were embedded in OCT compound and cross cryosections (20 μm thick) were prepared. Three sections at 60 μm intervals from each mouse were used for further image analysis. The cross sections were imaged under a 20x microscope objective (Carl Zeiss, Inc.) of a fluorescence microscope equipped with a MicroComputer Imaging Device (MCID, Imaging Research Inc.) [20].
Using the MCID system, the total number of FITC-dextran perfused vessels was counted and divided by the total tissue area to determine vascular density [7]. For analysis of CD31 immunoreactive vascular perimeters, three cross sections spaced at 60 μm intervals from each mouse were used. Three fields of view per section were randomly imaged under a 40x objective. CD31 immunoreactive vascular perimeters were measured using the MCID system [15]. For morphometric analysis of sciatic nerves, three sections spaced at 60 μm intervals for each staining were analyzed from each mouse. Three fields of view per section were randomly imaged under a 100x oil immersion objective (BX40; Olympus Optical Co. Ltd) via the MCID system. Myelinated fiber diameter, axon diameter, and myelin sheath thickness were measured. The g-ratio (the quotient axon diameter/fiber diameter) was calculated to measure the degree of myelination. At least 200 myelinated fibers were measured per animal [6,21]. Intraepidermal nerve fiber profiles were imaged under a 40x objective (Carl Zeiss, Inc.) via the MCID system. The number of nerve fibers crossing the dermal-epidermal junction was counted, and the density of nerves is expressed as fibers/mm length of section [7]. Representative images of intraepidermal nerve fibers were obtained using a laser-scanning confocal microscope (Zeiss LSM 510 NLO, Carl Zeiss, Germany). All analyses were conducted by an examiner who was blinded to the identity of the samples being studied.

Statistical analysis

For functional tests, data were evaluated for normality; ranked data or a nonparametric approach was to be used if the data were not normally distributed. Repeated-measures analysis of variance (ANOVA) was used, with time as the within-subject factor and group as the between-subject factor. The analysis first tested for a group-by-time interaction, followed by tests of the main effect of group and subgroup analyses. Two-sample t-tests or analysis of variance (ANOVA) were used to examine group differences in the LDF, immunostaining, biochemistry, and Western blot data. The data are presented as mean ± SE. A value of p<0.05 was taken as significant.

Tadalafil improves neurological function in the diabetic mouse

To examine the effect of tadalafil on diabetic peripheral neuropathy, diabetic db/db mice aged 20 weeks, which exhibited severe peripheral nerve neurological deficits, were orally (p.o.) administered tadalafil at a dose of 10 mg/kg every 48 hours for 8 consecutive weeks. Tadalafil treatment significantly improved the diabetes-reduced motor and sensory conduction velocities (MCV and SCV) in the sciatic nerve measured by electrophysiological tests, while significant improvements in sensory function, as measured by plantar and tail-flick tests, were evident starting at 6 weeks after the initial treatment compared with saline-treated diabetic mice (Fig 1). However, treatment with tadalafil did not significantly alter animal body weight (Table 1), blood glucose levels (Table 2), or A1C and triglyceride levels (Table 3) compared to the saline treatment.

Tadalafil improves neurovascular function in the sciatic nerve

Neurovascular dysfunction is a major cause of diabetic peripheral neuropathy [22,23]. Longitudinal measurements of regional blood flow at the sciatic nerve using a laser Doppler flowmetry (LDF) system revealed that tadalafil treatment significantly improved blood flow in the diabetic sciatic nerve (Fig 2).
In parallel with the blood flow results, confocal images acquired from whole-mounts of the sciatic nerve tissue showed that diabetes induced a substantial reduction of FITC-perfused blood vessels compared to non-diabetic mice, whereas tadalafil treatment markedly increased the number of FITC-dextran perfused vessels in diabetic sciatic nerve tissue (Fig 2). In addition, quantitative analysis of FITC-perfused blood vessels and CD31 immunoreactive vessels in cross sections of the sciatic nerve tissue revealed that tadalafil treatment markedly increased microvascular density and vascular perimeters in the sciatic nerve tissue compared to the diabetic mice treated with saline (Fig 2). Collectively, these data indicate that tadalafil improves vascular perfusion in the sciatic nerve tissue of diabetic mice. Increased vascular perfusion is associated with axonal regeneration in diabetic patients [24]. To examine whether the increased vascular perfusion is associated with alteration of the peripheral nerve, morphometric changes of the sciatic nerves and intraepidermal nerve fibers were analyzed. Histopathological analysis of the sciatic nerve on semi-thin coronal sections showed that diabetic mice treated with tadalafil exhibited increased sciatic nerve fiber diameters and myelin sheath thickness, and a decreased g-ratio (axon diameter/fiber diameter) (Fig 3). Compared to diabetic mice treated with saline, diabetic mice treated with tadalafil also revealed a marked increase of PGP 9.5 positive intraepidermal nerve fiber density in the plantar skin tissue (Fig 4). Together, these data suggest that tadalafil-improved vascular perfusion is associated with enhancement of axonal remodeling in sciatic nerves.

Tadalafil treatment increases NGF and PDGF-C proteins
NGF and PDGF-C are neurotrophic factors and have been shown to regulate neurovascular function. Using Western blot, we found that diabetic mice had a significant reduction of NGF and PDGF-C proteins in the sciatic nerve tissue compared to non-diabetic mice, whereas tadalafil treatment considerably increased NGF and PDGF-C compared to the saline treatment (Fig 5). Moreover, double immunofluorescent staining showed that NGF and PDGF-C immunoreactivity was co-localized to S100 immunopositive Schwann cells in the sciatic nerve tissue (Fig 5). These data suggest that NGF and PDGF-C in Schwann cells may be involved in the therapeutic effect of tadalafil on diabetic peripheral neuropathy.

Discussion
The present study demonstrates that tadalafil is effective and safe for the treatment of diabetic neuropathy. Tadalafil significantly enhanced regional blood flow and functional vascular density in the sciatic nerve tissue as well as intraepidermal nerve density and sciatic nerve axonal remodeling, and concomitantly improved neurological functional recovery in diabetic mice with peripheral neuropathy. We previously demonstrated that diabetes upregulated PDE5 expression in the sciatic nerve tissue, and that suppression of PDE5 by sildenafil augmented vascular function and axonal remodeling in the sciatic nerve tissue and subsequently improved neurological outcome in diabetic mice, indicating that sildenafil has a therapeutic effect on diabetic neuropathy [6,7]. The present study extends our previous work by showing that another PDE5 inhibitor, tadalafil, effectively improved functional recovery, concurrent with an increase of neurovascular function, without a reduction of blood glucose.
Thus, tadalafil can achieve a therapeutic effect on diabetic peripheral neuropathy comparable to that of sildenafil [6,7]. Tadalafil is a long-acting PDE5 inhibitor and has a different chemical structure from sildenafil. In clinical trials, tadalafil is effective up to 36 hours after dosing, whereas the temporal effectiveness of sildenafil is 4 hours for the treatment of ED [25]. The current study shows that oral administration of tadalafil every 48 hours for 8 weeks improved functional recovery in diabetic mice, which is comparable to the therapeutic effect achieved by sildenafil with a daily dose for 8 weeks [7]. Our data demonstrate that tadalafil treatment provides additional therapeutic opportunities for the use of multiple PDE5 inhibitors in the treatment of peripheral neuropathy, possibly with less frequent administration than the short-acting sildenafil. Thus, tadalafil could have potential clinical application for patients with long-term diabetic peripheral neuropathy. However, additional experiments are warranted to investigate the effect of tadalafil and sildenafil on diabetic peripheral neuropathy by making a direct comparison between these two PDE5 inhibitors. Although PDE5 is a modulator of the intracellular cGMP signaling pathway, PDE5 inhibitors may act on several downstream signaling effectors and subsequently regulate neurovascular function. NGF and PDGF-C are neurotrophic factors that not only promote vascular growth and maturation, but also directly regulate axonal remodeling by binding to their receptors, the TrkA receptor and PDGF-α/β receptors, respectively [26][27][28][29]. In addition, NGF and PDGF mediate the neuroprotective effects of sildenafil in stroke and its anti-proliferative effect on human pulmonary artery smooth muscle cells [30,31].

Figure 2 legend: Tadalafil improves vascular function in the sciatic nerve tissue. Panels A to I show FITC-dextran perfused vessels from whole-mounted (A to C) and cross sections (D to F) of the sciatic nerve tissue, and CD31 immunoreactive blood vessels in cross sections (G to I) of the sciatic nerve tissue from a representative non-diabetic mouse treated with saline (DM; A, D and G), a diabetic mouse treated with saline (DB; B, E and H), and a diabetic mouse treated with tadalafil (DBTA, 10 mg/kg; C, F and I). Panels J to L show quantitative data for the density of FITC-dextran perfused vessels in cross sections (J, n = 6/group), CD31 immunoreactive vascular perimeters (K, n = 6/group), and percentage changes of sciatic nerve blood flow with the non-diabetic mouse as the 100% reference (L, n = 5/group). *p<0.05 and #p<0.05 versus the non-diabetic mouse (DM) and the diabetic mouse treated with saline (DB), respectively. Bar = 100 μm. DBTA = diabetic mouse treated with tadalafil. doi:10.1371/journal.pone.0159665.g002

Figure 5 legend: Representative images of double immunofluorescent staining show that NGF and PDGF-C immunoreactivity (B, C, E, F, red, arrows) was co-localized to S100 positive Schwann cells (A, D, green, arrows). Western blot analysis (G) shows NGF and PDGF-C levels in the sciatic nerve tissue; β-actin was used as an internal control. *p<0.05 and #p<0.05 versus the non-diabetic mouse (DM) and the diabetic mouse treated with saline (DB), respectively. n = 6/group. Bar = 50 μm. DBTA = diabetic mouse treated with tadalafil. doi:10.1371/journal.pone.0159665.g005
The cGMP pathway contributes to NGF-mediated neurite outgrowth in DRG neurons derived from mice with sensory nerve defects, and it regulates PDGF-C-induced migration of vascular smooth muscle cells and fibroblasts [32,33]. Recent studies have reported that NGF and PDGF-C appear to be important components of neurovascular interaction and crosstalk. Treatment with NGF and PDGF-C induced restoration of nerve function and revascularization, and concomitantly accelerated wound healing of the diabetic neuropathic foot [34,35]. NGF also reversed peripheral neuropathic signs in animal models of diabetic neuropathy [36,37]. The present study showed that diabetes reduced NGF and PDGF-C proteins in Schwann cells of the sciatic nerve tissue, whereas tadalafil treatment increased these two proteins, suggesting that NGF and PDGF-C may be involved in the processes of diabetic peripheral neuropathy. In contrast to a study by Varmus et al., who demonstrated that diabetic db/db mice aged 12 weeks treated with tadalafil (1 mg/kg, daily) for 4 weeks showed a significant decrease in fasting plasma glucose levels and triglycerides [38], our results show that tadalafil treatment did not significantly change fasting plasma glucose and body weight in diabetic mice. The discrepancy may be attributed to differences in animal age or treatment protocols. The present study demonstrates that treatment with tadalafil augments vascular function and axonal remodeling, which is associated with improved neurological function in diabetic mice with peripheral neuropathy. Thus, tadalafil may provide additional therapeutic opportunities for diabetic peripheral neuropathy.
Shp2 suppresses the adipogenic differentiation of preadipocyte 3T3-L1 cells at an early stage Tyrosine phosphatase protein Shp2 is a potential therapeutic target for obesity. However, the mechanism of Shp2 during adipogenesis is not fully understood. The present study investigated the role of Shp2 in the terminal differentiation of preadipocytes. The results showed that Shp2 suppressed adipocyte differentiation in 3T3-L1 cells; overexpression of Shp2 reduced lipid droplet production in 3T3-L1 cells, whereas Shp2 knockdown increased lipid droplet production in 3T3-L1 cells. Furthermore, inhibition of Shp2 activity also enhanced adipocyte differentiation. Interestingly, Shp2 expression was specifically decreased early during differentiation in response to stimulation with the dexamethasone–methylisobutylxanthine–insulin (DMI) hormone cocktail. During the first 2 days of differentiation, Shp2 overexpression impaired the DMI-induced phosphorylation of signal transducer and activator of transcription 3 (STAT3) in 3T3-L1 cells and blocked the peak expression of CCAAT/enhancer-binding proteins β and δ during preadipocyte differentiation. In conclusion, Shp2 downregulated the early stages of hormone-induced differentiation of 3T3-L1 cells and inhibited the expression of the first wave of transcription factors by suppressing the DMI-induced STAT3 signaling pathway. These discoveries point to a novel role of Shp2 during adipogenesis and support the hypothesis that Shp2 could be a therapeutic target for the control of obesity. INTRODUCTION Obesity is a very serious disease that affects a large proportion of the global population. 1 It not only causes a series of health problems, including high blood pressure, glucose and lipid metabolic disorders, but also increases the incidence of many diseases, such as diabetes, cardiovascular disease and cancer. 2,3 Obesity involves a very complex pathological process, which makes the prevention and treatment of obesity too difficult to achieve even now. 4 Disrupted adipogenesis contributes significantly to obesity. 2,5 A full understanding of the mechanisms underlying adipogenesis could benefit the treatment of obesity. 5,6 Adipogenesis occurs in two main stages: an initial commitment step, in which cells are restricted to the adipocyte lineage, and the subsequent differentiation of these preadipocytes, governed by a network of transcription factors (TFs), into the adipocyte phenotype. 5,7 Therefore, two distinct types of adipocyte cell culture models have been developed. C3H10T1/2 cells are the main multipotent stem cell line that can be committed to the adipocyte lineage. 3T3-L1 or F442A cells are preadipocytes that can differentiate into adipocytes. 8,9 Commitment to the adipocyte lineage is mediated by multiple signaling molecules, including bone morphogenetic protein 4 (BMP4), insulin (INS)-like growth factor 1 (IGF1), interleukin 17, fibroblast growth factor 1, activin, Wnt and hedgehog. 5,6 Of these molecules, BMP4 is a crucial regulator that can induce the commitment of C3H10T1/2 cells. 10,11 The initiation of preadipocyte differentiation requires several hormones. 5,12,13 INS, IGF1, glucocorticoids, triiodothyronine and cAMP are efficient inducers in adipocyte cell culture models. 5,13 Preadipocytes cultured in vitro undergo a pre-confluence proliferation, reach confluence growth arrest and then start hormone-induced clonal expansion. 5,6 At that time, the cells synchronously re-enter the cell cycle and start to express the first wave of TFs. 
CCAAT/enhancer-binding proteins β and δ (C/EBPβ and δ) are the main TFs. 5,6,14 These TFs are expressed early, during clonal expansion, but are not immediately activated. 12,15,16 C/EBPβ and δ are phosphorylated by inducers via cytoplasmic mitogen-activated protein kinase (MAPK), cyclin-dependent kinase 2 and glycogen synthase kinase 3β, which increase their ability to bind to the promoter regions. 17 As C/EBPβ and δ achieve maximal DNA-binding activity, they initiate the expression of the second wave of TFs, including C/EBPα, peroxisome proliferator-activated receptor γ (PPARγ) and sterol regulatory element-binding protein-1c. These TFs induce the expression of lipid synthesis genes, such as adiponectin and fatty acid synthase. 18 The tyrosine phosphatase Shp2 is a potential target for the treatment of obesity and has a crucial role in glucose and lipid metabolism, and the development of adipose tissues. [19][20][21] However, the regulatory effect of Shp2 on these processes is unclear. Several studies in transgenic mice have found that Shp2 activity is important for balancing food intake and energy expenditure, and that deletion of Shp2 in the forebrain causes obesity and diabetes due to disrupted leptin signaling. [22][23][24] In contrast, Shp2 also promotes adipogenesis in some studies. 25 Deletion of Shp2 in embryonic adipose tissue using aP2-Cre blocks the development of adipose tissue in mice. 25 Shp2 deficiency in embryonic stem cells (ESCs) also suppresses adipogenesis and lipid accumulation. 25,26 However, the deletion of Shp2 in adipocytes using adiponectin Cre does not show any effects. 25,27 These conflicting data may be due to the distinct underlying mechanisms of Shp2 in lipid metabolism, adipose tissue development and adipocyte differentiation. 5,9 Shp2 is a multifunctional signaling protein that usually has dual regulatory effects from single signals (such as INS, growth hormone (GH), transforming growth factor α and leptin). 21,28 As a result, Shp2 may show diverse effects on obesity depending on the context. 21,28 These conflicting results have limited the study of Shp2 as an effective therapeutic target. Therefore, the roles of Shp2 in the different bioprocesses related to obesity need to be more completely understood to effectively treat this disease by targeting Shp2. Using the 3T3-L1 cell culture model, we demonstrated that Shp2 inhibits early adipocyte differentiation by suppressing the hormone-activated signaling pathway. Our results expand our understanding of Shp2 regulation of adipogenesis and enhance the potential utility of Shp2 as a therapeutic target.

Shp2 inhibited the adipogenic differentiation of 3T3-L1 cells
To determine whether Shp2 regulated the differentiation of preadipocytes, we evaluated the effect of Shp2 protein expression on the differentiation of 3T3-L1 cells. First, 3T3-L1 cells were infected with equivalent titers of the lentivirus overexpression construct PCDH-Shp2 or the interfering RNA plasmid sh-Shp2 and their control vectors. Shp2 was successfully overexpressed or knocked down in 3T3-L1 cells (Figure 1d). Cells were then induced to differentiate with the hormone cocktail dexamethasone-methylisobutylxanthine-insulin (DMI), which contains dexamethasone (Dex), methylisobutylxanthine (IBMX) and INS (see details in the Materials and Methods section). On day 8 of treatment, adipocyte differentiation was assessed by staining the lipid droplets with Oil Red O.
Abundant lipid droplets accumulated in normal 3T3-L1 cells (Figure 1a, left, first image and right, second image). When Shp2 protein expression was reduced by siRNA, lipid production was significantly increased (Figure 1a, left, second image). In contrast, the lipid droplets were significantly reduced in cells overexpressing Shp2 protein (Figure 1a, right image). The quantitative assay indicated that lipid production was increased ~30% in Shp2 knockdown cells, but decreased 50% in Shp2-overexpressing cells (Figure 1b). We also checked the expression of biomarker genes of adipocyte differentiation by PCR with reverse transcription (RT-PCR) and found that knockdown or overexpression of Shp2 up- or downregulated the mRNA levels of C/EBPα, PPARγ and adiponectin in differentiated adipocytes, respectively (Figure 1c). These results suggest that Shp2 negatively regulates adipogenic differentiation and that overexpression of Shp2 suppresses the differentiation of 3T3-L1 preadipocytes.

Suppression of Shp2 activity enhanced the differentiation of 3T3-L1 cells
To confirm the regulatory effect of Shp2 on preadipocyte differentiation, we investigated the effects of the Shp2 activity inhibitors PHPS1 and NSC87887 on the differentiation of 3T3-L1 cells. PHPS1 or NSC87887 (10 μM) was added to the growth medium during the entire differentiation induction process. At the end of the treatment, adipocyte differentiation was assessed by the production of lipid droplets and the expression of biomarker genes. Oil Red O staining showed that the inhibition of Shp2 activity with PHPS1 and NSC87887 remarkably promoted the production of lipid droplets in adipocytes (Figure 2a). In addition, these inhibitors significantly increased the mRNA levels of the biomarker genes PPARγ, C/EBPα and adiponectin (Figure 2b). Consistently, PPARγ protein expression was increased by PHPS1 and NSC87887 (Figure 2c). Together, these data demonstrate that the suppression of Shp2 activity also enhances the differentiation of 3T3-L1 cells.

Shp2 expression was specifically downregulated during the first 2 days of 3T3-L1 differentiation
Because Shp2 had a regulatory effect on preadipocyte differentiation, we evaluated the expression pattern of Shp2 during the differentiation of 3T3-L1 cells. The protein level of Shp2 was slightly reduced on the first day (Figure 3a, second band in the top panel) and reached a significantly lower level on the second day (Figure 3a, third band in the top panel). However, Shp2 protein recovered to the normal level beginning on the third day (Figure 3a, right six bands in the top panel). As a control, Shp2 expression was not changed in 3T3-L1 cells in the absence of DMI (Figure 3b, left panel). These results indicate that the DMI differentiation inducers reduce the expression of Shp2 protein during the initial stage of preadipocyte differentiation. Interestingly, each individual inducer also decreased the protein level of Shp2 in cells during the first 2 days of 3T3-L1 differentiation (Figure 3c). In addition, the Shp2 mRNA level was also sharply reduced by the inducers in 3T3-L1 cells during the first 2 days (Figure 3d). Furthermore, the proteasome inhibitor MG132 did not block the reduction of Shp2 protein in cells treated with 10 μM MG132 together with DMI (Figure 3e, right two bands in the top panel). These observations suggest that the differentiation inducers suppressed the gene expression of Shp2 rather than inducing its protein degradation.
As confirmation, Shp2 ubiquitination could not be detected in 3T3-L1 cells on the second day of differentiation with an antibody specific for ubiquitin (Figure 3f).

Shp2 regulated the early preadipocyte differentiation stage
Because the expression of the shp2/ptpn11 gene was specifically inhibited during the first 2 days of 3T3-L1 cell differentiation, we hypothesized that Shp2 had a role early in preadipocyte differentiation. To test this hypothesis, the Shp2 inhibitor PHPS1 was added to the induction medium at different time points to assess the effects of Shp2 on each stage of preadipocyte differentiation. The scheme for PHPS1 treatment is illustrated in Figure 4a. As expected, lipid production was promoted when cells were treated with 10 μM PHPS1 from day 0 to day 2 (Figure 4B and C, c). However, lipid accumulation was not affected when cells were treated with 10 μM PHPS1 from day −2 to day 0 (Figure 4B and C, b) or day 2 to day 4 (Figure 4B and C, d). Furthermore, the mRNA levels of PPARγ, C/EBPα and adiponectin were only increased in cells incubated with PHPS1 from day 0 to day 2 (Figure 4D, c). Together, these studies confirm that Shp2 suppresses preadipocyte differentiation at an early stage.

Shp2 mediated the cytoplasmic MAPK and STAT3 signaling pathways that are activated by DMI inducers
To identify the underlying mechanisms of Shp2 regulation of preadipocyte differentiation, we evaluated the cytoplasmic signaling pathways activated by DMI in preadipocytes. First, we investigated the effects of Shp2 on the DMI-induced MAPK pathway. DMI stimulated the activation of extracellular signal-regulated kinase (ERK) in a time-dependent manner in 3T3-L1 cells; ERK1/2 phosphorylation rapidly increased to a peak level at 1 h (Figure 5a, second band in the top panel) and returned to normal at 3 h of treatment (Figure 5a, right, second band in the top panel). However, the Shp2 inhibitor PHPS1 impaired the stimulatory effects of DMI and reduced DMI-induced ERK1/2 phosphorylation (Figure 5a, third, fifth and seventh bands in the top panel). In contrast, Shp2 overexpression increased the DMI-induced phosphorylation of ERK1/2 on the first day (Figure 5b, fourth band in the third panel). Unexpectedly, DMI-induced activation of ERK1/2 was not affected by the knockdown of Shp2 protein (Figure 5c, third panel). Next, we examined the regulation by Shp2 of the DMI-induced phosphatidylinositol 3-kinase (PI3K)-protein kinase B (PKB/Akt) pathway, and did not find any effect of Shp2 (data not shown). Furthermore, STAT3 was activated by DMI early during 3T3-L1 cell differentiation (Figure 5b, third and fifth bands in the top panel). DMI-induced phosphorylation of STAT3 was significantly reduced in preadipocytes that overexpressed Shp2 protein (Figure 5b, fourth and sixth bands in the top panel). However, Shp2 protein knockdown did not further increase STAT3 phosphorylation induced by DMI in 3T3-L1 cells (Figure 5c, top panel). These results indicate that Shp2 regulates the phosphorylation of ERK and STAT3, but does not affect the activation of Akt induced by DMI, during the first 2 days of adipogenic differentiation.

Shp2 suppressed the peak expression of C/EBPβ and δ early during preadipocyte differentiation
C/EBPβ and δ are part of the first wave of TFs during adipogenesis, and their expression levels peak early during preadipocyte differentiation. We found that Shp2 suppressed the expression of C/EBPβ and δ induced by DMI.
The mRNA levels of these TFs were significantly decreased in 3T3-L1 cells that overexpressed Shp2 on day 1 of the differentiation process (Figure 6a, middle and right panels). Unexpectedly, C/EBPβ and δ expression recovered at day 2 (Figure 6a, middle and right panels), which may be due to the reduced expression of Shp2 protein caused by DMI (Figure 6a, left panel). Consistently, C/EBPβ expression was increased in 3T3-L1 cells with knocked-down Shp2 on day 1 (Figure 6b, middle and right panels). These data suggest that Shp2 suppresses the DMI-induced expression of C/EBPβ and δ early during preadipocyte differentiation.

DISCUSSION
Preadipocyte 3T3-L1 is a well-known cell culture model for investigating adipocyte differentiation. In the present study, Shp2 expression was reduced in 3T3-L1 cells during the first 2 days of differentiation. The overexpression of Shp2 decreased the lipid droplets, whereas Shp2 knockdown increased the lipid droplets. These discoveries suggest that Shp2 opposes the early differentiation of adipocytes and support the hypothesis that Shp2 has a novel role in adipogenesis. The initiation of preadipocyte differentiation requires hormonal induction. 29 INS, IBMX (a cAMP phosphodiesterase inhibitor) and Dex (a synthetic glucocorticoid agonist) were combined as an inducer cocktail and used to treat 3T3-L1 cells for 2 days to induce their re-entry into clonal expansion and the expression of the first wave of TFs (such as C/EBPβ and δ). 5,6,30 In contrast, Shp2 expression was suppressed by DMI, which indicates that Shp2 has an antagonistic suppressor role during the initiation of preadipocyte differentiation by opposing the hormone inducers. A previous study also found reduced Shp2 protein expression early in the adipogenic differentiation of 3T3-L1 cells. 31 Shp2 is a protein tyrosine phosphatase that has inhibitory effects on many factors, including INS and GF. 21 Thus, INS and other inducers must block the antagonism of Shp2 to trigger signal transduction and the expression of downstream genes during 3T3-L1 differentiation. Interestingly, Shp2 protein expression was restored beginning on the third day of the differentiation process even though cells were still incubated in the presence of INS. This finding indicates that the antagonistic action of Shp2 works specifically early during adipocyte differentiation. As confirmation, an Shp2 inhibitor enhanced lipid production only in cells treated with PHPS1 during the first 2 days. Previous studies have shown that Shp2 promotes adipogenesis and that the deletion of Shp2 in adipose tissue before birth in mice blocks the development of adipose tissue. 25 Similar to the development of other organs, adipose tissue originates from precursor stem cells that are committed to the adipocyte lineage. 5,6 The underlying mechanism of these processes is different from preadipocyte differentiation. 5,6 Shp2 is required for the differentiation of ESCs and has crucial roles in the development of several organs. 21,[32][33][34][35] Thus, the deletion of Shp2 in adipose tissue in mice would block the production of preadipocytes from adipose stem cells and prevent the development of adipose tissue. Suppression of Shp2 activity in ESCs also inhibits adipocyte differentiation by reducing the number of preadipocytes. However, deletion of Shp2 in adipocytes with adiponectin Cre does not affect the development of adipose tissue. 25,27 These discoveries confirm that Shp2 has a different role during preadipocyte differentiation compared with that during adipose tissue development.
Shp2 not only suppresses signal transduction via its phosphatase activity, but also triggers cytoplasmic signaling pathways, such as Akt and MAPK. 21,28 Therefore, Shp2 action involves several cytoplasmic signaling proteins, including Akt, Erk and STAT3. 21,28 We evaluated the effects of Shp2 on the DMI-induced activation of these signaling proteins and found that Shp2 mediated the phosphorylation of STAT3 and Erk in 3T3-L1 cells, but did not affect the activation of DMI-induced Akt, during the first 2 days of adipogenic differentiation. Akt and Erk have important roles in adipogenesis, 36,37 but their roles in the regulation of the early differentiation of preadipocytes are unclear, especially concerning the expression of the first wave of TFs. Notably, STAT3 is rapidly activated by adipogenic induction and regulates the transcription of C/EBPβ early during adipogenesis. 13,38,39 Both a STAT3-selective inhibitor and JAK2 activity inhibitors suppress the expression of C/EBPβ and the activation of STAT3 to prevent adipocyte differentiation. 38 Furthermore, the present analysis showed that Shp2 significantly inhibited the transcription of C/EBPβ and δ early during 3T3-L1 adipocyte differentiation. 17,18 C/EBPβ and δ have an important role early in 3T3-L1 adipocyte differentiation. 17,18 Transcriptional activation of C/EBPβ initiates the expression of C/EBPα and PPARγ, thus initiating the adipocyte differentiation process. 15,40 In summary, Shp2 acts as a suppressor early during the adipogenic differentiation of 3T3-L1 preadipocytes. The hormone inducer DMI blocks the inhibitory effect of Shp2 and triggers the peak expression of the TFs C/EBPβ and δ to initiate the terminal differentiation of adipocytes (Figure 7). Together, these data demonstrate a novel role of Shp2 in adipocyte differentiation and support the hypothesis that Shp2 could be a therapeutic target for the control of obesity.

Lentivirus vector construction and infection
The Shp2 siRNA targets the sequence 5′-GGACATGAATATACCAATATT-3′; the control scrambled sequence is 5′-CCTAAGGTTAAGTCGCCCTCG-3′. The annealed double-stranded fragment (shRNA) was cloned into the lentiviral vector pSicoR. To construct the Shp2 overexpression lentivirus, Shp2 mRNA was amplified by PCR from a wild-type Shp2 plasmid and inserted into the lentivirus expression plasmid pCDH-copGFP. All constructs were confirmed by DNA sequencing. Plasmids were transfected into 293FT cells to produce lentivirus using Turbofect transfection reagent. In experiments, cells were infected with equivalent titers of virus. Protein expression was determined by western blotting to confirm comparable expression from the control and experimental viruses.

Western blotting and ubiquitination assays
Western blotting was performed as previously described. In brief, cell lysates with equal concentrations of total protein were separated by 10% SDS-polyacrylamide gel electrophoresis, transferred to polyvinylidene difluoride membranes and analyzed by immunoblotting with specific antibodies. Grayscale bands were quantified using Quantity One software (Bio-Rad). For ubiquitination assays, the Shp2-containing complex was first isolated from total cell lysates by immunoprecipitation and separated on SDS polyacrylamide gels. Ubiquitination of Shp2 was detected by western blotting with a specific antibody against ubiquitin (Santa Cruz Biotechnology, sc-9133, Santa Cruz, CA, USA).
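The band quantification mentioned above (grayscale intensities measured in Quantity One) comes down to normalizing each target band to the loading control from the same lane and comparing conditions. The short Python sketch below illustrates only that normalization step; the condition names and intensity values are invented, and it does not reproduce the study's software or data.

```python
# Invented grayscale intensities for one target protein (e.g. Shp2) and the
# loading control (β-actin or GAPDH) in two conditions.
target_intensity = {"control": 1850.0, "DMI_day2": 920.0}
loading_intensity = {"control": 2100.0, "DMI_day2": 2050.0}

# Normalize each lane's target band to its own loading-control band.
normalized = {cond: target_intensity[cond] / loading_intensity[cond]
              for cond in target_intensity}

# Express the treated condition relative to the control condition.
fold_change = normalized["DMI_day2"] / normalized["control"]

print(f"normalized levels: {normalized}")
print(f"fold change vs control: {fold_change:.2f}")
```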
Antibodies against Shp2 (SH-PTP2, sc-280), PPARγ (sc-7196), β-actin (sc-8432) and ERK1 (sc-93) were purchased from Santa Cruz Biotechnology. Antibodies against p-ERK1/2 (4370S), p-STAT3 (9145S), STAT3 (9139S) and GAPDH (2118S) were obtained from Cell Signaling (Beverly, MA, USA).

Statistical analysis
All values are shown as the mean ± S.E.M. (SPSS 13.0, IBM, New York, NY, USA). Paired Student's t-tests were used to compare the mean values, and P<0.05 was considered significant.
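As an illustration of the reported analysis (paired Student's t-test, results summarized as mean ± s.e.m., significance at P<0.05), a minimal Python sketch is given below. The data are invented and the sketch uses SciPy rather than SPSS, so it mirrors the described procedure rather than reproducing the study's computation.

```python
import numpy as np
from scipy import stats

# Invented matched measurements, e.g. relative lipid production per experiment.
control = np.array([1.00, 0.95, 1.10, 1.05, 0.98, 1.02])
shp2_overexpression = np.array([0.55, 0.60, 0.52, 0.58, 0.49, 0.57])

# Paired Student's t-test on the matched values.
t_stat, p_value = stats.ttest_rel(control, shp2_overexpression)

def mean_sem(x):
    """Summarize an array as mean ± standard error of the mean."""
    return f"{x.mean():.2f} ± {stats.sem(x):.2f}"

print(f"control: {mean_sem(control)}, Shp2 overexpression: {mean_sem(shp2_overexpression)}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f} (significant if p < 0.05)")
```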
Inhibitory concentrations of gambier (Uncaria gambir Roxb.) catechin extract against Streptococcus mutans

Introduction: Catechin, which is extracted from gambier (Uncaria gambir Roxb), is a major component of polyphenol compounds, and catechin compounds act as antibacterials. The aim of this study was to analyze the inhibitory concentration of gambier catechin extract against Streptococcus mutans, a bacterium that plays a role in the formation of dental caries. Methods: The study was a laboratory experiment testing the inhibitory concentration of gambier catechin extract using the Kirby-Bauer disk diffusion method on TYCSB plates for 1 x 24 hours. The test bacterium, Streptococcus mutans, was isolated from saliva. The bacterial suspension was prepared to the McFarland 0.5 turbidity standard, with each 1 ml of suspension containing 1.5 x 10^8 bacteria. The catechin powder was obtained from gambier extract by the freezing method. Results: The 20% catechin concentration produced the lowest inhibition, with an inhibition diameter of 0.615 cm. The largest inhibition was produced by the 80% catechin concentration, with an inhibition diameter of 1.085 cm. Conclusion: The higher the catechin concentration, the greater the inhibition produced; conversely, the lower the catechin concentration, the smaller the resulting zone of inhibition.

INTRODUCTION
Caries is the most common oral disease in the community and it is still a problem today. Dental caries is a severe infection of the dental hard tissues, which consist of the enamel, dentine and cementum, caused by the activity of microorganisms. The main bacterium that causes dental caries is Streptococcus mutans; in addition, other microorganisms play a role in the process of caries formation. 1,2,3 Streptococcus mutans is cariogenic because it produces the extracellular enzyme glucosyltransferase (GTF), which breaks sucrose into glucan and fructan. Glucan is used to produce energy during anaerobic glycolysis. Lactic acid, the product of glycolysis, can demineralize enamel, which leads to caries. 4 (Gani, 2006). One effort to prevent caries is to apply the principle of minimal intervention. This principle is a noninvasive treatment without removal of dental hard tissue, conditioning a healthy oral cavity environment by controlling the growth of bacteria that cause dental caries. 5,6 (Mount and Ngo, 2000; Samaranayake, 2002). Management to inhibit and suppress the growth activity of the Streptococcus mutans that cause caries can be done mechanically or chemically. Chemical control can be achieved by inhibiting the growth of caries-causing bacteria, for example with mouthwash, toothpaste and other oral cleansers. 7 (Pradewa, 2008). The ingredients used as the basis for oral cleansers can be derived from traditional plants, because they are natural, more easily absorbed and digested, and have fewer side effects than modern or synthetic drugs. 8,9 (Thalib and Asmawati, 2002; Purnamasari, 2010). One alternative plant that can be used is gambier (Uncaria gambir Roxb). Gambier has traditionally been used as an additive in chewing preparations for healthy teeth and gums. The part of the cultivated plant that is used is the sap of the leaves and twigs. Gambier contains compounds of the flavonoid class, the main content of which is polyphenols. Studies show that the dominant polyphenolic compounds are catechins. 10 (Heyne, 1987 in Pambayun, 2007). The catechin content of gambier extract is 7-33%.
11 (Amos, 2004). The high polyphenol content of gambier leaves suggests that gambier may have antibacterial activity. 12 (Hermawan, 2009). Gambier extract has a high inhibitory effect on gram-positive bacteria (Streptococcus mutans, Staphylococcus aureus, Bacillus subtilis) but does not show a significant inhibitory effect on gram-negative test bacteria (Escherichia coli, Salmonella typhimurium and Shigella flexneri). 10 (Pambayun, 2007). The aim of this study was to analyze the inhibitory concentration of gambier catechin extract against Streptococcus mutans, a bacterium that plays a role in the formation of dental caries.

METHODS
This research is an experimental laboratory study testing the inhibitory power of catechin from gambier extract (Uncaria gambir Roxb) against Streptococcus mutans using the Kirby-Bauer diffusion method. The study was conducted in April-August 2012 at the Microbiology Laboratory of the Faculty of Dentistry, Padjadjaran University. The population in this research was gambier catechin extract, and the sample was catechin powder extracted from gambier by the freezing method. The independent variable was gambier catechin extract, prepared by the freezing method, at different concentrations. The dependent variable was the Streptococcus mutans isolate. Control variables were the culture medium, the facultative anaerobic atmosphere, the temperature and time of incubation, and the standard bacterial turbidity of the test suspension. The operational definitions used in this research are as follows: antibacterial power is the ability of a material to inhibit the growth of, or kill, bacteria at a certain concentration. The Kirby-Bauer diffusion method is a method for testing the antibacterial power of an ingredient against a particular bacterium. The result seen on the TYCSB plate is a clearly bounded zone around the well, indicating that the area is the Minimum Inhibitory Concentration (MIC); the resulting scale is numerical. Catechin powder of gambier extract is gambier catechin that has been extracted using the freezing method, a method used to separate catechins from tannins based on their solubility and polarity. Streptococcus mutans is the most dominant group of bacteria among the oral microflora: gram-positive, non-motile, dividing in one direction, forming cauliflower-like facultative anaerobic colonies with an irregular margin and a very clear, bright white color, with a diameter between 0.5 and 0.8 cm, firmly attached to the growth medium. The medium used was Tryptone Yeast Cysteine Sucrose Bacitracin (TYCSB) as a selective medium for the isolation of Streptococcus mutans, while sucrose bouillon was used as a culture medium for Streptococcus mutans. The test material of this research was catechin powder from gambier extract, and the test bacterium was Streptococcus mutans isolated from saliva. The research procedure consisted of the identification of Streptococcus mutans and the catechin inhibition test of the gambier extract. The examination material was obtained from saliva, then plated on TYCSB medium and incubated in a facultative anaerobic atmosphere at 37°C for 48-72 hours. Colonies form like white cauliflower: borderless, shiny and firmly attached to the surface of the media. 3 (Wan, 2002). Suspected colonies were prepared with Gram staining. Examination under the microscope showed Gram-positive coccus-shaped bacteria arranged in chain-like formations.
Then, the cultures were grown in bouillon containing sucrose and incubated at 37°C for 48-72 hours in a facultative anaerobic atmosphere to obtain a high number of pure Streptococcus mutans isolates. The inhibition test was performed using the Kirby-Bauer diffusion method, as follows. First, the 100% catechin gambier powder was prepared, and a dilution series was then made starting from an 80% catechin concentration, using six sterile test tubes numbered 1-6. Tube number 1 was filled with 2 grams of catechin powder and 2.5 ml of sucrose bouillon; tubes 2-6 were each filled with 2 ml of sucrose bouillon. Two ml of the homogeneous catechin solution from tube number 1 was transferred to tube number 2 with a sterile pipette and shaken until homogeneous, giving a two-fold dilution; 2 ml of the solution in tube number 2 was then put into tube number 3 to obtain a four-fold dilution. Dilution was also performed on the fourth test tube to obtain an eight-fold dilution, and further dilution was performed on the fifth test tube to obtain a 16-fold dilution; in the sixth tube, 2 ml of bouillon solution was included as a Positive Control (PC). The test was performed by the Kirby-Bauer diffusion method. A suspension of Streptococcus mutans was prepared to McFarland 0.5 turbidity and incubated for 1-2 h at 37°C. 14 (Cappucino, 2004). A total of 0.1 ml of the suspension was spread on TYCSB and flattened with a sterile cotton swab. Next, six wells with a diameter of 10 mm were made in the TYCSB culture using a sterile perforator. Into the first well, approximately 2 ml of the 80% catechin extract solution was poured, or enough to fill the well according to the thickness of the agar. The second well was filled with the same amount of the 40% catechin extract solution, the third well with the 20% catechin extract solution, the fourth well with the 10% catechin extract solution, and the fifth well with the 5% catechin extract solution. The sixth well received the bouillon solution as a Positive Control (PC). The plates were then incubated under facultative anaerobic conditions at 37°C for 1 x 24 hours. After 1 x 24 hours, the TYCSB plates were observed. If the tested material has antibacterial activity against the bacteria, no bacterial growth is seen in the area around the well. The result seen on the TYCSB plate is a bounded inhibition zone around the well, indicating that the area is the MIC; this area is a zone where there is no bacterial growth. Twenty TYCSB plates were used, divided among ten samples, performed in 2 repetitions.

RESULTS
The isolation of Streptococcus mutans on TYCSB produced colonies that were shaped like cauliflower, with an irregular margin and a very clear white color, with a diameter between 0.5 and 0.8 cm. Culture for 24-36 hours at 37°C on selective TYCSB medium in a facultative anaerobic atmosphere showed colonies as presented in Figure 1. Gram staining performed on the colonies showed, microscopically, Gram-positive cocci with chain formation, as shown in Figure 2. The greatest inhibitory effect was produced by the 80% catechin concentration, with an inhibition diameter of 1.085 cm. The greater the concentration of catechins, the greater the inhibitory effect produced; conversely, the lower the catechin concentration, the smaller the inhibitory effect.

DISCUSSION
Gambier (Uncaria gambir Roxb) is a plant containing compounds of the flavonoid group, whose main content is polyphenols.
Polyphenol compounds alter the properties of bacterial cell proteins. 15 (Jenie, 2007). Polyphenols containing active compounds include the catechins, which consist of epigallocatechin (EGC), epicatechin gallate (ECG), epicatechin (EC), epigallocatechin gallate (EGCG), catechin (C) and gallocatechin gallate (GCG). 15 (Ebadi, 2007). EGCG is believed to have antibacterial effectiveness by binding the peptidoglycan of the bacterial cell wall, which can damage bacterial cell walls. EGCG and ECG can inhibit the action of the glucosyltransferase enzyme from Streptococcus mutans. Catechins inhibit the growth of Streptococcus mutans in two ways: as an antibacterial that prevents dental plaque formation, and by inhibiting the glycosylation process. The ability of catechins to act as antibacterials lies in denaturing proteins in bacterial cells. Catechins, which are toxic compounds, disrupt the three-dimensional structure of bacterial cell proteins so that it becomes open and random, without damaging the covalent framework. This denatures the bacterial cell proteins, disrupting their biological activity so that the proteins cannot perform their functions properly. Changes in the structure of proteins in the bacterial cell wall increase cell permeability, so that cell growth is inhibited and the cell then becomes damaged. 17,18 (Hasim, 2006; Agustin, 2008). The ability of catechins to inhibit the glycosylation process is as follows: catechins work competitively with glucosyltransferases (GTFs) in reducing the saccharides that are the basic ingredients of the glycosylation process, so that the formation of extracellular polysaccharides in the bacteria is inhibited. Catechin activity in lowering glucose is much higher than the activity of GTFs in using glucose. 19 (Murray et al., 2003). Catechins contain gallol or pyrogallol radicals, which can inhibit the activity of the lactate dehydrogenase (LDH) enzyme or bacterial fatty acid enzymes by binding to the catalyst. The catechin, epigallocatechin gallate and epicatechin content can inhibit the attachment of Streptococcus mutans to tooth particles because these bacteria have receptors called glucan-binding proteins. Dextrose can provide a carbon source for plaque bacteria by lowering glucan. The bound glucan is converted by dextranase back to glucose and used in the process of glycolysis, which produces energy and lactic acid. 16 (Ebadi, 2007). Glucosyltransferase is required by bacteria to attach to a surface to form biofilms. If the bacteria cannot attach to the pellicle, they cannot create large colonies, so dental plaque is reduced. The inability of the bacteria to attach can also kill the bacteria themselves, because they will not obtain sucrose as their food source. This is why gambier catechin shows a significant antibacterial effect on Streptococcus mutans. 16 (Ebadi, 2007). Research on the inhibitory effect of herb-containing toothpaste has shown that, through the influence of phenol compounds on bacterial growth, the zone of inhibition against Streptococcus mutans increases in size and the number of Streptococcus mutans bacteria is reduced. This is consistent with the results obtained here on the inhibitory concentration of catechins from gambier extract against Streptococcus mutans, and it can be concluded that gambier catechin extract acts as an antibacterial. 20 (Pratiwi, 2008). The catechin used in this study was extracted by the freezing method.
Catechin powder extracted using this freezing approach has several advantages. One advantage is that the powder is purer, because the separation and purification process uses water as the solvent and a dedicated machine for processing, so that the components contained in the gambier catechin do not react much with other chemicals that could affect the chemical content of the catechins. In addition, the freezing method is more economical and efficient, because the processing does not take too long and the material is relatively cheap, making it more profitable in the production process. 21

CONCLUSION
The higher the concentration of catechins at the time of testing, the larger the resulting zone of inhibition. Conversely, the lower the concentration of catechins, the smaller the resulting zone of inhibition.
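The two-fold dilution series and the inoculum described in the Methods reduce to simple arithmetic. The Python sketch below illustrates that arithmetic using the values given in the text (2 g of catechin powder in 2.5 ml of sucrose bouillon in tube 1, two-fold transfers into 2 ml of bouillon, a McFarland 0.5 suspension of 1.5 x 10^8 bacteria per ml, and 0.1 ml spread per plate); it is an illustration only and not part of the study.

```python
# Tube 1: 2 g catechin powder in 2.5 ml sucrose bouillon = 80% w/v.
grams_catechin = 2.0
volume_ml = 2.5
start_concentration = grams_catechin / volume_ml * 100   # % w/v

# Four successive two-fold dilutions (tubes 2-5) halve the concentration each time.
n_dilutions = 4
concentrations = [start_concentration / (2 ** i) for i in range(n_dilutions + 1)]
print("catechin series (% w/v):", concentrations)   # [80.0, 40.0, 20.0, 10.0, 5.0]

# Inoculum: McFarland 0.5 corresponds to ~1.5 x 10^8 bacteria per ml,
# and 0.1 ml of suspension was spread on each TYCSB plate.
cells_per_ml = 1.5e8
inoculum_ml = 0.1
print(f"bacteria spread per plate: ~{cells_per_ml * inoculum_ml:.1e}")
```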