Virtual Compton Scattering and Nucleon Generalized Polarizabilities

This review gives an update on virtual Compton scattering (VCS) off the nucleon, γ*N → Nγ, in the low-energy regime. We recall the theoretical formalism related to the generalized polarizabilities (GPs) and model predictions for these observables. We present the GP extraction methods used in experiments: the approach based on the low-energy theorem for VCS and the formalism of Dispersion Relations. We then review the experimental results, with a focus on the progress brought by recent experimental data on proton generalized polarizabilities, and we conclude with some perspectives in the field of VCS at low energy.

Introduction

Virtual Compton Scattering (VCS) on the nucleon became a well-identified field of hadron physics in the 1990s. After first conceptual attempts motivated by the Pegasys and Elfe [1] projects, the field built itself on two different energy regimes: the near-threshold regime, with the concept of generalized polarizabilities (GPs), and the high-energy, high-Q² regime of deeply virtual Compton scattering. The whole field has seen a continuous and fruitful development, with a wealth of new observables to explore nucleon structure. This review focuses on VCS at low energy and the generalized polarizabilities (GPs) of the nucleon, a topic that has seen substantial progress in recent years. In light of new experimental data, a consistent picture of the proton scalar GPs is starting to emerge, and it is an appropriate time to review our knowledge of these observables. This article is usefully complemented by previous reviews addressing the subject in more or less depth. Through the two main references connected to the present article, i.e., Refs. [2,3], and a non-exhaustive list of other reviews [4][5][6][7][8][9][10], the reader can trace the evolution of the field.
The present article combines a summary of the theoretical framework and the experimental status. The new aspects concern the focus on the use of the Dispersion Relation (DR) model [3] in VCS experiments, and an update of experimental results. In all the following, we consider a proton target, although the same considerations are applicable to the neutron. The particle four-momentum vectors are denoted as: k^μ and k′^μ for the incoming and scattered electrons, q^μ and q′^μ for the virtual photon and final real photon, p^μ and p′^μ for the initial and final protons. The modulus of a three-momentum is denoted as q = |q|, etc. Variables are indexed "lab" (or not indexed) in the laboratory frame, where the initial proton is at rest. They are indexed "cm" in the center-of-mass frame (c.m.) of the (initial proton + virtual photon) system, i.e., the c.m. of the Compton process γ*p → pγ. The kinematics of the (ep → epγ) reaction are defined by five independent variables. We adopt the most usual set of variables: (q_cm, q′_cm, ǫ, cos θ_cm, ϕ), where ǫ is the virtual photon polarization parameter, i.e., ǫ = 1 / [ 1 + 2 (q_lab²/Q²) tan²(θ′_e,lab/2) ], and q_cm and q′_cm are the three-momentum moduli of the virtual photon and final photon in the c.m., respectively. θ_cm and ϕ are the angles of the Compton process, i.e., the polar and azimuthal angles of the outgoing real photon w.r.t. the virtual photon in the c.m., see Fig. 1. The triplet (q_cm, q′_cm, ǫ) defines the leptonic vertex e → e′γ*. The c.m. total energy is W = √s, and M_N is the nucleon mass. At fixed beam energy, the five-fold differential cross section is d⁵σ/(dk′_lab d cos θ′_e,lab dϕ′_e,lab d cos θ_cm dϕ) and will be denoted dσ for simplicity. Since the GPs are defined from the VCS amplitude in the limit q′_cm = 0, keeping q_cm fixed (cf. Sect. 3.2.2), a number of kinematical variables are also defined in this limit.
They are designated with a tilde in the low-energy expansion (LEX) formalism. Among them, we find the photon virtuality Q̃², which takes the form: Q̃² = 2 M_N · ( √(M_N² + q_cm²) − M_N ). Therefore q_cm and Q̃² are equivalent variables. Throughout this article we will use the notation "Q²" everywhere for simplicity, knowing that when q′_cm → 0 a "Q̃²" is meant instead. GPs and structure functions will thus depend equivalently on q_cm or Q².

[Figure 1: the leptonic plane, the reaction (or hadronic) plane, and the Compton scattering in the center-of-mass system. For polarization experiments, axes are defined such that ẑ_cm is along q_cm and ŷ_cm is orthogonal to the scattering plane. Figure taken from Ref. [11].]

Real Compton scattering (RCS) experiments have accumulated an impressive amount of knowledge on the nucleon polarizabilities, and the field is still at the forefront of hadron physics; see for instance the recent reviews [9,10]. It is well known that the static dipole electric (α_E1) and magnetic (β_M1) polarizabilities of the proton are very small quantities: α_E1 = 11.2 ± 0.4 and β_M1 = 2.5 ± 0.4 in units of 10⁻⁴ fm³ [12], testifying to the strong binding force of QCD. The smallness of β_M1 relative to α_E1 is generally understood as coming from two large contributions, of para- and dia-magnetic nature, which are of opposite sign and cancel to a large extent. In VCS the incoming real photon is replaced by a virtual, space-like photon of four-momentum transfer squared Q², produced by an incoming lepton. The virtual photon momentum q sets the scale of the observation, while the outgoing real photon momentum q′ defines the size of the EM perturbation. Polarizabilities are then generalized to Q² ≠ 0 and acquire a meaning analogous to form factors: their Fourier transform maps out the spatial distribution density of the polarization induced by an EM field.
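As a side note on the kinematics introduced above, the relation between q_cm and the q′_cm → 0 photon virtuality Q̃², and the polarization parameter ǫ, can be sketched numerically. A minimal Python fragment (GeV units; the proton-mass value and the example momentum are illustrative):

```python
import math

M_N = 0.9383  # proton mass in GeV (PDG value)

def eps_polarization(q_lab, Q2, theta_e_lab):
    """Virtual-photon polarization parameter:
    eps = 1 / [1 + 2 (q_lab^2 / Q^2) tan^2(theta'_e,lab / 2)]."""
    t = math.tan(theta_e_lab / 2.0)
    return 1.0 / (1.0 + 2.0 * (q_lab ** 2 / Q2) * t * t)

def Q2_tilde(q_cm):
    """Photon virtuality in the q'_cm -> 0 limit:
    Q~^2 = 2 M_N (sqrt(M_N^2 + q_cm^2) - M_N)."""
    return 2.0 * M_N * (math.sqrt(M_N ** 2 + q_cm ** 2) - M_N)

# Example: q_cm = 0.6 GeV corresponds to Q~^2 of about 0.33 GeV^2
print(round(Q2_tilde(0.6), 3))  # -> 0.329
```

This makes explicit the statement in the text that q_cm and Q̃² are equivalent variables: Q̃² is a monotonic function of q_cm, so either can label the GPs.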
The RCS polarizabilities are then seen as the "net result" of such spatial distributions, while the role of the GPs is to give access to the details of these spatial dependences. Quoting the illustrative sentence of Ref. [2]: "... VCS at threshold can be interpreted as electron scattering by a target which is in constant electric and magnetic fields. The physics is exactly the same as if one were performing an elastic electron scattering experiment on a target placed in between the plates of a capacitor or between the poles of a magnet." The GP formalism was first introduced in Ref. [13] for the case of nuclei. The idea was that nuclear excitations could be studied in a more complete way with an incoming virtual photon, instead of a real photon. To this aim, the concept of polarizabilities as a function of excitation energy and momentum transfer was introduced. The formalism was later applied to the nucleon case in [14]. In contrast to elastic form factors, which are sensitive only to the ground state of the nucleon, polarizabilities (and GPs) are sensitive to its whole excitation spectrum, with the excited states contributing virtually. For instance, in a non-relativistic approach, the electric polarizability is obtained from the quadratic Stark effect, calculated at second order in perturbation theory as

α_E1 = 2 Σ_{N* ≠ N} |⟨N*| D_z |N⟩|² / (E_N* − E_N) ,    (1)

where D_z is the electric dipole moment operator and N* indicates a nucleon resonance. Furthermore, the polarizabilities (and GPs) are particularly suited to address the widely used picture of the nucleon as a quark core surrounded by a pion cloud, since both of these components can be "seen" and interpreted to some extent in the Compton observables, using hadron structure models. The physical content of the GPs will be discussed in more detail in Sect. 3.3.

VCS at low energy and the GP formalism: the LET

The pioneering work of Ref.
[14] opened a new era of investigation of nucleon structure, by establishing the physics case of VCS off the nucleon for the first time and providing a way to access GPs through experiments. We give here an overview of the formalism, the results of which will be further exploited in the experimental section.

Amplitudes for the photon electroproduction process

The VCS process is accessed via the exclusive photon electroproduction reaction. Since the virtual photon needs to be produced by a lepton beam, VCS is always accompanied by the so-called Bethe-Heitler (BH) process, in which the final photon is emitted by the lepton instead of the nucleon. Fig. 2 shows the different amplitudes contributing to the (ep → epγ) process: the BH graphs or bremsstrahlung from the electron(s), the VCS Born graphs or bremsstrahlung from the proton(s), and typical diagrams for the resonance excitation and non-resonant πN contribution in the s-channel entering the VCS non-Born (NB) term. The BH and VCS Born amplitudes are entirely calculable in QED, with the nucleon electromagnetic form factors (G_E, G_M) as inputs. The non-Born amplitude T^NB contains the physics of interest and is parametrized at low energy by the nucleon GPs. The three amplitudes add up coherently to form the total photon electroproduction amplitude:

T^(ep→epγ) = T^BH + T^Born + T^NB .    (2)

The t-channel diagram involving π⁰ exchange is conventionally included in the non-Born term, i.e., in the GPs. As has been pointed out in [14,15], the splitting in Eq. (2) is not unique. Contributions which are regular in the limit q′ → 0 and separately gauge invariant can be shifted from the Born amplitude to the non-Born amplitude and vice versa. Therefore, when calculating the GPs, one has to specify which Born terms have been subtracted from the full amplitude, since different Born terms lead to different numerical values of the GPs. In our calculation we use the Born amplitude as defined in [14].
The non-Born amplitudes and the GPs

We briefly recall how the nucleon GPs are introduced in the work of Ref. [14] and later works. A multipole expansion of the non-Born amplitude H^NB is performed in the c.m. frame, yielding the multipoles H_NB^(ρ′L′,ρL)S (q′_cm, q_cm). Here L (L′) represents the angular momentum of the initial (final) electromagnetic transition in the (γ*p → pγ) process, whereas S differentiates between the spin-flip (S = 1) and non-spin-flip (S = 0) transitions at the nucleon side. The index ρ (ρ′) = 0, 1, 2 characterizes the longitudinal (L), electric (E) or magnetic (M) nature of the initial (final) photon. GPs are obtained as the limit of these multipoles when q′_cm tends to zero, at arbitrary fixed q_cm. At this strict threshold, the final photon has zero frequency, its electric and magnetic fields are constant ("static field"), and the GPs represent the generalization at finite q_cm of the polarizabilities of classical electromagnetism. For small values of q′_cm one may use the dipole approximation (L′ = 1), corresponding to electric and magnetic final-state radiation that is dipolar only. In this case, angular momentum and parity conservation lead to ten different dipole GPs [14]. However, it was shown [16,17] that nucleon crossing combined with charge conjugation symmetry reduces the number of independent GPs to six. Table 1 gives the usually adopted set of the six lowest-order GPs: two scalar, or spin-averaged, or spin-independent GPs (S = 0), and four spin-dependent, or spin-flip, or vector, or more simply spin GPs (S = 1). They depend on q_cm, or equivalently on Q² (cf. Sect. 2). The notation in column 2 of Table 1 will be used throughout this article. The two scalar GPs, electric and magnetic, are thus defined as:

α_E1(Q²) = −(e²/4π) · √(3/2) · P^(L1,L1)0 (Q²) ,
β_M1(Q²) = −(e²/4π) · √(3/8) · P^(M1,M1)0 (Q²) ,    (3)

with e²/4π = α_QED ≃ 1/137. At Q² = 0 they coincide with the RCS polarizabilities α_E1 and β_M1.

Table 1: The standard choice for the six independent dipole GPs.
Column 1 refers to the original notation, and column 2 to the more standard multipole notation. Column 3 gives the correspondence in the RCS limit, defined by Q² → 0 or q_cm → 0.

An alternative approach to analyzing the low-energy VCS process was proposed in Ref. [18] for zero-spin targets, and it was further extended to spin-1/2 targets (and therefore to the spin-dependent GPs) in Ref. [19]. It is based on a Lorentz-covariant description of the Compton amplitudes, which are expanded on a basis written in terms of electromagnetic field-strength tensors. Working with this basis, one finds a different set of GPs with respect to the definition from the multipole expansion in the c.m. frame. In particular, in addition to the dipole electric and magnetic polarizabilities, one finds a transverse electric polarizability, which describes rotational displacements of charges inside the hadron. As such, this polarizability does not contribute to the induced charge polarization, and it shows up at higher order in the soft final-momentum limit of the VCS amplitude.

The Low-Energy Theorem

The work of Ref. [14] laid the foundations of the low-energy theorem (LET) and the low-energy expansion (LEX) for VCS in the unpolarized case. It provided for the first time a way to access GPs through experiments, via the (ep → epγ) reaction. The review article [2] contains in addition the LET for the doubly polarized case, and puts in perspective two other regimes of VCS: the hard scattering and Deeply Virtual Compton Scattering (DVCS). The LET is directly inspired from the low-energy theorem of Low [20] and states that, in an expansion in powers of q′_cm (keeping q_cm fixed), the first term of the BH and Born amplitudes is of order (q′_cm)⁻¹, while the first term of the non-Born amplitude is of order (q′_cm)¹. The LEX then yields, for the photon electroproduction cross section below the pion production threshold:

dσ = dσ^(BH+Born) + (Φ · q′_cm) · Ψ_0 + O(q′_cm²) ,    (5)

where dσ^(BH+Born) is the BH+Born cross section. As stated in Sect.
3.2.1, this cross section is entirely calculable in QED and just requires knowledge of the nucleon elastic form factors G_E and G_M. It contains no polarizability effect, and serves as an important reference cross section throughout the whole formalism. The next term of the formula, (Φ · q′_cm) · Ψ_0, is where the GPs first appear in the expansion. Ψ_0 is obtained from the interference between the BH+Born and non-Born amplitudes at the lowest order; it is therefore of order (q′_cm)⁰, i.e., independent of q′_cm. The term (Φ · q′_cm) is a phase-space factor (see the Appendix for details), in which an explicit factor q′_cm has been sorted out in order to emphasize that, when q′_cm tends to zero, (Φ · q′_cm · Ψ_0) tends to zero and the whole cross section tends to dσ^(BH+Born). We will denote the cross section dσ^LEX of Eq. (5) as the "LEX cross section", obtained by neglecting the O(q′_cm²) term. The latter represents all the higher-order terms of the expansion and contains GPs of all orders. Below the pion production threshold, dσ^(BH+Born) is essentially the dominant part of the cross section, Ψ_0 is the leading polarizability term, and the higher-order terms O(q′_cm²) are expected to be negligible. The first-order polarizability term Ψ_0 contains three VCS response functions, or structure functions: P_LL, P_TT and P_LT, which are combinations of five of the six lowest-order GPs (Eq. (6); their explicit expressions can be found in Ref. [2]). The general structure of each term, of the type [nucleon form factor × GP], originates from the (BH+Born)-(NB) interference. The indices (LL, TT, LT) refer to the longitudinal or transverse nature of the virtual photon polarization in the (BH+Born) and (NB) amplitudes. In contrast to RCS, where the spin polarizabilities appear at a higher order than the scalar ones, in VCS the spin and scalar GPs appear at the same order in q′_cm. Here we will emphasize three features of Eq.
(6): i) P_LL is proportional to the electric GP; ii) P_LT has a spin-independent part that is proportional to the magnetic GP, plus a spin-dependent part P_LT^spin; and iii) P_TT is a combination of two spin GPs. Using the LET, experiments have so far extracted the two combinations P_LL − P_TT/ǫ and P_LT of Eq. (5), at fixed q_cm and ǫ. In particular, the separation of P_LL and P_TT, which requires measurements at different values of ǫ, has not yet been investigated experimentally. Finally, we recall the kinematical dependence of each term in Eq. (5). The (Φ · q′_cm) factor depends on q_cm, q′_cm and ǫ, but not on θ_cm and ϕ. The V_LL and V_LT coefficients (see the Appendix for their definition) depend on q_cm, ǫ, θ_cm and ϕ, but not on q′_cm. The structure functions depend only on q_cm, or Q².

Theoretical models for nucleon generalized polarizabilities

Virtual Compton scattering has been investigated in various theoretical frameworks. Studies in different models, based on complementary assumptions, help to unravel the mechanisms coming into play in the GPs. The very first predictions of GPs were calculated in the framework of a non-relativistic constituent quark model (NRCQM) [14,21], which was reviewed in Ref. [22]. This model was extended in Ref. [23] to include relativistic effects, by considering a Lorentz-covariant multipole expansion of the VCS amplitude and a light-front relativistic calculation of the nucleon excitations. In the constituent quark models, the GPs are expressed as a sum of products of transition form factors of nucleon resonances over the whole spectrum, weighted by the inverse of the excitation energy. The actual calculations truncate the sum to a few intermediate states, corresponding to the ∆(1232) and the main resonances in the second resonance region. As discussed in Ref. [22], this truncation necessarily leads to a violation of gauge invariance.
Furthermore, the Compton tensor satisfies the constraint due to photon crossing at the real-photon point, but it does not respect nucleon crossing symmetry. As a consequence, constituent quark models predict ten, and not six, independent GPs. Despite these limitations, calculations from the constituent quark model have been helpful in providing a first order-of-magnitude estimate of the nucleon resonance contributions to GPs. More refined evaluations of the resonance contributions to GPs have been obtained in the framework of an effective Lagrangian model (ELM), a fully relativistic framework which contains baryon resonance contributions as well as π⁰ and σ exchanges in the t-channel [24,25]. A similar model, using a coupled-channel unitarity approach, was also adopted in Ref. [26]. All these calculations are complementary to the theoretical approaches emphasizing pionic degrees of freedom and chiral symmetry, such as the linear sigma model (LSM) and chiral effective field theories. Although the LSM is not a very realistic description of the nucleon, it is built on all the relevant symmetries, like Lorentz, gauge and chiral invariance. Thanks to this, the calculation of the GPs within the LSM, performed in the limit of infinitely large sigma mass [27,28], pointed out for the first time the existence of relations between the VCS multipoles, beyond the usual constraints of parity and angular momentum conservation [16,29], as discussed in Sect. 3.2.2. In Fig. 3 we show the results for the GPs in the NRCQM of Ref. [22], in the LSM [27,28], and in the ELM [24,25]. In the NRCQM the excited states of the nucleon are given by resonances, and the Q² behavior of the GPs is determined by the electromagnetic transition form factors. In contrast, the LSM describes the excitation spectrum as pion-nucleon scattering states, with quite a different Q² dependence. In the ELM, one sees both resonant and non-resonant contributions at work.
In the case of the scalar GPs, the LSM predicts a rapid variation at small momentum transfer, and a smaller one at higher momentum.

[Figure 3: Results for GPs in different model calculations as a function of squared momentum transfer Q². Full lines: non-relativistic constituent quark model [22]; dashed lines: linear sigma model [27,28]; dotted lines: effective Lagrangian model [24,25].]

On the contrary, in the NRCQM and in the ELM the scalar GPs show a rapid fall-off in Q², with a Gaussian shape in the case of the NRCQM due to the assumed parametrization of the electromagnetic transition form factors. All the calculations underestimate the scalar electric polarizability α_E1(0) at the real-photon point. In the case of the magnetic GP β_M1(Q²), the pion cloud gives rise to a positive slope at the origin, while the N∆ transition form factor determines the paramagnetic contribution, which decreases as a function of Q². The LSM describes only the negative diamagnetic contribution, whereas the NRCQM takes into account only the positive paramagnetic contribution. The interplay of the two competing effects can be observed in the ELM, in particular at Q² = 0, where the ELM prediction is in very good agreement with the recent evaluation of β_M1(0) from the Particle Data Group (PDG) [12]. While the σ exchange strongly influences the numerical values of the scalar polarizabilities, it does not contribute to the vector polarizabilities. On the other hand, the π⁰ exchange in the t-channel (the anomaly diagram, see Fig. 2) is irrelevant in the spin-independent case, but very important for calculations of the vector GPs. Since the anomaly is gauge invariant and regular in the soft-photon limit, it could also be considered as part of the Born amplitude instead of the non-Born amplitude, contrary to what has been done in Ref. [14]. In Fig. 3, and in the following, the contribution to the spin GPs from the anomaly is not shown.
All the models satisfy the model-independent constraints P^(M1,M1)1 (0) = 0 and P^(L1,L1)1 (0) = 0 due to photon-crossing symmetry. The NRCQM and LSM predict the same sign for all the vector GPs, while P^(M1,M1)1 and P^(M1,L2)1 have opposite signs in the ELM. Furthermore, the results for P^(L1,L1)1 and P^(M1,L2)1 are substantially larger in the LSM and ELM than in the NRCQM, indicating that here the non-resonant background is more important than the nucleon resonances. The variation of the spin GPs at low Q² is very different in the three models, except for P^(M1,M1)1, which is similar in the LSM and in the NRCQM, at variance with the ELM. Systematic calculations of pion-cloud effects became possible with the development of chiral perturbation theories (ChPTs), based on an expansion in the external momenta and the pion mass (the "p-expansion"). In such theories, one constructs the most general VCS amplitude consistent with electromagnetic gauge invariance, the pattern of chiral symmetry breaking in QCD, and Lorentz covariance, to a given order in the small parameter p ≡ {P, m_π}/Λ. Here, P stands for each component of the four-momenta of the photons and of the three-momenta of the nucleons, while Λ is the breakdown scale of the theory. There exist different variants of ChPT calculations. The pioneering calculation for VCS was performed with only nucleons and pions as explicit degrees of freedom, with the effects of the nucleon resonances encoded in a string of contact operators [30,31]. Furthermore, this work used the heavy-baryon (HB) expansion for the nucleon propagators, which amounts to making an expansion in 1/M_N along with the expansion in p. The first HBChPT calculations were performed at O(p³) [30,31], and have recently been extended to O(p⁴) for all the spin GPs and to O(p⁵) for some of them in Refs. [32,33]. Since the excitation energy of the ∆(1232) is low, it may not be justified to "freeze" the degrees of freedom of this nearby resonance.
The inclusion of the ∆(1232) as an explicit degree of freedom in the calculation of the VCS process was first addressed in Ref. [34], by introducing the excitation energy of the ∆(1232) as an additional expansion parameter (the "ǫ-expansion"). Subsequently, a different counting was proposed in Ref. [35] (the "δ-expansion"), and it was employed for the VCS process in Ref. [36] using a manifestly Lorentz-invariant variant of baryon chiral perturbation theory (BChPT). The two schemes mainly differ in the counting of the ∆(1232) excitation energy ∆ = M_∆ − M_N compared with the pion mass m_π. In the ǫ-expansion, they enter at the same order (∆ ∼ m_π), while in the δ-expansion m_π ≪ ∆. We refer to the original works for further details. The predictions for the GPs in HBChPT and in BChPT will be discussed in the following section, in comparison with the DR results.

The dispersion relation formalism

Historically, DRs were considered for the first time for the VCS process in Ref. [37]. Recently, the formalism has been reviewed in Ref. [38], using a different set of VCS amplitudes that avoids numerical artefacts due to kinematical singularities. Following the derivation in Refs. [16,17], the VCS Compton tensor is parametrized in terms of twelve independent functions F_i(Q², ν, t), i = 1, ..., 12, which depend on three kinematical invariants: Q², t, and the crossing-symmetric variable ν = (s − u)/(4M_N). In terms of these invariants, the limit q′ → 0 at finite three-momentum q of the virtual photon corresponds to ν → 0 and t → −Q² at finite Q². The GPs can then be expressed in terms of the non-Born contributions to the VCS invariant amplitudes, denoted as F_i^NB, at the point ν = 0, t = −Q² for finite Q² (the explicit relations between the F_i^NB and the GPs can be found in Ref. [39]).
The F_i functions are free of poles and kinematical zeros, once the irregular nucleon pole terms have been subtracted in a gauge-invariant fashion, and are even functions of ν, i.e., F_i(Q², ν, t) = F_i(Q², −ν, t). Assuming an appropriate analytic and high-energy behavior, these amplitudes fulfil unsubtracted DRs in the variable ν at fixed t and Q²:

Re F_i^NB (Q², ν, t) = F_i^pole (Q², ν, t) − F_i^Born (Q², ν, t) + (2/π) P ∫_{ν_thr}^{∞} dν′ ν′ Im F_i(Q², ν′, t) / (ν′² − ν²) ,    (7)

where the Born contribution F_i^Born is defined as in [2,14], whereas F_i^pole denotes the nucleon pole contribution (i.e., energy factors in the numerators are evaluated at the pole position; see footnote 7). Furthermore, Im F_i are the discontinuities across the s-channel cuts, starting at the pion production threshold ν_thr = m_π + (m_π² + t/2 + Q²/2)/(2M_N). However, such unsubtracted DRs require that at high energies (ν → ∞) the amplitudes Im F_i drop fast enough that the integral of Eq. (7) is convergent and the contribution from the semicircle at infinity can be neglected. The high-energy behavior of the amplitudes in the limit ν → ∞ at fixed t and Q² can be deduced from Regge theory [38]. It follows that the unsubtracted dispersion integral in Eq. (7) diverges for the F_1 and F_5 amplitudes. In order to obtain useful results for these two amplitudes, we can use finite-energy sum rules, restricting the unsubtracted integral to a finite range −ν_max ≤ ν ≤ ν_max and closing the contour of the integral by a semicircle of finite radius ν_max in the complex plane, with the result:

Re F_i^NB (Q², ν, t) = F_i^pole (Q², ν, t) − F_i^Born (Q², ν, t) + F_i^as (Q², t) + (2/π) P ∫_{ν_thr}^{ν_max} dν′ ν′ Im F_i(Q², ν′, t) / (ν′² − ν²) .    (8)

In Eq. (8), the "asymptotic term" F_i^as represents the contribution along the finite semicircle of radius ν_max in the complex plane, which is replaced by a finite number of energy-independent poles in the t-channel. At ν = 0 and t = −Q², the difference between the pole and Born contributions vanishes for all the amplitudes, and the GPs can be evaluated directly by unsubtracted DRs through the following integrals:

F̄_i(Q²) ≡ F_i^NB (Q², 0, −Q²) = F_i^as (Q², −Q²) + (2/π) ∫_{ν_thr}^{ν_max} dν′ Im F_i(Q², ν′, −Q²) / ν′ ,    (9)

where the asymptotic contribution F_i^as enters only for i = 1, 5. The s-channel integrals in Eqs.
(7)-(9) can be evaluated by expressing the imaginary parts of the amplitudes through the unitarity relation, taking into account all the possible intermediate states which can be formed between the initial γ*N and final γN states. As long as we are interested in the energy region up to the ∆(1232) resonance, we may restrict ourselves to the dominant contribution from the πN intermediate states, setting the upper limit of integration to ν_max = 1.5 GeV. This dispersive πN contribution will be denoted F_i^πN in the following. In the actual calculation, we evaluate F_i^πN from the pion photo- and electro-production amplitudes of the phenomenological MAID analysis (MAID2007 version) [40,41], which contains both resonant and non-resonant pion production mechanisms. It turns out that the residual dispersive contribution beyond the value ν_max = 1.5 GeV is relevant mainly for the amplitude F_2, while it can be neglected for the other amplitudes. Once the dispersive integrals are evaluated, we need a suitable parametrization of the energy-independent functions for the asymptotic contributions to the F_1 and F_5 amplitudes, and for the higher-energy dispersive corrections to F_2. The asymptotic contribution to the F_5 amplitude is saturated by the t-channel π⁰ exchange (the anomaly diagram, see Fig. 2), which is calculated according to Ref. [38]. The asymptotic contribution to F_1 can be described phenomenologically as the t-channel exchange of an effective σ meson. The Q²-dependence of this term is unknown; it can be parametrized in terms of a function directly related to the magnetic dipole GP β_M1(Q²) and fitted to VCS observables. Analogously, the higher-energy dispersive contribution to F_2 can effectively be accounted for by an energy-independent function, at fixed Q² and t = −Q². This amounts to introducing an additional fit function, which is directly related to the sum of the electric and magnetic dipole GPs.
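As a concrete illustration of how such an s-channel integral is evaluated at ν = 0, the fragment below computes an unsubtracted dispersion integral with the trapezoidal rule, using a toy Breit-Wigner bump as a stand-in for the MAID imaginary parts (the shape and all numbers are illustrative assumptions, not the actual MAID input):

```python
import math

def dispersive_integral(im_f, nu_thr, nu_max, n=2000):
    """(2/pi) * integral_{nu_thr}^{nu_max} Im F(nu') / nu' dnu',
    i.e. an unsubtracted dispersion integral evaluated at nu = 0,
    computed with the trapezoidal rule."""
    h = (nu_max - nu_thr) / n
    total = 0.5 * (im_f(nu_thr) / nu_thr + im_f(nu_max) / nu_max)
    for k in range(1, n):
        nu = nu_thr + k * h
        total += im_f(nu) / nu
    return (2.0 / math.pi) * h * total

# Toy stand-in for a MAID-like imaginary part: a Breit-Wigner bump in
# the Delta(1232) region (purely illustrative parameters, GeV units).
def im_f_toy(nu, nu_res=0.34, gamma=0.06):
    return gamma ** 2 / ((nu - nu_res) ** 2 + gamma ** 2)

pi_n_part = dispersive_integral(im_f_toy, nu_thr=0.15, nu_max=1.5)
```

For a resonance-dominated Im F of this kind, pushing ν_max higher changes the result only marginally, which is the numerical counterpart of the statement that the residual contribution beyond 1.5 GeV is small (except for F_2).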
In conclusion, the contributions beyond the dispersive πN integrals can be recast in terms of the following two functions:

[Footnote 7: the pole and Born contributions differ only for the F_1, F_5 and F_11 amplitudes.]

Δα(Q²) = [ α_E1^exp − α_E1^πN ] · f_α(Q²) ,   Δβ(Q²) = [ β_M1^exp − β_M1^πN ] · f_β(Q²) ,    (10)

where α_E1 and β_M1 are the RCS polarizabilities, with superscripts exp and πN indicating, respectively, the experimental value [12] and the πN contribution evaluated from unsubtracted DRs. In Eq. (10), f_α(Q²) and f_β(Q²) are fit functions, with the constraints f_α(0) = f_β(0) = 1. Their functional form is unknown and should be adjusted by a fit to the experimental cross sections. However, in order to provide predictions for VCS observables, we adopt the dipole parametrization

f_α(Q²) = 1 / (1 + Q²/Λ_α²)² ,   f_β(Q²) = 1 / (1 + Q²/Λ_β²)² ,    (11)

where the mass-scale parameters Λ_α and Λ_β are free parameters, not necessarily constant with Q². In Fig. 4, we show the DR predictions for the scalar GPs as a function of Q², along with the separate contributions from the πN dispersive term and the asymptotic term. The RCS values at Q² = 0 are fixed to the PDG values [12]. The electric GP is dominated by a large positive asymptotic contribution. The πN dispersive contribution adds to the asymptotic term in the RCS limit and smoothly decreases at higher Q². The magnetic GP results from a large dispersive πN (paramagnetic) contribution, dominated by the ∆(1232) resonance, and a large asymptotic (diamagnetic) contribution of opposite sign, leading to a relatively small net result. In Fig. 5 we compare the DR predictions for the GPs with the results from covariant BChPT at O(p³) + O(p⁴/∆), and with calculations within HBChPT at O(p³) for the scalar GPs and at both O(p³) and O(p⁴) for the spin-dependent GPs. For the electric polarizability, the calculations within HBChPT and BChPT are very similar. The HBChPT result in the RCS limit is also in good agreement with the DR calculation, fixed to the PDG value [12].
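The way the fitted asymptotic part combines with the πN dispersive part for the magnetic GP can be sketched in a few lines. The dipole fall-off and the toy πN curve below are assumptions made purely for illustration (the actual πN input comes from MAID, and the fall-off function is in principle left free in the fits):

```python
def dipole(Q2, Lam2):
    """Commonly adopted dipole fall-off f(Q2) = 1/(1 + Q2/Lam2)^2,
    normalized to f(0) = 1. Lam2 is the squared mass scale in GeV^2."""
    return 1.0 / (1.0 + Q2 / Lam2) ** 2

def beta_M1_dr(Q2, beta_piN, beta_exp0, Lam_beta2):
    """DR composition of the magnetic GP: a paramagnetic piN dispersive
    part plus an asymptotic (diamagnetic) part whose strength is fixed
    so that beta(0) matches the RCS value beta_exp0.
    beta_piN : callable returning the piN contribution at a given Q2."""
    asym0 = beta_exp0 - beta_piN(0.0)  # fixed by the real-photon point
    return beta_piN(Q2) + asym0 * dipole(Q2, Lam_beta2)

# Illustrative piN stand-in (units of 1e-4 fm^3); real calculations use MAID.
piN_toy = lambda Q2: 9.0 * dipole(Q2, 0.7)

print(beta_M1_dr(0.0, piN_toy, beta_exp0=2.5, Lam_beta2=0.5))  # -> 2.5
```

By construction the large positive πN term and the negative asymptotic term cancel to a large extent at Q² = 0, reproducing the small net β_M1 discussed in the text; the Q² shape then depends on the fitted mass scale Λ_β.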
However, both the HBChPT and BChPT calculations deviate from the DRs with increasing Q², with a softer fall-off in Q². For the magnetic polarizability, the calculation in HBChPT does not fully account for the positive paramagnetic component coming from the ∆(1232) degrees of freedom, at variance with the covariant BChPT. However, we notice that the BChPT results are quite different from the DR results over the whole Q² range. At Q² = 0 the DR value is fixed to the current PDG evaluation [12], while BChPT predicts a substantially larger value [42,43], i.e., β_M1 = 3.7 × 10⁻⁴ fm³.

[Figure 5 caption, partially recovered: HBChPT results from Refs. [32,33]; the experimental points at Q² = 0 for the scalar polarizabilities are from the PDG [12], and for the spin GPs from the MAMI measurements [44].]

In the spin-dependent sector, comparing the HBChPT results at the lowest and at the next order, we notice large next-order corrections. These corrections bring the HBChPT results toward the DR and BChPT estimates only in the case of the P^(L1,M2)1 GP. For the GP P^(M1,L2)1, the sizeable correction at the real-photon point brings the HBChPT results close to the dispersive results, but the dependence on Q² remains quite different. Similarly, we notice sizeable differences between DRs and HBChPT for the two GPs that vanish at Q² = 0, P^(L1,L1)1 and P^(M1,M1)1. Instead, the results for these GPs from BChPT are more similar to the DR calculation, especially in the low-Q² region. Model-independent constraints on the slopes of the P^(L1,L1)1 and P^(M1,M1)1 GPs at Q² = 0 can be obtained through sum rules derived from forward doubly virtual Compton scattering (VVCS), γ*N → γ*N, where both photons have the same spacelike virtuality q² = −Q² < 0 [45,46].
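Since these sum-rule constraints bear on the slope of the GPs at Q² = 0, confronting them with a model curve reduces, in practice, to numerical differentiation at the origin. A minimal sketch with a toy GP shape (the curve and its normalization are invented here purely for illustration):

```python
def slope_at_zero(gp, h=1e-4):
    """Estimate d(GP)/dQ2 at Q2 = 0 by a one-sided, second-order
    finite difference (the GP is only defined for Q2 >= 0)."""
    return (-3.0 * gp(0.0) + 4.0 * gp(h) - gp(2.0 * h)) / (2.0 * h)

# Toy GP that vanishes at the real-photon point, mimicking the
# behavior of P(L1,L1)1 and P(M1,M1)1 (shape and numbers illustrative).
toy_gp = lambda Q2: -1.5 * Q2 / (1.0 + Q2) ** 2

print(round(slope_at_zero(toy_gp), 6))  # -> -1.5
```

The same one-liner applied to a tabulated DR or BChPT curve gives the slope that the VVCS sum rules constrain.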
In particular, these sum rules provide relations between the slope of these GPs and properties of the proton target, such as the anomalous magnetic moment and the squared Pauli radius, along with the RCS spin polarizabilities γ_E1M2 and γ_E1E1 and two quantities from VVCS. The quantities entering these relations are thus all observed in different processes, but they are not yet fully constrained by experimental measurements. Pending a direct experimental verification, the sum rules have been used to infer predictions for the slope of the GPs, using the available information on the Pauli radius [47], the recent extractions of the RCS spin polarizabilities at MAMI [44,48] and the empirical information from MAID for the VVCS quantities [46]. Such predictions turned out to be consistent, within the large uncertainties, with the estimates for the slope of the GPs from both DRs and BChPT. Finally, for the P_(L1,M2)1 and P_(M1,L2)1 GPs, we observe that the BChPT predictions are similar, respectively, to the next-order and leading-order calculations within HBChPT. These GPs at Q^2 = 0 are proportional, respectively, to the RCS polarizabilities γ_E1M2 and γ_M1E2. In Fig. 5, we show the corresponding values for the GPs at the real photon point, using the experimental RCS results of Ref. [44], given by the weighted average of the values extracted within fixed-t subtracted DRs [49,50] and covariant BChPT [42] (cf. Table 1 of Ref. [44]). The RCS value of P_(M1,L2)1 is consistent with the DR predictions and the next-order results of HBChPT, but not with the covariant BChPT value. The experimental value for P_(L1,M2)1 at Q^2 = 0 is compatible with the positive value predicted from DRs, at variance with the estimates from both HBChPT and BChPT. However, at larger Q^2 the different theoretical predictions are very similar, showing a rather flat behavior in Q^2. DR calculations of the structure functions P_LL − P_TT/ǫ and P_LT are discussed in Sect. 4.4, cf. Figs.
15 and 17, together with the prediction of the models for P_TT shown in Fig. 16.

The (ep → epγ) cross section and the GP effect

We will limit our study to the cross section calculated in two formalisms, LEX and DR, which provide the most useful interface with experiments. However, other descriptions of the ep → epγ cross section exist, such as the effective Lagrangian model [24]. Most of the other theoretical approaches focus on modeling the GPs (cf. Sect. 3.3) and are less directly connected to the GP extraction from experiments. In this section we discuss only the non-polarized case. The polarized case will be mentioned in Sect. 4.8. The GP effect can be defined as the part of the cross section that contains the polarizability contribution, normalized to the part that does not contain it (i.e., dσ_BH+Born). Depending on the theoretical approach, one gets:

- using the LEX (truncated to lowest order): (dσ_LEX − dσ_BH+Born)/dσ_BH+Born = Φ q'_cm Ψ_0 / dσ_BH+Born ;
- using the DR model: (dσ_DR − dσ_BH+Born)/dσ_BH+Born .

The main features of the cross section and GP effect are illustrated in Figs. 6 to 10, as a function of the VCS kinematical variables, varying them one at a time. Figure 6 shows a typical q'_cm behavior. The cross section is of bremsstrahlung type (∼ 1/q'_cm), and hence infra-red divergent at the origin. It is well known that this divergence is compensated by another infra-red divergent term: the virtual radiative correction to electron-proton elastic scattering. In any case the very low q'_cm region (≤ 10 MeV/c) is of no interest for VCS; it contains no information about the GPs, since dσ → dσ_BH+Born when q'_cm → 0. The GP effect in the LEX approach, as defined above, is roughly quadratic in q'_cm (cf. top-right plot of Fig. 6). The DR cross section has a more complex behavior. The ∆(1232) resonance, which is incorporated through the resonant πN intermediate states, shows up as a broad bump (cf. bottom-left plot) due to the non-Born contribution 8. As a consequence, the GP effect from DR may differ noticeably from the LEX one, as soon as q'_cm ≥ 50 MeV/c (cf. bottom-right plot).
Another important feature is that the sensitivity of the cross section to the GPs is enhanced in the region above the pion production threshold. This last property is better seen in Fig. 7, where DR calculations are shown for different sets of parameters (Λ_α, Λ_β). The sensitivity to the GPs is manifest in the differences between the possible shapes of the resonance bump. An example of the behavior as a function of the angles θ_cm and ϕ is given in Fig. 8. (Caption of Fig. 6, bottom plots: bottom-left, the DR cross section: full calculation (solid), BH+Born (dashed) and the non-Born contribution alone (dotted); bottom-right, the GP effect from DRs, as defined in Sect. 3.5. In all cases, the polarizability effect is calculated with the following structure-function values: P_LL − P_TT/ǫ = 17.6 GeV^−2 and P_LT = −5.3 GeV^−2, close to the experimental ones at Q^2 = 0.2 GeV^2. They correspond to α_E1(Q^2) = 3.83 · 10^−4 fm^3, β_M1(Q^2) = 1.96 · 10^−4 fm^3 in the DR model.) The GP effect has a complex angular dependence, implying that experimentally one cannot integrate over too wide regions in (cos θ_cm, ϕ), otherwise the fine variations of the GP effect are missed. In this figure one can already notice that there are few angular regions where the GP effects from LEX and from DR agree to better than a few percent of the BH+Born cross section (more on this in Sect. 4.7). Figure 9 provides a more complete view, showing the full complexity of the GP effect in the 2D (cos θ_cm, ϕ) phase space. The difference of behavior between the LEX and DR calculations (left and middle plots) appears clearly; one will also note the rapid variation of the GP effect at very backward angles (near cos θ_cm = −1), a region to handle with care for GP extraction. Figures 6 to 9 demonstrate that the LEX and DR approaches provide two significantly different descriptions of the ep → epγ cross section. Figure 10 shows examples of the ǫ-dependence of the GP effect.
The effect increases with ǫ, more than linearly, and in all cases it is advantageous to work at high ǫ for a GP extraction (cf. Table 4 for the ǫ values of the experiments). The figures presented in this section can serve as guidelines for designing VCS experiments. For an experiment aimed at extracting GPs using the LEX below the pion threshold, we can summarize the main prescriptions as follows. The GP effect increases with q'_cm and ǫ, and is more easily measurable at high values: q'_cm ∼ 100 MeV/c and ǫ close to 1. The GP effect has a strong dependence on the θ_cm and ϕ angles, due to the angular variations of the V_LL and V_LT coefficients in the Ψ_0 term, which are shown in Fig. 11. The GP effect ranges from ∼ −15% to +15% in most of the (cos θ_cm, ϕ) phase space, except at very backward θ_cm where the behavior is most complex. The choice of the optimal region in (cos θ_cm, ϕ) for a good LEX fit is not an easy task, and each experiment explored different promising conditions. First, one should obviously avoid the region of the BH peaks, due to their lack of sensitivity to the GPs and the very rapid variation of the cross section. Second, one should have a sufficiently large lever arm in V_LL and V_LT 9, because they are the weighting coefficients of the fitted structure functions. Note that in some angular regions these coefficients vanish: e.g., V_LL = 0 at θ_cm = 0° and 180°, V_LT = 0 on a continuous "ring" passing near the point (θ_cm = 90°, ϕ = 90°) (cf. Fig. 11). These regions are more specifically sensitive to either P_LL − P_TT/ǫ or P_LT. Third, if one applies the LEX fit in its usual truncated form, one should try to make sure that the higher-order terms O(q'_cm^2) are small enough to be neglected. If this is not the case, the LEX fit should be modified so as to take into account the higher-order contributions beyond the GPs one wants to extract.
The development of the DR formalism opened the path for a second family of VCS experiments that focus on higher c.m. energy, namely the ∆(1232) region. An advantage in this case is that the sensitivity to the GPs is enhanced in the region above the pion production threshold. On the other hand, in this region one cannot utilize the LEX, so the extraction of the GPs has to rely solely on the DR approach; the analysis of the data below and above the pion threshold through the DR framework is similar. The N → ∆ transition amplitudes are used as an input in the DR calculation. These amplitudes have been extensively studied in the past two decades; they have been determined within an experimental uncertainty that ranges from a few percent to 10%, depending on the amplitude, and they are also associated with a theoretical model uncertainty which is typically of a magnitude similar to the experimental one. A detailed treatment of their experimental and model uncertainties is involved in the data analysis. A corresponding uncertainty is propagated to the extracted GPs, but the effect is small compared to the overall level of the uncertainty. Similarly to the measurements below the pion production threshold, one has to be careful to avoid the BH peaks, where the sensitivity to the GPs gets greatly suppressed, while one also has to deal with the rapid variation of the cross section within the measured acceptance. A feature that has been employed in this type of experiments is the measurement of cross-section azimuthal asymmetries, which offer sensitivity to the GPs while allowing for the suppression of part of the systematic uncertainties. Ideally one should aim at measuring a sufficiently large lever arm in θ_cm, a range of about 50° that is restricted by the BH peak. Lastly, one should give special consideration to the selection of the beam energy.
One may find it beneficial to employ higher beam energies, as this will increase the sensitivity to the GPs and the cross-section rate. For example, when measuring at a momentum transfer in the regime of Q^2 ≈ 0.5 GeV^2, one could gain a factor of ≈ 25% by doubling the beam energy from 2 GeV to 4 GeV. Nevertheless, in working through this exercise one has to be careful, since moving the spectrometer to smaller angles will introduce a higher level of accidental rates, which can be a limiting factor on the beam current and on the experimental dead time. At the same time one has to consider the resolution effects as, e.g., the momentum of the electron arm is increased.

VCS extension to the N → ∆ program

The study of the N → ∆ transition involves an ongoing experimental and theoretical effort of nearly three decades, and has contributed significantly to the understanding of the nucleon structure and dynamics [51]. One of the highlights of this program involves the exploration of the two quadrupole amplitudes, i.e., the electric quadrupole (E2) and the Coulomb quadrupole (C2), which allow one to investigate the presence of non-spherical components in the nucleon wave function [52,53]. It is the complex quark-gluon and meson-cloud dynamics of hadrons that give rise to these components, which, in a classical limit and at large wavelengths, would correspond to a "deformation". The spectroscopic quadrupole moment provides the most reliable and interpretable measurement of the presence of these components. For the proton it vanishes identically because of its spin-1/2 nature; instead, the signature of such components is sought in the presence of resonant quadrupole amplitudes in the γ* N → ∆ transition. The ratios of the electric and Coulomb amplitudes to the magnetic dipole amplitude, the EMR and the CMR respectively, are routinely used to quantify the relative magnitude of the amplitudes of interest.
Non-vanishing resonant quadrupole amplitudes signify the presence of non-spherical components in either the proton or the ∆(1232), or, more likely, in both; moreover, their Q^2 evolution is expected to provide insights into the mechanism that generates them. The experimental program has focused primarily on the dominant π^0 and π^+ excitation channels, due to the favourable branching ratios of approximately 66% and 33%, respectively. For these channels, although the procedure of extracting the quadrupole-amplitude signal from the measured cross sections is rather straightforward, the isolation of E2 and C2 is challenging, since there are nonresonant background contributions that are coherent with the resonant excitation of the ∆(1232) and of the same order of magnitude. One has to constrain these interfering background processes (Born terms, tails of higher resonances, etc.) to cleanly isolate the resonant quadrupoles, but doing so with a large number of background amplitudes is nearly impossible. As a result, the amplitudes are extracted with a model error which is often poorly known and rarely quoted. The effect of the physical background amplitudes, and the treatment and control of the corresponding model uncertainties on the transition form factors, represent an open front in this experimental program. The photon channel can be instrumental in this direction, offering a valuable alternative to explore the same physics signal. One difficulty lies in its small branching ratio of approximately 0.6%, which explains why experiments have so far relied primarily on the measurement of the two pion channels. In the photon channel, extracting the signal of interest from the measured cross sections is not as straightforward.
Whereas the pion-electroproduction cross section can be factorized into a virtual-photon flux and a sum of partial cross sections that contain the signal of interest, this is not possible for the photon-electroproduction process; the detected photon can emerge not only from the de-excitation of the ∆(1232), but also from the incoming or scattered electron, i.e., from the Bethe-Heitler process. The VCS reaction γ* p → γp amplitude also contains a Born component. The non-Born amplitude contains the physics of interest, which includes the GPs as well as the resonant amplitudes of the N → ∆ transition. The DR framework has made it possible to analyze this type of experimental measurement, and steps have been taken in that direction in recent years. These measurements carry significant scientific merit: the resonant amplitudes are isolated within different theoretical frameworks in the two (pion and photon) channels, and the background contributions are of a different nature in the two cases, therefore presenting different theoretical problems. Thus, the comparison of the results from the two channels offers important tests of the reaction framework and of the model uncertainties of the world data.

QED Radiative corrections to VCS

Before the advent of VCS as a dedicated research field, the ep → epγ reaction was just seen as being part of the radiative corrections to the (ep → ep) elastic process, namely the internal-bremsstrahlung part, dominated by radiation from the electron lines (i.e., the BH graphs of Fig. 2). For the measurement of GPs, the ep → epγ reaction becomes a specific process, subject to its own radiative effects. It is therefore crucial to understand and handle the radiative corrections to the ep → epγ reaction, as their effect on the cross section is of the same order as the GP effect itself. This topic has been studied extensively in Ref. [54] and also in several thesis works [55-57].
The radiative corrections to ep → epγ have been developed in close analogy with the radiative corrections to elastic scattering (ep → ep), and are globally as important as for the elastic process. This section summarizes the main features for the VCS case. The calculation includes all graphs contributing to order α_QED^4 in the cross section. One distinguishes between virtual corrections, which imply the exchange of a supplementary virtual photon, and real radiative corrections, which imply the emission of a supplementary real photon. All infra-red divergences cancel when combining the soft-photon emission processes from the (virtual + real) corrections. One is then left with: a virtual correction δ_V, plus a real internal correction (i.e., a real radiation coming from any line of the ep → epγ graph) δ_R, plus a real external correction (i.e., a real radiation coming from another nucleus in the target) δ'. The δ_V term is independent of acceptance cuts, and cannot be calculated analytically. Some parts of it 10 have to be evaluated numerically, a task that required the most development and innovative techniques with respect to the elastic case. The result exhibited a remarkable continuity with the virtual correction in the elastic case. The δ_V term varies slowly with Q^2 and is almost constant as a function of the other variables. The δ_R term is divided into an acceptance-dependent part, δ_R1, and an acceptance-independent part, δ_R2. The δ' term is treated in the classical way, as in [58]. The δ_R2 term is analytical, varying slowly with Q^2 and being almost constant as a function of the other variables, except near the BH peaks. The δ_R1 term has large variations over the whole (q'_cm, cos θ_cm, ϕ) phase space. The δ_R1 and δ' corrections are usually implemented in the simulation of the experiment, at the event-generation level, in order to create a realistic radiative tail and apply the experimental cuts properly [59].
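As an illustration of how these terms combine into an overall correction factor, the following sketch uses δ values of the size typical for MAMI kinematics at intermediate Q^2; the numbers and the simple additive/exponentiated forms are illustrative, not a substitute for the full calculation of Ref. [54]:

```python
import math

# Illustrative correction terms (magnitudes typical of Q^2 ~ 0.3 GeV^2
# kinematics with a soft-photon cut; see the numerical values quoted below).
delta_V   = -0.16   # virtual correction
delta_R2  = +0.22   # acceptance-independent real internal correction
delta_R1  = -0.17   # acceptance-dependent real internal correction
delta_ext = -0.07   # real external correction (target-dependent)

delta_tot = delta_V + delta_R2 + delta_R1 + delta_ext   # -> -0.18

# First-order (additive) and exponentiated forms of the overall factor;
# they differ only at the level of delta^2.
F_rad_additive = 1.0 + delta_tot
F_rad_exp = math.exp(delta_tot)

print(F_rad_additive, F_rad_exp)   # ~0.82 vs ~0.835
```

An accuracy of ±1-2% on the correction factor, as quoted below, is thus comparable to the difference between the two forms.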
One gets for the "radiatively corrected" cross section the formal expression [54]:

dσ_corrected^exp = dσ_raw^exp / F_rad ,   with F_rad = 1 + δ_V + δ_R + δ' ,

where dσ_raw^exp is the raw measured cross section, and dσ_corrected^exp is the one that can be compared to the theory. The radiative corrections can also be exponentiated. Smaller terms can also be added, such as the two-photon exchange contribution, δ_1, and radiative corrections on the proton side, δ_2 [54]. Numerically, for instance at Q^2 = 0.3 GeV^2 and kinematics close to those of the first VCS experiment at MAMI, one has [55]: two large terms, δ_V ≃ −0.16 and δ_R2 ≃ +0.22, and a small term, (δ_1 + δ_2) ≃ 0.95%. The terms δ_R1 and δ' are also large, but their value depends on experimental cuts. With a cut on the maximal soft-photon energy ∆E_s = 15 MeV, one gets: δ_R1 ≃ −0.17 and δ' ≃ −0.07. The accuracy on the F_rad factor is estimated to be ±1-2%. As in many other fields, asymmetry measurements in VCS are much less affected by radiative corrections than absolute cross-section measurements.

Sect. 4.1 gives a panel of all VCS experiments performed so far and the type of results obtained. Sections 4.2 to 4.7 are devoted to the measurement of the structure functions P_LL − P_TT/ǫ and P_LT and the scalar GPs of the proton from unpolarized data, a topic which has seen the longest and most continuous developments. The other VCS experiments and results will be summarized in Sect. 4.8.

Types of experiments

Due to the limited acceptance of spectrometers, each experiment has been performed at isolated values of Q^2 and ǫ. Regions in W both below and above the pion production threshold have been explored, and some experiments used polarization degrees of freedom. Tables 2 and 3 summarize the characteristics of each experiment.

Experiments dedicated to the scalar GPs and structure functions

Here we give a brief overview of the VCS experiments which have measured the structure functions P_LL − P_TT/ǫ and P_LT and the scalar GPs of the proton.
The MAMI-I experiment [7,74] was the truly pioneering one, in which all experimental aspects, from design to analysis, were established for the first time, including, e.g., radiative corrections (see Sect. 3.7), dedicated Monte-Carlo simulations, LEX fit methods, etc. The covered range in q'_cm was complete, but data were limited to in-plane angles. The JLab experiment [75,76] explored the highest photon virtualities so far, in the range 1-2 GeV^2, and found very small GPs, indicating a fast fall-off with Q^2. This experiment was also the first one to show that GP extractions both below and above the pion production threshold, using respectively the LEX and DR formalisms, gave consistent results. The MIT-Bates experiment [77,78] exploited out-of-plane kinematics more specifically, and made measurements at the smallest Q^2 so far (0.057 GeV^2), enabling the first estimation of the mean-square radius of the electric GP (see Sect. 4.6). This experiment was also the first one to evidence a bias in the LEX fit (see Sect. 4.4). At this stage the Q^2 dependence of the observables started to appear as non-trivial, especially for P_LL − P_TT/ǫ and α_E1(Q^2), showing an enhancement at Q^2 = 0.33 GeV^2 with respect to the other data points (see Figs. 14 and 18). Measurements were repeated at this Q^2 during the MAMI-IV experiment [11] (ignoring the double-polarization information), at angular kinematics very similar to MAMI-I, and confirmed the results previously found. This situation, with scarce data points and a puzzling Q^2 behavior, motivated the need for new measurements. Two recent experiments performed in the intermediate Q^2 range brought new insight. The MAMI-V experiment [82] determined the electric GP at Q^2 = 0.20 GeV^2 from cross-section and asymmetry measurements in the ∆(1232) resonance region. In parallel the experiment offered an important first measurement of the N → ∆ quadrupole amplitude through the photon channel (cf. Sect.
4.3.4), thus providing a stringent control of the model uncertainties of the N → ∆ transition world data. The MAMI-VI experiment [83] was performed at three values of Q^2: 0.10, 0.20 and 0.45 GeV^2. The structure functions P_LL − P_TT/ǫ and P_LT and the scalar GPs were extracted with good precision from cross-section data below the pion production threshold. Out-of-plane kinematics were designed at each Q^2, in the line of the MIT-Bates experiment, but covering a larger angular phase space (cf. Fig. 20). Polarizability fits were performed with and without a novel bin-selection method aimed at suppressing the higher-order terms of the LEX (see Sect. 4.3.1). This was another inheritance from the MIT-Bates experiment. The new experiment E12-15-001 at JLab [84] acquired data recently and is currently at the early stages of the data analysis (see Sect. 6.2.2). The experiment aims to determine the two scalar GPs in the range Q^2 = 0.3 GeV^2 to Q^2 = 0.75 GeV^2 through cross-section and asymmetry measurements in the nucleon resonance region. The experiment will rely on the extraction of the GPs through the DR framework. The higher energy employed in these measurements offers an enhanced sensitivity to the GPs, and the results will provide a direct cross-check of the MAMI-I and MAMI-IV results, where the enhancement of the electric GP with Q^2 was previously observed.

Extraction methods for the scalar GPs and structure functions

As for the extraction of polarizabilities in RCS, the extraction of GPs in VCS is not direct and requires a fit, made within a theoretical framework. Experiments use two different frameworks: a model-independent one based on the LEX, and a model-dependent one based on Dispersion Relations. These two formalisms have different domains of validity in W. The LEX formalism is valid only below the pion production threshold. Indeed, in Ref. [14] the VCS amplitude is taken to be real, a property that holds only for W < (M_N + m_π).
As soon as hadronic intermediate states other than the nucleon can be created on-shell, starting with a nucleon plus a pion, the VCS amplitude acquires an imaginary part and the LEX formalism [14] is no longer valid. The LEX fit (on unpolarized data) at fixed q_cm and ǫ yields only the two structure functions P_LL − P_TT/ǫ and P_LT; individual GPs are not accessed 11. The scalar GPs can be deduced only if one subtracts from these two structure functions their spin-dependent part, i.e., P_TT and P_LT^spin. In the absence of any available measurements of the spin GPs, this subtraction relies on a model calculation. In essence, since dσ_BH+Born is the cross section without any polarizability effect, the structure functions P_LL − P_TT/ǫ and P_LT are always obtained by fitting the deviation of dσ_exp from dσ_BH+Born. The difference (dσ_exp − dσ_BH+Born), or more precisely the quantity δM = (dσ_exp − dσ_BH+Born)/(Φ · q'_cm), therefore plays a special role in the LEX fits 12. We have seen in Sect. 3.4 that Dispersion Relations provide a very appropriate and efficient formalism to analyze VCS experiments both below and above the pion production threshold. The imaginary part of the VCS amplitude is a central ingredient of the model, entering dispersive integrals saturated by πN intermediate states. As a key feature, the existence of free parameters in the model, related to the unconstrained part of α_E1(Q^2) and β_M1(Q^2) (cf. Eq. (11)), allows one to perform an experimental fit in order to extract these scalar GPs. On the other hand, the spin GPs are fully constrained in the model and cannot be fitted. The formalism is suited for all values of W up to ∼ 1.3 GeV, i.e., slightly above the ππN threshold, thus covering most of the ∆(1232) resonance region.

LEX fits

The LEX fit in its standard form is based on the comparison of a set of measured cross sections, dσ_exp, at fixed q_cm and ǫ, to the expression of dσ_LEX in Eq. (5).
This means that the quantity δM defined above is assumed to have no dependence on q'_cm. The fit is most simple, consisting in a linear χ^2 minimization of δM with two free parameters, the structure functions P_LL − P_TT/ǫ and P_LT:

χ^2 = Σ_i [ δM_i^exp − V_LL,i (P_LL − P_TT/ǫ) − V_LT,i P_LT ]^2 / σ_i^2 .

This standard LEX fit has two virtues: model-independence and simplicity. However, we still have at present a limited understanding of its validity. Other variants of the LEX fit exist, addressing in several ways the possible q'_cm evolution of the cross section. In Refs. [7,60] two variants were investigated on the cross-section data of the MAMI-I experiment: i) a linear q'_cm dependence of δM; ii) a more complete form of the (BH+Born)-(non-Born) interference term, in which all six lowest-order GPs are free parameters. This study essentially concluded that the observed q'_cm dependence of δM was always weak, but the obtained structure functions nevertheless showed some sensitivity to the fitting hypothesis. More recently, in the MAMI-VI experiment [83], another variant of the LEX fit was considered, by selecting only the regions in phase space where the terms O(q'_cm^2) in Eq. (5) are small enough to be neglected. To this aim, one uses the DR model, in which the cross section includes all orders in q'_cm. By subtracting the dσ_LEX cross section of Eq. (5) from the DR cross section dσ_DR, one isolates just the higher-order terms of the LEX. The quantity O(q'_cm^2)_DR = (dσ_DR − dσ_LEX)/dσ_BH+Born is calculated at the kinematics of every cross-section point. Then, keeping only the points where |O(q'_cm^2)_DR| is smaller than a few percent, one performs the standard LEX fit as defined above. Although model-dependent, this estimator O(q'_cm^2)_DR can be useful to improve the reliability of the LEX fit. Such an improvement has already been observed for the results at Q^2 = 0.1 GeV^2 [83]. Values reported in Tables 4 and 5 for the MAMI-VI experiment are obtained using this phase-space selection criterion.
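In schematic form, the standard LEX fit described above is an ordinary weighted linear least-squares problem. A minimal numpy sketch on synthetic pseudo-data follows; the kinematic coefficients, errors and noise are invented, while the structure-function values generating the pseudo-data are taken close to the experimental ones at Q^2 = 0.2 GeV^2:

```python
import numpy as np

rng = np.random.default_rng(1)
nbins = 40

# Invented kinematic coefficients V_LL, V_LT for a set of (theta_cm, phi) bins;
# in a real analysis they come from the LEX formula at each bin.
V_LL = rng.uniform(0.2, 1.0, size=nbins)
V_LT = rng.uniform(-1.0, 1.0, size=nbins)

# "True" structure functions used to generate the pseudo-data (GeV^-2).
SF_true = np.array([17.6, -5.3])        # (P_LL - P_TT/eps, P_LT)

sigma = 0.5 * np.ones(nbins)            # pseudo-measurement errors on delta_M
dM_exp = V_LL * SF_true[0] + V_LT * SF_true[1] + rng.normal(0.0, sigma)

# Linear chi^2 minimization with two free parameters: weighted least squares.
A = np.column_stack([V_LL, V_LT]) / sigma[:, None]
fit, *_ = np.linalg.lstsq(A, dM_exp / sigma, rcond=None)
print(fit)   # close to (17.6, -5.3)
```

The statistical errors quoted later in the text correspond to the Δχ^2 = 1 contour of this kind of minimization.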
11 The particular case of the LEX fit in doubly polarized VCS, where the six lowest-order GPs can in principle be disentangled, will be discussed in Sect. 4.8.
12 This quantity is noted (M exp …

DR fits below the pion production threshold

VCS data below W = M_N + m_π can always be analyzed in terms of the two formalisms, LEX and DR, and most experiments performed this double analysis. The DR fit is based on the comparison of the measured cross section to the one calculated by the DR model. In practice this fit is less straightforward than the LEX fit, because the structure functions or GPs do not appear in a simple analytic form in the model cross section. One usually has to scan the whole phase space of the free parameters of the model. Two-dimensional grids, either in (Λ_α, Λ_β) or in (α_E1(Q^2), β_M1(Q^2)), have been used to this aim. One builds a χ^2 at each node of the grid:

χ^2(Λ_α, Λ_β) = Σ_i [ dσ_i^exp − dσ_i^DR(Λ_α, Λ_β) ]^2 / σ_i^2 ,

and finds the minimum numerically. The minimization provides the values of the scalar GPs, and of the structure functions P_LL − P_TT/ǫ and P_LT as well.

Systematic errors

In most VCS experiments, the results are dominated by systematic errors, which are larger than the statistical ones by a factor of ∼ 2 to 4 (or sometimes more) 13. Statistical errors are given by the fit itself, typically by the size of the contour at (χ^2_min + 1). For the systematic errors, several sources of uncertainty are well identified: 1) the experimental luminosity and detector efficiencies (triggering, tracking, etc.); 2) radiative corrections to VCS; 3) the choice of proton form factors in the calculation of dσ_BH+Born; 4) the solid-angle calculation by Monte-Carlo; 5) the limited knowledge of spectrometer optics and experimental offsets; 6) the fitting assumption, or model uncertainty.
Sources 1) to 3) are generally considered as acting as a global normalization uncertainty on the cross section, while the other sources may not be so global and may induce point-to-point distortions of the angular distributions. Overall, it is hard to reduce the total systematic error on the cross sections below the ±3-4% level. Although data at low q'_cm do not bring much information about the GPs, they can help in reducing the total systematic error. Indeed, for q'_cm ≤ 50 MeV/c, the GP effect is very small, of the order of 1% (resp. 2%) at q'_cm = 25 (resp. 45) MeV/c. In this q'_cm range the GP effect is therefore not fully negligible, but it is well under control, even if evaluated with approximate GP values. In these conditions the O(q'_cm^2) terms vanish and the measured cross section must match the theoretical dσ_LEX. The test consists in fitting the global renormalization factor F_norm that realizes this matching. This is done by comparing dσ'_exp = F_norm × dσ_exp with dσ_LEX 14. The factor F_norm found at low q'_cm is then applicable to the remaining part of the data set, at higher q'_cm, considering that it corrects for global systematic errors which affect every cross-section value in the same way (i.e., sources 1 to 3 mentioned above). The effect of such systematic errors is thus greatly reduced by adopting this renormalization procedure. Every VCS experiment having low-q'_cm data utilized them to test the absolute normalization of the cross section, and to renormalize it if needed [7,74-78,83]. By this method, the overall normalization uncertainty was reduced to, e.g., ±1% in the MAMI-I experiment [57,74] and to ±1.5% in the MAMI-VI experiment [83]. To apply this renormalization procedure (and more generally to extract GPs) one has to make a choice for the proton form factors (G_E^p, G_M^p), which enter the BH+Born cross section.
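Before turning to the form-factor dependence, the low-q'_cm normalization test described above can be sketched as a one-parameter χ^2 fit; the cross-section values below are invented for illustration:

```python
import numpy as np

# Invented measured and theoretical (LEX) cross sections in low-q'_cm bins,
# where the GP effect is small and under control.
dsig_exp = np.array([10.3, 8.1, 6.5, 5.2])   # measured
dsig_lex = np.array([10.0, 7.9, 6.3, 5.0])   # dsigma_LEX at the same kinematics
err      = 0.03 * dsig_exp                   # ~3% point-to-point errors

# One-parameter chi^2 fit of F_norm such that F_norm * dsig_exp matches
# dsig_lex (analytic weighted least-squares solution).
w = 1.0 / err**2
F_norm = np.sum(w * dsig_exp * dsig_lex) / np.sum(w * dsig_exp**2)
print(F_norm)   # global factor, then applied to the whole data set
```

With relative errors, this estimator reduces to the mean of the bin-by-bin ratios dσ_LEX/dσ_exp, which makes the "global scale" interpretation explicit.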
Different form-factor parametrizations can vary by several percent at intermediate Q^2, inducing variations of dσ_BH+Born which can be even larger. The fitted value of F_norm therefore depends directly on the form-factor choice. This "floating normalization" may seem dangerous at first sight for the stability of the polarizability fit, but it is actually a way to eliminate almost completely the form-factor dependence of the physics results, for the following reason. The key feature is that, when changing the value of G_E^p or G_M^p, dσ_BH+Born just scales globally, to a good approximation 15. Therefore the normalization factor F_norm of the low-q'_cm test will scale accordingly, and in the polarizability fit, dσ_exp and dσ_BH+Born will both be rescaled by the same factor. Due to the nature of δM 16, this process stabilizes the fit. In other words, with a proper normalization of dσ_exp relative to dσ_BH+Born, made possible using low-q'_cm data, the polarizability fit can concentrate on the important feature, which is the shape of (dσ_exp − dσ_BH+Born) versus (q'_cm, θ_cm, ϕ), without being disturbed by scale effects. It was shown in several thesis works [60,70-72] that, if one follows this procedure, the results in terms of structure functions and GPs become essentially independent of the form-factor choice. In the MAMI-IV experiment [11] this was not the case: due to the lack of cross sections at low q'_cm, an explicit and non-negligible error is quoted as coming from the proton form factors, mostly for P_LT. To conclude, this whole argumentation on normalization holds to a precision of about ±1% of the cross sections, but not better. Therefore VCS analyses can reach a systematic error due to the absolute normalization uncertainty as low as ±1%, but this presently seems to be an irreducible limit.

DR fits in the first resonance region

The DR model is the only tool to extract GPs from data in the nucleon resonance region.
The model can be used in different ways to obtain information about GPs, and several types of experimental analyses have been performed in this W -region. One can perform a DR fit as described in Sect. 4.3.2 to extract the scalar GPs, i.e., by scanning the whole phase space of the (Λ α , Λ β ) parameters. Such an analysis was done in the JLab experiment [61,76] with the data set I-b at Q 2 = 0.92 GeV 2 and W mostly above the pion production threshold, in the range [1-1.28] GeV. Cross sections were measured at backward polar c.m. angle (cos θ cm = −0.975 to −0.650) and full azimuthal coverage. This first DR fit in the nucleon resonance region proved to be competitive: its results were in very good agreement with the ones obtained by the same experiment at W < (M N + m π ), and they had significantly smaller error bars, mostly for the systematics (cf. the JLab-Ib results compared to the JLab-Ia results in Tables 4 and 5). The first JLab VCS experiment thus demonstrated great success in extracting the scalar GPs from measurements in the nucleon resonance region, opening up the path for more measurements of this type. The aim of these experiments was further extended to also include the study of the N → ∆ transition form factors (cf. Sect. 3.6). The first such experiment was MAMI-III [80]. The experiment was initially designed to measure the H(e, e ′ p)π 0 reaction in the nucleon resonance region, and thus the selection of the kinematics, as well as the experiment beam time, was not optimized for the simultaneous measurement of the photon channel. The experiment offered limited sensitivity to the scalar GPs, but was nevertheless successful in making a first exploration of the transition form factors through the photon channel. The next step forward involved MAMI-V [82], a dedicated experiment that would focus on the parallel extraction of the scalar GPs and of the N → ∆ transition form factors.
A new feature that was introduced in this experiment is the measurement of the cross section azimuthal asymmetries of the type (dσ ϕ=180 • − dσ ϕ=0 • )/(dσ ϕ=180 • + dσ ϕ=0 • ), which offers sensitivity to the physics signal while at the same time allowing for the suppression of part of the systematic uncertainties. In these measurements the BH+Born contribution accounts for ≈ 20% of the total cross section (see Fig. 12), while the primary sources of systematic uncertainties involve the uncertainties in the momenta and the angles of the two spectrometers, the luminosity, the knowledge of the acceptance, and the radiative corrections. The combined uncertainty coming from the solid angle, the luminosity, and the radiative corrections is of the order of ≈ ±2.5% to ±3% for these measurements, while the part that depends on the uncertainties in the momenta and the angles of the two spectrometers varies on a per-setting basis. The statistical uncertainty on the cross section is smaller, ranging between 1.5% and 2%. In the DR fits one has to consider different parameterizations for the proton form factors in the analysis, since these quantities enter the calculation of the BH+Born cross section, as well as the uncertainty in the knowledge of the resonant amplitudes. These two types of uncertainties are of about the same level. In these experiments one measures parasitically the pion electroproduction cross section within the spectrometer acceptance, which is at least an order of magnitude larger compared to the VCS one. As this cross section is well known, it offers a valuable normalization measurement for these experiments. At the same time, one has to be careful with the small tail of the pion events that could contaminate the missing-mass peak of the photon channel. The correction from such contributions is rather small and introduces an uncertainty in the cross section at the level of 0.1%.
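The azimuthal asymmetry defined above is a simple ratio in which any global multiplicative systematic (e.g., a luminosity or solid-angle normalization offset common to both angular settings) cancels exactly. A minimal sketch with illustrative numbers:

```python
def azimuthal_asymmetry(dsig_180, dsig_0):
    """(dsig_180 - dsig_0) / (dsig_180 + dsig_0); global scale factors cancel."""
    return (dsig_180 - dsig_0) / (dsig_180 + dsig_0)

# Illustrative cross-section values (arbitrary units, not real data)
s180, s0 = 3.1e-3, 2.6e-3
asym = azimuthal_asymmetry(s180, s0)

# A common multiplicative systematic, e.g. a 3% normalization offset,
# drops out of the ratio:
scaled = azimuthal_asymmetry(1.03 * s180, 1.03 * s0)
```

Only systematics that differ between the two settings (e.g., spectrometer angle offsets) survive in such a ratio.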
The measurement of the azimuthal asymmetries helps to reduce the effect of the systematic uncertainties in the fitting of the scalar GPs, and for MAMI-V the α E1 (Q 2 ) systematic uncertainty was found to be nearly double the statistical one. The sensitivity of these measurements to the electric GP is exhibited in Fig. 13. The sensitivity to the Coulomb quadrupole amplitude is also explored in the figure, and a detailed discussion about it will be presented in Sect. 4.8.3.

Results for the structure functions P LL − P T T /ǫ and P LT

The structure functions are the observables directly fitted from experiments in the LEX approach. As seen in Sect. 3.2.3, P LL − P T T /ǫ as well as P LT are combinations of scalar and spin GPs. Table 4 collects the values of P LL − P T T /ǫ and P LT extracted by the various experiments. Most results come in "doublets", under the form of a LEX fit and a DR fit, performed on the same cross-section data, and presented in two successive lines of Table 4. Strictly speaking, the results of LEX and DR fits are comparable only at the level of these structure functions, not at the level of the GPs, since the LEX fit does not access the latter directly.

Figure 13: (color online) Cross sections and asymmetries measured at Q 2 = 0.20 GeV 2 from the MAMI-V experiment. The DR calculation [39] is also shown with different variations of α E1 (Q 2 ) and of CMR to exhibit the sensitivity of the measurements to these amplitudes. Figure taken from Ref. [82].

From Table 4 it is clear that each laboratory explored a Q 2 -range of its own, going from very low (MIT-Bates) to intermediate (MAMI) and high Q 2 (JLab). It is also clear that the (total) error of the measurements decreases less rapidly with Q 2 than the observables themselves. At high Q 2 (∼ 1 GeV 2 and above) it becomes increasingly difficult to measure structure functions or GPs which are significantly non-zero (within their error bar). This is evidenced by the JLab results [75].
Figure 14 depicts the whole set of values of P LL − P T T /ǫ and P LT reported in Table 4. Note that the structure functions P LL − P T T /ǫ and P LT at Q 2 = 0 are simply proportional to the RCS polarizabilities α E1 and β M1 (cf. the Appendix). Thanks to the most recent experiments, the Q 2 -region below 0.5 GeV 2 in Fig. 14 starts now to be densely filled, with precise data, and a rather consistent Q 2 -picture emerges. The vast majority of points agree well with a smooth behavior, in remarkably good agreement with a typical DR model calculation (solid curve), in which a single dipole shape is assumed for the unconstrained part of the scalar GPs (cf. Eq. (11)) 17 . We remind that the DR model does not by itself predict the electric and magnetic GPs, and that the DR curve in Fig. 14 is obtained using (Λ α , Λ β ) parameters fitted from experimental data. The dipole ansatz was introduced in the model only as a practical way to parametrize a shape in Q 2 ; any other shape could be used instead. Experimental fits are made at each Q 2 separately, and independently of any assumption on the global Q 2 -dependence. Yet, in view of the present data, this dipole ansatz seems to be not too far from reality.

Figure 14: the structure-function data of Table 4, including the RCS point [12]. Inner (resp. outer) error bars are statistical (resp. total). Some points are slightly shifted for visibility. The solid curve is the DR model calculation for (Λ α = Λ β = 0.7 GeV) and ǫ = 0.65 [39].

There are two exceptions to the overall smooth Q 2 -behavior in Fig. 14. The first one is the MIT-Bates LEX point for P LT , lying at a large negative value. It is well understood in terms of a bias in the LEX fit [77,78], due to the competition of the lowest-order and higher-order GP terms of the low-energy expansion 18 . The DR fit in this experiment is obviously better behaved than the LEX fit, and gives a more sensible value of P LT . The second exception consists of the three MAMI points for P LL − P T T /ǫ at Q 2 = 0.33 GeV 2 , all lying over the general trend. No experimental or analysis bias has been identified in the two experiments (MAMI-I and IV), and this enhancement remains presently unexplained.

The left part of Fig. 15 shows three model calculations of P LL − P T T /ǫ and P LT in the lower part of the Q 2 -range, where ChPT is applicable. Historically, the HBChPT O(p 3 ) calculation of Ref. [34] appeared as very successful in describing the VCS observables, due to its good agreement with the measurements from the MAMI-I and MIT-Bates experiments. In particular, the large values of P T T and P LT spin predicted by this model, i.e., the spin-dependent part of the two extracted structure functions, were a key ingredient to reproduce the MAMI-I results 19 . The HBChPT calculation was then pushed one order higher, but only for the spin GPs [32,33]. The result showed a severe lack of convergence: see for instance the HBChPT O(p 3 ) and O(p 4 ) calculations of P T T in Fig. 16. In addition, such a O(p 4 ) calculation does not exist yet for the scalar GPs. Therefore it appears hard to draw any firm conclusion from the comparison of HBChPT with VCS experiments, and the agreement mentioned above with the MAMI-I results is possibly accidental.

A new calculation of VCS observables is provided by the recently developed covariant BChPT [36]. The structure function P LL − P T T /ǫ calculated by this model is in better agreement with the data than HBChPT, although it still lies above the most recent experimental points (Fig. 15). The structure function P LT from covariant BChPT reproduces well the VCS data, although being at tension with the RCS data. The theoretical uncertainty of this model (shaded area) is quite large, but hopefully it can be reduced in the future. The right part of Fig. 15 shows the sensitivity in the DR model when one changes the free parameters (Λ α , Λ β ). We stress that, in the VCS analyses, these parameters have always been fitted independently for each experimental data set. The fitted values have all been found in a remarkably narrow range, [0.5,0.8] GeV for Λ α and Λ β (with the exception of the data at Q 2 = 0.33 GeV 2 ). Throughout this article we have considered (Λ α = Λ β = 0.7 GeV) as an average reference, compatible with most of the data points.

Finally, we show in Fig. 17 the scalar and spin-dependent parts of the measured structure functions, as calculated by two models: DRs and covariant BChPT 20 . In these two calculations, the spin-dependent part (dashed-dotted curves) is very similar, and of small magnitude (in contrast to HBChPT O(p 3 ), not shown here). The dominance of the scalar part means that P LL − P T T /ǫ and P LT give an almost direct picture of the electric and magnetic GPs, at least at low and intermediate Q 2 . At higher Q 2 it is less and less true; in the DR model for instance, the spin part tends to contribute as much as the scalar part to the measured structure functions when Q 2 reaches 1 GeV 2 and above 21 .

17 Note that the experimental points correspond to various values of ǫ, while the DR curve is calculated at a fixed ǫ = 0.65. However the comparison between theory and experiment is not affected, since in the DR model the P T T structure function is very small (see Fig. 16). A DR curve calculated at ǫ = 0.9 would be almost exactly superimposed to the one of Fig. 14.

18 In the in-plane kinematics of the MIT-Bates experiment, at θ cm = 90 • and ϕ = 180 • , the lowest-order GP term of the LEX was exceptionally small, due to a near-perfect cancellation between V LL · (P LL − P T T /ǫ) and (V LT · P LT ). The O(q ′ 2 cm ) terms were then non-negligible, and ignoring them in the LEX fit was too approximate. This first evidence of a problem in a LEX fit was instructive for posterior designs, e.g., of the MAMI-VI experiment.

20 We remind that in the DR model, spin GPs and their combinations are fully predicted.
Results for the scalar GPs

Table 5 collects the values of the proton scalar GPs extracted by the various experiments. These values are a direct output of the fit only in the case of DR analyses; in the case of LEX analyses, only P LL − P T T /ǫ and P LT are fitted, and their spin-dependent part must be subtracted. To this aim we use the DR model, for several reasons: its good reliability, and its validity over the full Q 2 -range, a feature that no other model provides. The obtained picture of the scalar GPs is thus model-dependent (but consistently so), and would be different if one used another model for this subtraction. We have reported four sets of polarizability values at Q 2 = 0 in Table 5 in order to reflect the present state of knowledge of α E1 and β M1 in RCS. The error bar quoted for these polarizabilities by the PDG [12] is quite small and may not reflect the actual spread between the various analyses [43,86,87,89]. An intense effort is ongoing experimentally and theoretically to pin down the RCS polarizabilities; see, e.g., Refs. [9,10,87,90]. Figure 18 displays the data of Table 5 for α E1 (Q 2 ) and β M1 (Q 2 ). Apart from a change of sign when going from P LT to the magnetic GP, Figs. 14 and 18 show great similarities in shape. This is due to the smallness of the spin-dependent part in P LL − P T T /ǫ and P LT , especially in the DR model (cf. Fig. 17). Over the explored Q 2 -range, there is again a good overall agreement of the experimental data of Fig. 18 with a smooth fall-off, as described by the DR model calculation (solid curve) already shown in Fig. 14 22 . One observes again the exception of the MAMI-I and MAMI-IV data points at Q 2 = 0.33 GeV 2 , mostly for the electric GP.
This enhancement of the data for the electric GP is more pronounced than the corresponding one for P LL −P T T /ǫ; it simply originates from the presence of the proton electric form factor in P LL 23 . For the MAMI-I experiment, the DR fit [7], which is a priori the most reliable fit, gives a value of β M1 (Q 2 ) in smooth agreement with the general trend, but still gives a high value of α E1 (Q 2 ), that is also confirmed by the MAMI-IV experiment. In the absence of explanation on the experimental side, this localized enhancement of the electric GP has effectively to be considered as the signal of a physics mechanism not yet understood. In Ref. [91], a new parametrization of the unknown asymptotic contribution to the DR calculation of α E1 (Q 2 ) has indeed been proposed to take into account this local "bump". More measurements would be necessary in the region of Q 2 = 0.33 GeV 2 to investigate further this puzzling behavior. The new JLab VCS experiment E12-15-001 [84] is presently exploring the intermediate Q 2 -range of 0.3-0.7 GeV 2 (see Sect. 6.2.2) and will bring elements of an answer. If confirmed, a non-smooth Q 2 -dependence will call for really unusual explanations. As a side remark, it is not totally excluded that an anomaly observed in α E1 (Q 2 ) could have another origin, e.g., in the spin GPs entering the P T T structure function.

Figure 17: (color online) The measured structure functions in terms of their scalar and spin-dependent parts, for two models: DR (thick black curves) [3,39] and covariant BChPT (thin red curves) [36]. Dashed curves are for the scalar part, dashed-dotted curves for the spin-dependent part, and solid curves for the sum. The DR calculation uses (Λ α = Λ β = 0.7 GeV). In the left plot one has ǫ = 0.65, so that the dashed-dotted curves correspond to (−P T T /0.65).
Indeed, all the present extractions of the scalar GPs assume a smooth Q 2 -dependence of the P T T structure function, either directly (DR fit) or indirectly (LEX fit + subtraction of the spin-dependent part). But one has to keep in mind that no data exist to confirm or invalidate this behavior. The smallness of β M1 (Q 2 ) w.r.t. α E1 (Q 2 ) makes it difficult to determine the magnetic GP with a good precision (in relative value); this is illustrated, e.g., by the spread of the three data points at Q 2 = 0.33 GeV 2 in Fig. 18. Nevertheless a consistent behavior tends to emerge from the world data, notably thanks to the precise data points from the MAMI-VI experiment. The low-Q 2 measurements suggest the existence of an extremum of β M1 (Q 2 ), weakly pronounced, in the region near Q 2 = 0.1 GeV 2 ; but the exact shape depends crucially on the actual RCS value, which is under debate [9,10,87,90]. The DR model as shown in Fig. 18 accounts rather well for the measurements of the magnetic GP over the full Q 2 -range, with just a single dipole ansatz parametrizing the unconstrained part of β M1 (Q 2 ).

Figure 18: the scalar-GP data of Table 5, including the RCS point [12]. Inner (resp. outer) error bars are statistical (resp. total). Some points are slightly shifted for visibility. The solid curve is the DR model calculation for (Λ α = Λ β = 0.7 GeV).

The left plots of Fig. 19 show three model calculations of α E1 (Q 2 ) and β M1 (Q 2 ), in complete analogy to Fig. 15. As already mentioned in Sect. 3.4, the HBChPT [34] and covariant BChPT [36] calculations are in close agreement for the electric GP. However, due to their different calculation of P T T (cf. Fig. 16), the two models differ visibly for P LL − P T T /ǫ (cf. Fig. 15). Secondly, in the right plots of Fig. 19 we note that the presence of an extremum of the magnetic GP at low Q 2 depends on the value of Λ β in the DR model.
For example, Λ β = 0.5 GeV generates an extremum, but the DR curve at this Λ β will overshoot most of the data points at higher Q 2 (≥ 0.4 GeV 2 ). This suggests that, at least for β M1 (Q 2 ), the single-dipole ansatz could be replaced by another Q 2 -parametrization, which would better describe the data. The present DR description of the scalar GPs is nevertheless quite satisfactory. The delicate balance between the large diamagnetic and paramagnetic parts of β M1 (Q 2 ) (cf. Fig. 4) can already be well-tuned in the model. In addition, the description of α E1 (Q 2 ), with an almost completely dominant [asymptotic + beyond πN] component behaving as a pure dipole, is able to reproduce most of the data.

Mean-square polarizability radii

Mean-square radii are a basic measure of the extension of spatial distributions. Similarly to form factors, the mean-square radius of α E1 (Q 2 ) and β M1 (Q 2 ) is obtained from the slope of the electric and magnetic GPs at Q 2 = 0. One has for instance:

⟨r 2 ⟩ α E1 = ( −6 / α E1 (0) ) · [ dα E1 (Q 2 ) / dQ 2 ] at Q 2 = 0 .    (13)

These radii can be determined using a DR fit to the very-low Q 2 data in VCS. Such a work was presented in Ref. [78], based on the two experimental data points: RCS and the MIT-Bates measurement at Q 2 = 0.057 GeV 2 . With the addition of the MAMI-VI measurements at Q 2 = 0.1 GeV 2 and the new RCS values of Ref. [12], it is appropriate to give here an update of these mean-square polarizability radii. We use the same method as described in Ref. [78], and the new results are reported in Table 6, for the full DR calculation and for the separate πN and asymptotic contributions. The error bars for the total results and for the asymptotic contributions take into account the uncertainties from the RCS experimental value (at the denominator of Eq. (13)) and from the fit of the Λ α and Λ β parameters to the VCS data points.
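As an illustration of how a mean-square radius follows from the slope of a GP at Q 2 = 0: for an assumed single-dipole shape α E1 (Q 2 ) = α E1 (0)/(1 + Q 2 /Λ α 2 ) 2 (the Λ α value below is purely illustrative), the slope at Q 2 = 0 is −2α E1 (0)/Λ α 2 , so the radius reduces to 12(ħc) 2 /Λ α 2 . A numerical-derivative sketch:

```python
import numpy as np

HBARC = 0.19733  # GeV.fm

def alpha_dipole(q2, alpha0=1.0, lam=0.70):
    """Assumed dipole shape alpha(Q^2) = alpha(0)/(1 + Q^2/Lam^2)^2 (Q^2, Lam^2 in GeV^2)."""
    return alpha0 / (1.0 + q2 / lam**2)**2

# <r^2> = -(6/alpha(0)) * d alpha/dQ^2 at Q^2 = 0, converted from GeV^-2 to fm^2
h = 1e-6  # GeV^2 step for the central-difference derivative
slope = (alpha_dipole(h) - alpha_dipole(-h)) / (2 * h)
r2 = -6.0 * slope * HBARC**2 / alpha_dipole(0.0)   # fm^2

# Analytic check for the dipole: <r^2> = 12 (hbar c)^2 / Lam^2
r2_analytic = 12.0 * HBARC**2 / 0.70**2
```

With this toy Λ α = 0.70 GeV the dipole alone gives a radius just below 1 fm 2 ; the full DR result of course also contains the πN dispersive contribution.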
On the other hand, the πN contribution is fully determined by the dispersion integrals and the corresponding error bar reflects only the uncertainties on the RCS polarizabilities. The total results for α E1 are consistent, within the error bars, with the first determination of Ref. [78]. However, we find a more pronounced contribution from the πN channel w.r.t. the asymptotic term. Furthermore, the mean-square electric polarizability radius is much larger than the mean-square charge radius (which is about 0.77 fm 2 ), showing the effect of the deformation of the meson cloud of the proton under the influence of an external electric field. The results for the mean-square magnetic radius are much better constrained than in the determination of Ref. [78], thanks to the recent low-Q 2 measurements at MAMI. The total result comes from a delicate cancellation between the negative asymptotic (diamagnetic) contribution and the positive πN (paramagnetic) contribution, with the dominance of the diamagnetic term associated with the long-distance effects of the pion cloud.

4.7 Further comments on the O(q ′ 2 cm ) term of the LEX

We have seen in Sect. 4.3.1 that the DR model provides a way to estimate the higher-order terms of the LEX expansion. Actually the low-energy theorem of Ref. [14] is not an expansion in q ′ cm but an expansion in (q ′ cm /q cm ) 24 . Therefore when q cm decreases, or equivalently when Q 2 decreases, one may wonder how the validity of the LEX truncation evolves, and whether the higher-order terms of the LEX become more important. The DR estimator introduced in Sect. 4.3.1 allows us to study this question. We have built this quantity, i.e., the higher-order term O(q ′ 2 cm ) as given by the DR model, divided by dσ BH+Born , for various experimental conditions. Fig. 20 displays the result for seven different experiments, at the highest measured values of q ′ cm below the pion production threshold (which are always around 100 MeV/c), in the 2D-plane (cos θ cm , ϕ).
The DR estimator was evaluated using the fitted values of the structure functions in each case. The top plots show indeed the anticipated general trend: when Q 2 decreases, from right to left, the quantity O(q ′ 2 cm ) DR tends to reach high values in wider regions in (cos θ cm , ϕ) 25 . The bottom plots show the angular phase-space where this estimator remains small (< 3%) compared to a typical first-order GP effect of 10-15%. This region, displayed as a filled area, is the one where the LEX truncation to first order is a priori most reliable. These plots illustrate how the choice of angular kinematics can potentially impact the LEX fit. Namely, the filled area shrinks when Q 2 decreases, pointing to difficulties in doing a proper LEX fit at very low Q 2 . By looking at the points where cross sections have been measured (open black circles in the bottom plots) one sees that the various experiments made quite different choices, depending on the adopted strategy and the possibilities offered by the apparatus. Of course the DR estimator presented here should not be taken too strictly. However, as said in Sect. 4.3.1, applying a selection criterion based on the O(q ′ 2 cm ) DR quantity can be seen as a valuable attempt to improve the reliability of the LEX fit, at the price of introducing a slight (DR-)model dependence.

Other experimental results

The spectrum of low-energy VCS observables is wider than just the two structure functions P LL −P T T /ǫ and P LT and the scalar GPs of the proton. Some VCS experiments have explored other observables, albeit in a less extensive way. Their results are summarized in this section.

Beam single-spin asymmetry

The beam single-spin asymmetry (beam SSA) in VCS was first introduced in Ref. [92], with a focus on the hard scattering regime. The main physics interest was to access non-trivial phases of QCD and to test the diquark model predictions.
The observable is the asymmetry (dσ + − dσ − )/(dσ + + dσ − ), where dσ + and dσ − are the photon electroproduction cross-sections with a longitudinally polarized electron beam of helicity + 1 2 and − 1 2 . The numerator, equal to Im(T VCS ) · Re(T VCS + T BH ), indicates that the beam SSA is proportional to the imaginary part of the VCS amplitude. Therefore one must go above the pion production threshold to access this asymmetry. The first term, Im(T VCS ) · Re(T VCS ), is purely due to VCS and measures the relative phase between longitudinal and transverse virtual Compton helicity amplitudes. The second term, Im(T VCS ) · Re(T BH ), is an interference term that measures the relative phases between the VCS and the BH amplitudes. In kinematics where BH dominates, this interference plays the role of an amplifier of the VCS contribution and enhances the asymmetry. Both terms of the numerator vanish at ϕ = 0 and 180 • , so one must go out-of-plane to access this asymmetry.

24 From P. Guichon, private communication. It can also be seen from the detailed LEX expression in, e.g., Ref. [2].

25 The angular variations of O(q ′ 2 cm ) DR in the figure seem to be quite different from left (low Q 2 ) to right (high Q 2 ), but some similarities in pattern are hidden by the choice of a unique color map scale.

The MAMI-II experiment [67,79] measured the beam SSA in the first resonance region, at W = 1.2 GeV, Q 2 = 0.35 GeV 2 , ϕ = 220 • and θ cm < 35 • . Here the physics goal was different, focused on testing the input of the DR model, i.e., the calculation of Im(T VCS ) entering the DR integrals. The beam SSA resulting from the measurement was of small magnitude (below 10%) and of rather limited precision. These data showed an overall good agreement with the DR calculation, which used the (γ ( * ) N → πN) multipoles of the MAID2003 analysis.
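Experimentally, a raw beam SSA is formed from helicity-sorted yields, with the standard binomial estimate of its statistical error. A minimal sketch with illustrative counts (and assuming, for simplicity, unit beam polarization):

```python
import math

def beam_ssa(n_plus, n_minus):
    """Raw single-spin asymmetry A = (N+ - N-)/(N+ + N-) from
    helicity-sorted counts, with binomial error sqrt((1 - A^2)/N)."""
    n = n_plus + n_minus
    a = (n_plus - n_minus) / n
    return a, math.sqrt((1.0 - a * a) / n)

# Illustrative yields (not real data): a 2% raw asymmetry on 10^5 events
a, sigma = beam_ssa(51_000, 49_000)
```

In practice the raw asymmetry is further divided by the beam polarization, which enlarges the error accordingly.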
The main finding was that, in the MAMI-II kinematics, the DR calculation had little sensitivity to the GPs, but had a good sensitivity to the two small longitudinal multipoles S 1+ and S 0+ in the (pπ 0 ) channel. Another finding was that the beam SSA in the (ep → epπ 0 ) channel, which was measured simultaneously to (ep → epγ), provided supplementary constraints for possible adjustments of these small multipoles. Indeed in these measurements the two channels are coupled, since Im(T VCS ) is connected to the πN multipoles by unitarity.

Double-spin asymmetry

The case of doubly polarized VCS has been first studied theoretically in Ref. [25], and formulated in its final shape in Ref. [2] after reducing the number of independent GPs from ten to six [16,17]. The observables, doubly polarized cross sections or asymmetries, need polarization on both the leptonic and hadronic sides. For the ( ep → e pγ) process corresponding to a longitudinally polarized beam and the measurement of the final proton polarization, the double-spin asymmetry has the expression [68]:

P i = [ dσ(h, s ′ i ) − dσ(h, −s ′ i ) ] / [ dσ(h, s ′ i ) + dσ(h, −s ′ i ) ] ,

where s ′ i is the projection of the final proton spin along the direction i = x, y or z in the c.m. (cf. Fig. 1), h is the beam helicity and dσ is a doubly polarized cross section. In contrast to the beam single-spin asymmetry, the double-spin asymmetry does not vanish below the pion production threshold. In this range of W , and in analogy with the unpolarized case, a low-energy theorem has been established for the polarized cross section difference ∆dσ i :

∆dσ i = ∆dσ i,BH+Born + Φ q ′ cm · ∆M 0i,NB + O(q ′ 2 cm ) .    (15)

In Eq. (15), ∆dσ i,BH+Born contains no GPs and is entirely calculable. The first-order polarizability term ∆M 0i,NB contains new combinations of the six lowest-order GPs, under the form of the structure functions P z LT , P ′ z LT and P ′ ⊥ LT (see the complete formulas in the Appendix).
Together with the three structure functions of the unpolarized case (P LL , P T T and P LT ), they form a set of six independent structure functions, which is equivalent to the set of the six independent GPs. Therefore, by measuring the three proton polarization components P cm i (i = x, y, z), this formalism opens up the possibility to disentangle all the lowest-order GPs, a perspective that looks of course very attractive. For convenience, three more structure functions are introduced: P ⊥ LT , P ⊥ T T and P ′ ⊥ T T , because they appear in the expression of P cm x and P cm y . They are simply linear combinations of the other structure functions (see the Appendix). Model calculations [25] predict large double-spin asymmetries in typical MAMI kinematics, where the dominant contribution comes from the (BH+Born) process and is modulated by a few-percent effect coming from the GPs. Double polarization observables were explored in the one-and-only MAMI-IV experiment, at kinematics essentially similar to the ones of MAMI-I. The beam was longitudinally polarized and a focal-plane polarimeter (FPP) was used to measure the recoil proton transverse polarization components (P f p x , P f p y ). High statistics were needed, because one had to cut away a majority of protons which scattered at too small an angle (< 9 • ) in the carbon analyzer of the FPP. A first analysis step consisted in fitting the c.m. polarizations P cm i (i = x, y, z) to the azimuthal distribution of events in the FPP. This analysis showed that only P cm x and P cm y could be adjusted, and that P cm z had to be fixed to its theoretical (BH+Born) value. The P cm y component was very small and almost all the new information was carried only by P cm x . A second step consisted in fitting individual GPs to the same FPP distribution as above. The fit utilized an unbinned likelihood method, in which the quantities ∆M 0i,NB (i = x, y) are replaced by their analytical content in terms of GPs.
Unfortunately this fit was inconclusive. As a third step, a more conclusive fit was achieved when the quantities ∆M 0i,NB (i = x, y) were replaced by their analytical content in terms of the structure functions. By fixing P ⊥ T T , P [...] The challenges of this experiment and the complexity of the analysis were clearly one step higher than in an unpolarized experiment. The measured double-spin asymmetry turned out to be less sensitive than expected to the GPs, probably because of insufficient statistics. In any case, it left the disentangling of the six lowest-order GPs as a far-reaching goal.

N → ∆ multipoles

MAMI-III [80] was the first experiment to achieve an exploration of the N → ∆ transition amplitudes through the photon channel. A first measurement of the dominant magnetic dipole amplitude in the transition was performed, and the result was found in excellent agreement with the corresponding result from the pion channel (see Fig. 22). That was an important step for the N → ∆ program, and the experience gained from these measurements provided guidance for planning the next measurement (MAMI-V) that would focus on the central part of this program, the quadrupole amplitudes in the transition. The MAMI-V experiment [82] achieved the first extraction of the N → ∆ Coulomb quadrupole amplitude through the VCS channel. The measurement of the azimuthal asymmetries proved very beneficial, as the systematic uncertainties were constrained to a level comparable to the statistical ones. The sensitivity of these measurements to the Coulomb quadrupole amplitude is exhibited in Fig. 13. The Coulomb quadrupole was measured at Q 2 = 0.20 GeV 2 , and the result CMR (VCS) = (−4.4 ± 0.8 stat ± 0.6 sys )% validated the pion channel world data, where the corresponding measurement is CMR = (−5.09 ± 0.28 stat+sys ± 0.30 model )% (see Fig. 23).
The results demonstrated that a good control of the model uncertainties was achieved, and gave further credence to the theoretical interpretation that the ∆(1232) resonance consists of a bare quark-gluon core and of a large pion-cloud contribution.

5 Spatial density interpretation of the generalized polarizabilities

As described in Sect. 3.1, the Q 2 dependence of the GPs allows one to probe the spatial deformations of the charge and magnetization densities, when the nucleon is subject to an external static electromagnetic field [18,91]. The formal connection between the GPs and the spatial densities of induced polarizations has been derived in Ref. [91]. In order to define proper spatial densities, i.e., with a true probabilistic interpretation without relativistic corrections, one should consider the VCS process in a symmetric light-front frame, where the direction of the average nucleon momentum P = (p + p ′ )/2 is taken as the ẑ axis and the momentum transfer to the nucleon ∆ µ = (p ′µ − p µ ) is purely transverse. In this frame, the transverse components of the virtual photon momentum q ⊥ , with Q 2 = |q ⊥ | 2 , are the conjugate variables to the transverse position b ⊥ , which measures the transverse distance from the (transverse) center of momentum [93,94]. In the following, we will consider the polarization vector ε ′ ⊥ of the outgoing photon corresponding to an applied electric field E ∼ iq ′ 0 ε ′ ⊥ that polarizes the charge distribution of the nucleon. Depending on the spin polarization of the nucleon, we have two different induced polarization vectors. They can be expressed in terms of GPs, and are functions only of the transverse photon momentum q ⊥ . Therefore, by a Fourier transform from q ⊥ to b ⊥ , they provide a map of the deformation of the charge density in transverse position space. The explicit relation between the GPs and the induced polarizations can be found in Ref. [91].
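For an azimuthally symmetric function of q ⊥ , the 2D Fourier transform to b ⊥ space reduces to a Bessel (Hankel) transform. A numerical sketch, under the assumption of a simple dipole q ⊥ profile (the 0.70 GeV mass scale is illustrative, not a fitted value), cross-checked against the analytic result for this particular shape:

```python
import numpy as np
from scipy.special import j0, k1
from scipy.integrate import quad

HBARC = 0.19733            # GeV.fm
LAM = 0.70 / HBARC         # illustrative dipole mass 0.70 GeV, in fm^-1

def profile(q):
    """Assumed dipole profile in q_perp = |q_perp| (fm^-1), normalized to 1 at q = 0."""
    return (LAM**2 / (q**2 + LAM**2))**2

def density(b):
    """2D Fourier transform of an azimuthally symmetric profile:
    rho(b) = (1/2pi) * int_0^inf dq q J0(q b) F(q)."""
    val, _ = quad(lambda q: q * j0(q * b) * profile(q) / (2 * np.pi),
                  0.0, np.inf, limit=200)
    return val

b = 0.5  # fm
rho_num = density(b)
# Analytic check for a dipole: rho(b) = Lam^3 * b * K1(Lam*b) / (4*pi)
rho_ana = LAM**3 * b * k1(LAM * b) / (4 * np.pi)
```

The actual induced-polarization maps of Ref. [91] involve the full GP combinations rather than this toy profile, but the transform from q ⊥ to b ⊥ proceeds in the same way.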
Figure 24 shows the induced deformation in transverse-position space for an unpolarized proton (left panel) and for a proton with the spin in the x̂ direction, aligned with the applied electric field (right panel), as calculated using the DR results for the GPs with the mass scale parameters Λ_α = 0.73 GeV and Λ_β = 0.63 GeV [10]. In the case of an unpolarized proton, the polarization density displays a dipole pattern in the direction of the applied field, mainly due to the contribution from the scalar GPs. The spatial extension at the nucleon periphery strongly depends on the mass scales and on the assumptions on the functional form of the asymptotic contributions. In the case of a proton transversely polarized in the x̂ direction, we observe a dipole deformation confined near the centre and, on top of that, a quadrupole pattern with pronounced strength around 0.5 fm due to the electric GP.

Conclusions and Outlook

We should mention that, by focusing on the reaction γ*N → Nγ, this review has covered only a part of the "polarizabilities' world". For the sake of completeness, it may be useful to recall that polarizabilities appear in many other contexts. Discussing this entire field is of course beyond the scope of the present review, and we restricted ourselves to the VCS process at low energy. VCS offers a rich theoretical and experimental playground that allows unique studies of the nucleon structure. The experiments conducted so far, in which data from MAMI have taken a prominent part, lead to a consistent picture of the electric and magnetic GPs of the proton in the Q² range ∼ 0–2 GeV². The data also raise questions, which will be addressed by the recent JLab experiment in the intermediate Q² range. We have presented at length how the many facets of the DR model can be used in experimental VCS analyses. At the highest Q² it probably remains the only approach to measure GPs with good precision.
We hope that more dedicated VCS experiments will come to life in order to fill the gaps in our knowledge and understanding of the nucleon GPs, including the spin GPs, which will be a new challenge to the skills of experimentalists and theorists.

In this section we present a panel of ongoing and future developments in the field of VCS, covering both theoretical and experimental aspects.

6.1 Theoretical front

The joint experimental and theoretical efforts of the last years have allowed us to identify a set of response functions that can be extracted from the Compton scattering process at different energy scales and in different kinematical conditions, and that have a clear interpretation in terms of structure properties of the nucleon. Low-energy Compton scattering provides information on global as well as spatially resolved electromagnetic properties of the nucleon in terms of static and generalized polarizabilities, and, with increasing energy, allows us to study the effects of the nucleon excitation spectrum through dynamical (energy-dependent) polarizabilities [96-98]. Furthermore, the variation of the initial photon virtuality Q² allows one to probe a wide range of distance scales, interpolating between hadronic degrees of freedom at low virtuality and partonic degrees of freedom at large virtuality. The unified description of the nucleon response functions in the whole Q² range is one of the main challenges for theoretical models. Progress in this direction has recently been made in Refs. [99,100], using a Dyson-Schwinger/Faddeev approach. Unfortunately, some of the diagrams contributing to the process in this approach are numerically too hard to calculate. So far, a practical solution has provided preliminary results only for the scalar GPs, within certain approximations which violate gauge invariance [101]. A new formal approach to the description of the Compton scattering process has recently been addressed in Refs. [102,103].
It can be viewed as a formalism that uses the same set of hadronic variables, the so-called Compton form factors, at large and low virtuality of the initial photon, and it has been suggested in [102] that it provides a unified framework for experimental studies of generalized parton distributions as well as generalized polarizabilities. Following this line, one could also explore the possibility of developing a unified dispersion relation formalism for the Compton form factors in different kinematical limits, connecting the existing DR approaches that deal separately with either generalized parton distributions (see, e.g., [104-106]) or polarizabilities [3,10]. Further progress in the DR approach to VCS at low energy could come from developing a subtracted DR formalism along the lines of the subtracted DR framework used in RCS [49,50]. By choosing the subtraction point at the polarizability point ν = 0 and t = −Q², one can write down the VCS amplitudes as the sum of s- and t-channel subtracted dispersion integrals, plus subtraction constants given in terms of the six leading-order GPs. The subtracted s-channel integrals can be evaluated through photo- and electro-production amplitudes, as described in this work, while the t-channel integrals can be saturated by ππ intermediate states in the t channel, γ*γ → ππ → N̄N. The input for the subprocess γ*γ → ππ can be taken, for example, from the recent dispersion analysis within a coupled-channel approach of Ref. [107], while the ππ → N̄N subprocess can be described as in the RCS case [50]. The main advantage of subtracted DRs for VCS is that all six GPs can be taken as free fit parameters to be adjusted to data. Furthermore, the model dependence introduced by the high-energy contributions to the dispersion integrals is considerably reduced.
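The mechanics of such a subtracted dispersion relation can be illustrated with a toy spectral function, our own choice of Im f(ν′) = 1/ν′² above a threshold ν₀ = 1 (which has a closed-form amplitude). The subtraction constant f(0) plus the subtracted integral reproduces the full amplitude, while the high-energy region is weighted by an extra power of 1/ν′², which is the sense in which the model dependence of the high-energy contribution is reduced:

```python
import math
from scipy.integrate import quad

NU0 = 1.0  # toy threshold (assumption)

def im_f(nup):
    # Toy spectral function, NOT a VCS amplitude.
    return 1.0 / nup**2

def f_exact(nu):
    # Closed form of the unsubtracted DR for this toy model (|nu| < NU0).
    return -math.log(1.0 - nu**2 / NU0**2) / (math.pi * nu**2)

f0 = 1.0 / (math.pi * NU0**2)  # the subtraction constant f(0) for this model

def f_subtracted(nu):
    # Once-subtracted DR: f(nu) = f(0) + (2 nu^2/pi) Int Im f(nu') dnu' / (nu' (nu'^2 - nu^2)).
    # Below threshold no principal value is needed.
    integrand = lambda nup: im_f(nup) / (nup * (nup**2 - nu**2))
    val, _ = quad(integrand, NU0, math.inf)
    return f0 + (2.0 * nu**2 / math.pi) * val

nu = 0.5
print(f_subtracted(nu), f_exact(nu))  # the two agree
```

Note that the subtracted integrand falls like 1/ν′⁵, versus 1/ν′³ for the unsubtracted one, so the dispersion integral is dominated by the low-energy region where the input is best constrained.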
6.2 Experimental front

6.2.1 Experimental access to spin GPs

Although the polarizability phenomenon in the spin-dependent sector is not underpinned by a simple and intuitive picture as in the scalar case, it is an essential piece of knowledge of nucleon structure that calls for measurements. Unfortunately, the nucleon spin GPs remain an essentially unexplored field so far; a few exploratory paths are outlined below. A first perspective is offered by the P_TT structure function, which is a combination of two spin GPs (cf. Eq. (6)). The advantage is that P_TT appears in the LEX and can in principle be disentangled from P_LL if one performs (unpolarized) measurements at several values of ε. The difficulty of such an ε-separation lies in the smallness of P_TT, according to the DR and covariant BChPT calculations of Fig. 16. Another strategy consists in combining unpolarized and doubly polarized observables in one experiment, at a single value of ε. With an unpolarized analysis yielding the structure function P_LL − P_TT/ε and a doubly polarized analysis yielding the structure function P_LT^⊥, which is another combination of P_LL and P_TT, one can in principle separate P_LL and P_TT. This method is discussed in Refs. [68,69] and was tried in the MAMI-IV experiment, but without significant results: the two correlation lines between P_LL and P_TT obtained in this experiment, from the unpolarized and polarized analyses, turned out to be almost identical, due to the choice of kinematics. Considerations on more optimal kinematics can be found in Ref. [68]. Lastly, possible developments of the DR model, as exposed in Sect. 6.1, potentially offer a new way to access spin GPs, by letting all six lowest-order GPs be free parameters. Experiments performed in the Delta resonance region could benefit from their higher sensitivity to GPs.
Similarly to RCS, observables using polarization degrees of freedom would probably need to be investigated in order to find optimal measurements in the spin sector.

6.2.2 The E12-15-001 experiment at JLab

The E12-15-001 experiment [84] at JLab completed its first phase of data taking recently (July 2019). The experiment utilized the SHMS and HMS spectrometers [108,109] in Hall C to detect, respectively, electrons and protons in coincidence, while the reconstructed missing mass is used for the identification of the photon. An electron beam of energy E = 4.55 GeV and a 10 cm liquid-hydrogen target were employed for the measurements. The experiment aims to explore the GPs in the range Q² = 0.3 GeV² to Q² = 0.75 GeV², in order to investigate the nontrivial evolution of α_E1(Q²) with the momentum transfer and to provide a precise measurement of β_M1(Q²). The experiment phase space covers the nucleon resonance region (see Fig. 25), and thus the DR analysis framework will be utilized for the extraction of the GPs from the measured cross sections and azimuthal asymmetries. For the low-Q² settings the electron spectrometer was placed at a small angle of ≈ 8°, and the relatively high singles rates, in conjunction with the large acceptance of the SHMS spectrometer, limited the beam current to about 30 µA for these settings. For the higher momentum transfer settings this limitation is relaxed, as one can easily run at double the beam current. The cross sections will be measured with a statistical uncertainty of about ±1.5%, while the systematic uncertainties will be the dominating factor, being roughly double the statistical ones. The uncertainties of the beam energy and of the spectrometer angles will introduce a systematic uncertainty to the cross section ranging from ±1% to ±2.5%, depending on the setting.
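The missing-mass identification of the photon mentioned above can be illustrated numerically. The sketch below builds one exclusive ep → epγ event (the beam energy is the quoted 4.55 GeV; all other four-vectors are invented for the example) and reconstructs the missing mass squared from the detected electron and proton alone:

```python
import numpy as np

M_P = 0.9383   # proton mass, GeV
E_BEAM = 4.55  # beam energy quoted for E12-15-001, GeV

def minv2(p):
    # Invariant mass squared of a four-vector (E, px, py, pz).
    return p[0]**2 - np.dot(p[1:], p[1:])

def boost(p, beta):
    # Boost four-vector p by velocity beta (3-vector).
    b2 = np.dot(beta, beta)
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = np.dot(beta, p[1:])
    E = gamma * (p[0] + bp)
    vec = p[1:] + ((gamma - 1.0) * bp / b2 + gamma * p[0]) * beta
    return np.concatenate([[E], vec])

# One exclusive e p -> e' p' gamma event (illustrative kinematics):
k = np.array([E_BEAM, 0.0, 0.0, E_BEAM])         # incoming electron (massless)
p = np.array([M_P, 0.0, 0.0, 0.0])               # target proton at rest
kp = np.array([1.0, 0.6, 0.0, 0.8])              # scattered electron, |k'| = 1 GeV

W4 = k + p - kp                                  # the p' + gamma system
W = np.sqrt(minv2(W4))
qstar = (W**2 - M_P**2) / (2.0 * W)              # photon momentum in that frame
gamma_rest = np.array([qstar, 0.0, qstar, 0.0])  # pick a decay direction
gamma_lab = boost(gamma_rest, W4[1:] / W4[0])    # boost to the lab
pp = W4 - gamma_lab                              # recoil proton

# Missing mass squared from the detected e' and p' only:
mm2 = minv2(k + p - kp - pp)
print(mm2, minv2(pp))  # ~0 (photon) and ~M_P^2 (on-shell recoil check)
```

In a real analysis mm2 is smeared by resolution, and a cut around zero selects the photon final state against the nearby π⁰ peak at m_π² ≈ 0.018 GeV².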
Other sources of systematic uncertainty involve the target density, detector efficiency, acceptance, and target cell background, each of which is expected to contribute about ±0.5%. Systematic uncertainties related to the target length, beam charge, dead-time corrections, and contamination of pions under the photon peak will also contribute, but to a smaller extent. The uncertainty due to the radiative corrections will be ±1.5%, while various parametrizations of the form factors will be utilized in the analysis. For the asymmetries, the systematic uncertainties are still larger than the statistical ones, but not as dominant as in the case of the cross sections; they are expected to be of the order of ≈ 1% in absolute asymmetry magnitude. The extraction of the GPs will be performed by a DR fit to the measured cross sections and azimuthal asymmetries. The primary source of uncertainty for both the electric and the magnetic GP will be systematic, while the statistical uncertainty for both GPs is expected to be ≈ 70% of the systematic one. In Fig. 26 the projected cross sections and asymmetries are presented for Q² = 0.65 GeV². The solid (red) and dashed (blue) curves correspond to a variation of the electric GP from α_E1(Q²) = 4.8 × 10⁻⁴ fm³ to α_E1(Q²) = 1.5 × 10⁻⁴ fm³, with β_M1(Q²) = 1.1 × 10⁻⁴ fm³ held fixed. A variation of β_M1(Q²) from 0.4 × 10⁻⁴ fm³ to 1.6 × 10⁻⁴ fm³ is presented by the two green curves (dotted and dash-dotted) in the cross section figures; the same variation of β_M1(Q²) is represented by the light-blue band in the asymmetry figure. One can observe that above θ_cm ≈ 160° the β_M1(Q²) variation affects both cross sections in a systematically similar way, and this is reflected as a cancellation of the effect in the azimuthal asymmetry (suppression of the light-blue band in the corresponding θ_cm range).
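As a rough illustration of how such an error budget combines, one can add the independent contributions in quadrature. This is a sketch with assumed mid-range values, not the experiment's actual error analysis:

```python
import math

# Illustrative error budget in percent of the cross section, using the
# contributions quoted in the text; the beam/angle term is an assumed
# mid-range value within the quoted 1.0-2.5% interval.
beam_and_angles = 1.8
radiative = 1.5
half_percent_sources = [0.5] * 4  # target density, efficiency, acceptance, cell background

total_sys = math.sqrt(beam_and_angles**2 + radiative**2
                      + sum(x**2 for x in half_percent_sources))
stat = 1.5
print(total_sys, total_sys / stat)  # systematics come out well above the statistics
```

With these inputs the systematic total is in the 2.5% range, consistent with the statement that the systematics are roughly double the ±1.5% statistical uncertainty.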
The projected measurements of α_E1(Q²) are presented in Fig. 27. The results will map the momentum-transfer signature of the two scalar GPs with high precision, and will offer a valuable cross-check of the MAMI measurements at Q² = 0.33 GeV².

Appendix

• Cross section and phase-space factor: The (ep → epγ) cross section is defined as: […]. In this expression, one can sort out an explicit factor q′_cm, due to the fact that (s − M_N²) is proportional to q′_cm. The remaining part of the phase-space factor,

Φ = (2π)⁻⁵ · (1/(64 M_N)) · (k′_lab/k_lab) · (1/(2√s)),

remains finite when q′_cm tends to zero.

• V_LL and V_LT coefficients: There are several notations in the literature for the coefficients in front of the structure functions in the LEX formula. Here we have used the following notations: […], where K₂, v₁, v₂ and v₃ are defined in Eqs. (98)-(100) of Ref. [2], and q̃_0cm is the virtual photon c.m. energy in the limit q′_cm → 0, given by:

q̃_0cm = M_N − √(M_N² + q_cm²).

• Structure functions at Q² = 0: The expression of the measured structure functions P_LL − P_TT/ε and P_LT at Q² = 0 in terms of the RCS polarizabilities is obtained by applying Eq. (6) for a real incident photon, together with Eq. (3) and using the "tilde" variables: […]

• VCS with double polarization: Three new structure functions appear in the first-order GP term of the doubly polarized cross section. The structure function P′_LT^⊥ is the only one containing the "sixth GP" P^(M1,L2)1, and it contributes […]

Table 4: Experimental results for the structure functions P_LL − P_TT/ε and P_LT using LEX and DR fits, ordered by increasing values of Q². The first error is statistical and the second one is systematic, except in the special cases noted in a footnote. The first four lines are values at Q² = 0 deduced from various fits of the RCS polarizabilities α_E1 and β_M1, converted to structure functions using the expressions of the Appendix.
Experiment (nomenclature) | Q² (GeV²) | ε | P_LL − P_TT/ε (GeV⁻²) | P_LT (GeV⁻²) | type of analysis
RCS 1 (a)                 | 0         |   | 81.0 ± 2.0 ∓ 2.7 ± 2.0 | −5.[…]      |
Affine Symmetry, Geodesics, and Homogeneous Spacetimes

We show that the conservation laws for the geodesic equation which are associated to affine symmetries can be obtained from symmetries of the Lagrangian for affinely parametrized geodesics according to Noether's theorem, in contrast to claims found in the literature. In particular, using Aminova's classification of affine motions of Lorentzian manifolds, we show in detail how affine motions define generalized symmetries of the geodesic Lagrangian. We compute all infinitesimal proper affine symmetries and the corresponding geodesic conservation laws for all homogeneous solutions to the Einstein field equations in four spacetime dimensions with each of the following energy-momentum contents: vacuum, cosmological constant, perfect fluid, pure radiation, and homogeneous electromagnetic fields.

1 Introduction

Homotheties of a metric define point symmetries of the Lagrangian for geodesics and define conservation laws for the geodesic equation via Noether's theorem [1], [2], [3], [4], [5]. In most cases the analytic tractability of a system of geodesic equations depends upon the existence of such conservation laws, or perhaps conservation laws associated with other geometric structures, e.g., Killing tensors. Affine symmetries are diffeomorphisms of a spacetime which preserve the affine connection (see, e.g., [6]). These include homotheties (and isometries) as a special case, but the group of affine transformations may include non-homothetic transformations. Affine symmetries which are not homotheties are called proper affine symmetries. Affine symmetries act as a transformation group on the space of solutions of the affinely parametrized geodesic equation [1], [3]. For this reason they are often called affine collineations. It has been known for some time that there are two conservation laws for the geodesic equation which are associated to each 1-parameter group of proper affine symmetries [2], [7].
This may be surprising since proper affine symmetries do not define point transformations which preserve the geodesic Lagrangian. Indeed these conservation laws have been characterized as "non-Noetherian" in [7], [8]. To some extent, the existence of these conservation laws has been understood in the context of modifications of the Lagrangian formalism such as found in [4] and in [7]. As we shall show here, one can directly apply Noether's theorem to the standard Lagrangian for affinely parametrized geodesics to obtain the two conservation laws associated to proper affine symmetries. To do this we use the fact that to account for all conservation laws of a system of Euler-Lagrange equations one must account for all generalized symmetries of the Lagrangian [9], [3], [4].¹ Generalized symmetries need not act as point transformations, but instead act as infinitesimal transformations on the infinite jet space of the dependent variables. These symmetries were introduced by Noether [10]; they generalize point symmetries and contact symmetries. We shall show that associated to each proper affine motion there are two generalized symmetries of the geodesic Lagrangian. Noether's theorem yields the corresponding conservation laws. Using Aminova's classification of affine symmetries [11] we explain in some detail how the infinitesimal transformations associated to affine symmetries manage to define generalized symmetries of the geodesic Lagrangian. The derivation of the geodesic conservation laws for affine symmetries from Noether's theorem is the principal result of our paper. A number of papers have found affine symmetries for various solutions of the Einstein equations, e.g., [12], [13], [14], [8]. Hall and da Costa [16] have given a classification (based upon holonomy groups) of possible affine symmetries which can occur in four-dimensional spacetimes. The possibilities for electrovacua in four dimensions have been examined in [18].
As a modest contribution to this body of work and as an illustration of our results, we calculate all continuous proper affine symmetries and corresponding conservation laws for all homogeneous solutions to the Einstein-matter field equations in four dimensions for each of the following energy-momentum contents: vacuum, cosmological constant, perfect fluid, pure radiation, and homogeneous electromagnetic field. To our knowledge the proper affine symmetries have not been exhaustively enumerated for all such solutions. In §2 we will review the fundamentals of affine symmetry and briefly review the results of Aminova's classification of continuous affine symmetries of Lorentzian manifolds in any dimension. In §3 we summarize the results we will need from the theory of generalized symmetries and conservation laws in the context of ordinary differential equations. We then show how affine symmetries define generalized symmetries of the geodesic Lagrangian. We apply Noether's theorem to obtain the corresponding conservation laws. Finally, in §4 we enumerate all homogeneous solutions of the Einstein equations (in four spacetime dimensions and with the matter content as listed above) along with all their infinitesimal proper affine symmetries and corresponding conservation laws.

¹ See reference [9] for a comprehensive exposition of generalized symmetries and Noether's theorem, applicable to PDEs and ODEs. See references [3], [4] for a geometric exposition of the theory of symmetries and conservation laws tailored to second-order ODEs, with applications to projective symmetries and conservation laws of the geodesic equation as well as to conservation laws associated to Killing tensors.

2 Affine Symmetry

In this section we review the fundamentals of affine symmetry transformations, with an eye on the applications to the geodesic equation and solutions to the Einstein equations given in the following sections. Let (M, g) be a pseudo-Riemannian manifold.
The metric uniquely determines a torsion-free affine connection ∇ from the condition ∇g = 0. A diffeomorphism φ : M → M is an affine symmetry if it preserves this connection, that is, for any tensor field T,

φ*(∇T) = ∇(φ*T). (2.1)

It is easy to check that homotheties (φ*g = c g, c = const.) are affine symmetries. Affine symmetries which are not homotheties will be called proper affine symmetries. The existence of a proper affine symmetry implies that the vector space of parallel symmetric (0,2) tensor fields has dimension greater than one, since h = φ*g is parallel: ∇h = 0. A 1-parameter group φ_λ of diffeomorphisms is an affine motion if it preserves the connection for each value of the parameter λ. In this case the definition (2.1) can be replaced with an infinitesimal condition involving the Lie derivative along the affine vector field Y on M generating the 1-parameter group:

L_Y Γ^a_{bc} = 0. (2.3)

This condition is equivalent to

∇_b ∇_c Y^a = −R^a_{cbd} Y^d, (2.4)

where R^a_{bcd} is the Riemann tensor and we are using the abstract index notation. The affine vector field therefore satisfies an over-determined system of linear partial differential equations of finite type. For a generic metric g there are no solutions. The maximum number of solutions is n(n+1), where n = dim(M). Homothetic vector fields, defined by

L_Y g = c g, c = constant, (2.5)

satisfy (2.4). An affine vector field which is not a homothetic vector field will be called a proper affine vector field. Proper affine vector fields come in equivalence classes: two proper affine vector fields belong to the same equivalence class if they differ by a homothetic vector field. In light of (2.3), a proper affine vector field Y defines a parallel symmetric (0,2) tensor field h not proportional to the metric via

h = L_Y g, ∇h = 0. (2.6)

To our knowledge the classification of affine motions is not complete except in the cases of Riemannian and Lorentzian manifolds. All affine motions of an irreducible Riemannian manifold are homotheties [15].
In the reducible case, the de Rham theorem decomposes a Riemannian manifold as a product of a flat manifold and irreducible Riemannian manifolds of dimension greater than one:

(M, g) = (M₀ × M₁ × ⋯ × M_r, g₀ ⊕ g₁ ⊕ ⋯ ⊕ g_r), (2.7)

where g₀ is flat and is the restriction of g to the submanifold tangent to the distribution of parallel vector fields. It follows that affine vector fields generate homotheties in each irreducible component. In the Lorentzian case, Aminova [11] has shown² that the preceding result holds, now corresponding to the de Rham-Wu decomposition [17], but a new possibility arises. A locally irreducible Lorentz manifold (M, g) may admit a proper affine vector field Y, but only if it admits a parallel null vector field k^a, with the affine vector field acting via (2.8). In this case there will exist coordinates (w, v, x^i), i = 1, 2, …, n−2, in which the metric takes the form (2.9) and, modulo the addition of a homothetic vector field, Y takes the form (2.10). In the Lorentzian case affine motions occur only when either of these two situations, (2.7) or (2.9) (or both), arises. The proper affine vector fields are then of 3 types: (I) homothetic vector fields for the irreducible subspaces, (II) vector fields acting as in (2.8), or (III) infinitesimal generators of linear "intermixing" transformations among the coordinates adapted to the parallel vector fields. In case III the affine vector fields can be put into the form (2.11), where w and v are defined in (2.9) and y, z, etc., denote coordinates on M₀ which rectify non-null parallel vector fields ∂_y, ∂_z, etc.

3 Affine Symmetries and Conservation Laws of the Geodesic Equation

In this section we will obtain the principal result of this paper: a derivation of the conservation laws associated to affine symmetries from Noether's theorem. To this end, we begin with some definitions from the geometry of differential equations and the calculus of variations, specialized to the case of one independent variable [9] (see also [3], [4]).

3.1 Preliminaries

Let C be the bundle of curves on M.
Let J be the infinite jet bundle of curves in M [19]. Using a coordinate chart x^α on U ⊂ M, and a parameter s for the curve, local coordinates on C are (s, x^α). A curve in M is then a cross section, x^α = u^α(s), of C. Local coordinates on J are denoted by (s, x^α, ẋ^α, ẍ^α, …). The cross section x^α = u^α(s) extends to a cross section of J via

s ↦ (s, u^α(s), u̇^α(s), ü^α(s), …).

Functions on J are denoted by F[x] = F(s, x^α, ẋ^α, ẍ^α, …). The total derivative,

D = ∂/∂s + ẋ^α ∂/∂x^α + ẍ^α ∂/∂ẋ^α + ⋯,

maps functions on J to functions on J and represents the "total time derivative" along a curve x^α = u^α(s) in the sense that

(DF)[u] = d/ds F[u].

The tangent space at a given point in J is spanned by ∂/∂s, ∂/∂x^α, ∂/∂ẋ^α, ∂/∂ẍ^α, …. In the calculus of variations, generalized vector fields,

v = a[x] ∂/∂s + b^α[x] ∂/∂x^α, (3.6)

correspond to infinitesimal variations of curves (3.7). Given a generalized vector field v, its infinite prolongation pr v is the extension to J given by (3.9). The prolongation of v describes the extension of the variation (3.7) to all derivatives of the curve, e.g., (3.10). From equation (3.9) it is clear that, in general, the infinitesimal transformation of a quantity involving n derivatives of the curve will involve derivatives of the curve of order greater than n. Consequently, the infinitesimal transformation defined by a generalized vector field requires the entire jet bundle for its definition. The corresponding transformation group on the set of curves (cross sections of C) is constructed by solving an auxiliary system of PDEs [9]. A restricted class of infinitesimal transformations is generated by vector fields which can be defined entirely on the bundle of curves C and generate a transformation group of C. These are the point transformations, which arise when the components of v only depend upon (s, x^α):

v = a(s, x) ∂/∂s + b^α(s, x) ∂/∂x^α.

A generalized vector field (3.6) with a[x] = 0 is called an evolutionary vector field.
The prolongation of an evolutionary vector field v_ev = σ^α[x] ∂/∂x^α takes the simple form:

pr v_ev = σ^α ∂/∂x^α + (Dσ^α) ∂/∂ẋ^α + (D²σ^α) ∂/∂ẍ^α + ⋯. (3.12)

In general, an evolutionary vector field v = σ^α[x] ∂/∂x^α defines an infinitesimal variation of a curve x^α = u^α(s) according to:

δu^α(s) = σ^α[u](s). (3.13)

The total derivative is associated to a generalized vector field ∂/∂s + ẋ^α ∂/∂x^α, whose infinite prolongation is:

pr(∂/∂s + ẋ^α ∂/∂x^α) = ∂/∂s + ẋ^α ∂/∂x^α + ẍ^α ∂/∂ẋ^α + ⋯ = D.

The prolongation of a generalized vector field (3.6) can always be decomposed into the sum of the prolongation of an evolutionary vector field and a total derivative:

pr v = pr v_ev + a[x] D, where v_ev = (b^α − a ẋ^α) ∂/∂x^α

is called the evolutionary representative of v. The evolutionary representative of a vector field generating a point symmetry is of the form

v_ev = (b^α(s, x) − a(s, x) ẋ^α) ∂/∂x^α

for some functions a(s, x) and b^α(s, x) on C. Let x^α = u^α(s) be a curve in U parametrized by s. This curve is an affinely parametrized geodesic if and only if it satisfies

ü^α + Γ^α_{βγ}(u) u̇^β u̇^γ = 0, (3.18)

where Γ^α_{βγ}(u) are the Christoffel symbols of the metric-compatible connection evaluated along the curve. The equations (3.18) are equivalent to the Euler-Lagrange equations of the Lagrangian L : J → R for affinely parametrized geodesics:

L = (1/2) g_{αβ}(x) ẋ^α ẋ^β. (3.19)

The equation of motion (3.18) and all its differential consequences define a submanifold E ⊂ J called the prolonged equation manifold. In the coordinates (s, x^α, ẋ^α, ẍ^α, …) on J the equations for E are

ẍ^α = −Γ^α_{βγ} ẋ^β ẋ^γ, together with all equations obtained from this one by repeated application of D. (3.20)

We now define a conservation law as a quantity built from any parametrized curve x^α = u^α(s) and its derivatives to any order which becomes independent of s when the curve satisfies the geodesic equation (3.18). (The conserved quantity may depend explicitly upon s.) This means that the conservation law is a function on J whose total derivative vanishes when evaluated on the prolonged equation manifold. A familiar example of a conservation law for the affinely parametrized geodesic equation is provided by the Lagrangian itself:

Q = L = (1/2) g_{αβ}(x) ẋ^α ẋ^β. (3.22)

We have

DL = g_{αβ} ẋ^α (ẍ^β + Γ^β_{γδ} ẋ^γ ẋ^δ),

which vanishes when evaluated on E as defined in (3.20).
Noether's theorem establishes a correspondence between conservation laws and symmetries of the Lagrangian, provided the notion of "symmetry" is as follows: a generalized vector field v = a[x] ∂/∂s + b^α[x] ∂/∂x^α is a symmetry of the Lagrangian L if there exists a function G[x] on J such that

pr v(L) + L Da = DG. (3.24)

Notice that the left hand side of (3.24) is just the Lie derivative of L along pr v, taking account of the fact that the Lagrangian is a density of weight one on the real line with coordinate s, or equivalently that L ds is a 1-form. A straightforward calculation establishes the following convenient result [9]: a generalized vector field is a symmetry of the Lagrangian if and only if its evolutionary representative is. The connection between symmetries and conservation laws, first proved by Noether, when specialized to first-order Lagrangians for ODEs is as follows [10], [20], [4], [9].

Theorem 1. If the evolutionary vector field v = σ^α[x] ∂/∂x^α is a symmetry, pr v(L) = DG, of the first-order Lagrangian L(s, x, ẋ), the conservation law is given by

Q = σ^α ∂L/∂ẋ^α − G.

For a general version of the theorem and proof, applicable to a general Lagrangian and suitable for ODEs or PDEs, see [9] (see also [4] in the ODE context). The conservation law defined by (3.22) can be obtained from Noether's theorem by virtue of the point symmetry generated by translations in s, v = ∂/∂s. It is a classical result that spacetime isometries define symmetries and conservation laws for geodesics. This can be seen as follows. A 1-parameter family of isometries φ_λ : M → M, φ*_λ g = g, λ ∈ R, of a spacetime (M, g) is generated by a Killing vector field ξ on M satisfying L_ξ g = 0. This vector field lifts to an evolutionary vector field,

v = ξ^α(x) ∂/∂x^α, (3.27)

which is a symmetry of the geodesic Lagrangian with G = 0; the corresponding conservation law is Q = g_{αβ} ξ^α ẋ^β. It is straightforward to verify directly that DQ = 0 on the prolonged equation manifold E (see (3.20)).

3.2 Symmetries and conservation laws associated to affine vector fields

We now turn to one of the principal results of this paper: affine motions define generalized symmetries of the Lagrangian for affinely parametrized geodesics.

Theorem 2. Let Y be an affine vector field on the pseudo-Riemannian manifold (M, g). Define h = L_Y g. The generalized vector fields v₁ and v₂ built from Y and h define generalized symmetries of the Lagrangian (3.19).

Proof. The proof goes by direct computation.
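The statement that DQ = 0 on the equation manifold can be checked numerically on a toy example. The sketch below, our own illustration using geodesics on the unit two-sphere rather than a spacetime, integrates the geodesic equation and verifies that both the Lagrangian and the Killing charge associated with ξ = ∂/∂φ stay constant along the solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesics on the unit two-sphere, ds^2 = dtheta^2 + sin^2(theta) dphi^2
# (a Riemannian toy example chosen for simplicity).
def geodesic(s, y):
    th, ph, dth, dph = y
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph**2,             # theta'' = sin(th) cos(th) phi'^2
            -2.0 * np.cos(th) / np.sin(th) * dth * dph]   # phi''   = -2 cot(th) theta' phi'

y0 = [1.0, 0.0, 0.3, 0.7]  # initial (theta, phi, theta', phi'); stays away from the poles
sol = solve_ivp(geodesic, [0.0, 10.0], y0, rtol=1e-10, atol=1e-12)

th, ph, dth, dph = sol.y
L = 0.5 * (dth**2 + np.sin(th)**2 * dph**2)  # the Lagrangian, conserved along geodesics
Q = np.sin(th)**2 * dph                      # Killing charge for xi = d/dphi
print(L.max() - L.min(), Q.max() - Q.min())  # both tiny (integration error only)
```

Both conserved quantities are constant up to the integrator tolerance, illustrating the two standard Noether charges before the new ones associated with proper affine motions are introduced.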
We use the defining relations for an affine vector field and for h; combining all these relations reveals in each case that the symmetry condition (3.24) is satisfied. The existence of the two conservation laws for the geodesic equation associated to an affine motion [2], [7] now follows from an application of Noether's theorem.

Corollary. Let Y be an affine vector field on the pseudo-Riemannian manifold (M, g). Define h = L_Y g. The following quantities are conservation laws for the affinely parametrized geodesic equation: […]

Proof. This follows from Theorem 2 and Theorem 1 applied to v₁ and v₂, with the function G given by −(1/2) h_{αβ}(x) ẋ^α ẋ^β and (1/2) s h_{αβ} ẋ^α ẋ^β, respectively. It can also be verified directly by computing DQ in each case and checking that this derivative vanishes on the prolonged equation manifold E by virtue of ∇h = 0.

Theorem 2 proves that affine motions define generalized symmetries of the Lagrangian for affinely parametrized geodesics. In the following we shall show in some detail how this occurs via the classification of affine symmetries due to Aminova [11]. Recall that an affine vector field either generates a 1-parameter group of homotheties or a 1-parameter group of proper affine symmetries. The proper affine motions have been classified in [11] and define bona fide generalized symmetries via Theorem 2. In the case of homotheties, the vector fields in Theorem 2 reduce to evolutionary representatives of point symmetries because h = g in this case. Let us begin by explicitly describing the point transformations associated to homothetic vector fields. If the affine vector field Y generates a 1-parameter group of homotheties φ_t, the corresponding point transformation Φ^(1)_t : C → C generated by v₁ = ẋ^α ∂/∂x^α in Theorem 2 is given by a translation of s:

Φ^(1)_t(s, x^α) = (s + t, x^α). (3.46)

Of course, the Lagrangian 1-form L ds, with L given in (3.19), has manifest symmetry under translation in s. The point transformation Φ^(2)_t in (3.47) corresponds to the homothetic mapping of M onto itself along with a rescaling of the affine parameter.
This transformation is also a symmetry of L ds: the homothety has the effect of rescaling the metric in (3.19), which is compensated by the rescaling of s. In the special case where Y generates an isometry, v₁ vanishes while v₂ reduces to the lift (3.27) of the infinitesimal generator of the isometry to C. We now explain in detail how proper affine motions of Lorentz manifolds yield the generalized symmetries displayed in Theorem 2. According to reference [11], if Y is a proper affine vector field then (I) it acts by homothety on irreducible components in a de Rham-Wu decomposition (2.7), and/or (II) it acts via (2.8) on an irreducible Lorentzian component, and/or (III) it acts by the intermixing transformations generated by vector fields of the form (2.11). We now examine each of these cases. In case (I), in coordinates adapted to the product, the Lagrangian decomposes according to (2.7),

L = L₀ + L₁ + ⋯ + L_r, (3.48)

where L_i is the geodesic Lagrangian built from g_i, and the action of the symmetry is to rescale each of the metrics g_i, i = 1, …, r, with a corresponding rescaling of the affine parameter, as discussed above when a homothety of the entire metric was considered; see (3.47). The action of the symmetry on the first term in (3.48) is as follows. Introduce coordinates which rectify a basis of parallel vector fields, so that the flat metric takes the form

g₀ = Σ_a ε_a (dx^a)², ε_a = ±1.

The proper affine vector fields can be chosen to take a simple linear form in these coordinates; one then finds (3.52), and the corresponding portion of the Lagrangian transforms by a total derivative. In case (II) it is convenient to use the coordinates x^α = (v, w, x^i) introduced in (2.9), (2.10).
The Lagrangian (3.19) is given in such coordinates by The infinitesimal symmetry transformation is generated by Finally, in case (III), aside from irreducible components and their affine symmetries of type (I), we have a metric of the form (3.59) The intermixing transformations are of the form The infinitesimal transformations with, for example, A = 1 are given by The Lagrangian

Affine Motions and Conservation Laws for Homogeneous Solutions of the Einstein Equations

A number of authors have found affine motions for various solutions of the Einstein equations, e.g., [12], [13], [14], [8]. Besides the classification of affine motions due to Aminova [11], Hall and da Costa used a classification of holonomy groups to determine which affine symmetries may occur in a four-dimensional spacetime [16]. The possibilities for electrovacua in four dimensions have been examined in [18]. In this section we calculate all proper affine motions which arise for homogeneous solutions of the Einstein field equations. This provides a complete characterization of proper affine motions for all homogeneous solutions of the Einstein equations with matter content given by vacuum, Einstein, perfect fluid, and electromagnetic field. We also give the corresponding conservation laws for affinely parametrized geodesics. Homogeneous spacetimes admit a transitive group of isometries. All homogeneous solutions to the Einstein equations are known for the following cases: vacuum ($T_{ab} = 0 = \Lambda$); Einstein ($T_{ab} = 0$, $\Lambda \neq 0$); homogeneous perfect fluids, pure radiation, and homogeneous electromagnetic fields. The 4-velocity, radiation vector, and electromagnetic field satisfy $\mathcal{L}_{\xi} V = 0$, $\mathcal{L}_{\xi} k = 0$, $\mathcal{L}_{\xi} F = 0$, where $\xi$ is any Killing vector field, $\mathcal{L}_{\xi} g = 0$. All these solutions can be found in reference [21]. Using the DifferentialGeometry package [22] we have calculated all the affine vector fields and first integrals for this class of solutions. The analysis has two parts.
First, one directly solves the equations arising from infinitesimal invariance of the Christoffel symbols in the given coordinate chart: It is straightforward to extract from the solution space of (4.5) a basis for the set of proper affine vector fields (modulo homotheties). Second, the results are checked against the dimension of the vector space of solutions to (4.5), which can be computed a priori as follows. The linear system of equations (2.4) determining Y is of finite type, so that all second and higher order derivatives of Y are determined by Y and its first derivatives. The dimension of the vector space of solutions in a neighborhood of a point p ∈ M can be determined by successively differentiating equation (2.4) and expressing the result in terms of Y and ∇Y at p. This defines a system of linear equations for the n(n + 1) dimensional vector space of data Y (p) and ∇Y (p). If r is the rank of this linear system, then the vector space of solutions to (2.4) is of dimension n(n + 1) − r. In the following, we list the results of this analysis for all homogeneous solutions of the Einstein equations with matter content as described above. We follow the enumeration of these solutions as given in reference [21]. We present the line element of the metric, a basis for the vector space of proper affine vector fields (modulo homothetic vector fields), and the corresponding conservation laws for geodesics, calculated according to Corollary 3.2.
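A minimal numerical illustration of the Corollary (a sketch, not taken from the paper): in flat space, $Y = x\,\partial_x$ is a proper affine vector field with $h = \mathcal{L}_Y g = 2\,dx \otimes dx$ and $\nabla h = 0$, and the two standard conserved quantities attached to an affine motion, $h_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}$ and $g_{\alpha\beta}Y^{\alpha}\dot{x}^{\beta} - \tfrac{s}{2}h_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}$ (our reconstruction of the Q's, up to normalization), can be checked directly along a geodesic:

```python
import numpy as np

# Flat 2D metric g = dx^2 + dy^2: geodesics are straight lines, and
# Y = x d/dx is a proper affine (non-Killing) vector field with
# h = L_Y g = 2 dx (x) dx, which is covariantly constant (nabla h = 0).

x0, y0 = 0.7, -1.3   # initial position on the geodesic
u, w = 0.4, 1.1      # constant velocity components

s = np.linspace(0.0, 10.0, 101)   # affine parameter samples
x = x0 + u * s                    # geodesic x(s); y(s) plays no role in the Q's

# Conserved quantity 1: h(xdot, xdot) = 2 u^2 (constant because nabla h = 0).
Q1 = 2.0 * u**2 * np.ones_like(s)

# Conserved quantity 2: g(Y, xdot) - (s/2) h(xdot, xdot) = x*u - s*u^2.
Q2 = x * u - s * u**2

print(float(Q2.max() - Q2.min()))  # ~0: Q2 is constant along the geodesic (= x0*u)
```

The exact cancellation of the $s$-dependence in $Q_2$ is the flat-space shadow of the general Noether argument above.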
Metric density results for the value distribution of Sudler products

We study the value distribution of the Sudler product $P_N(\alpha) := \prod_{n=1}^{N}\lvert2\sin(\pi n \alpha)\rvert$ for Lebesgue-almost every irrational $\alpha$. We show that for every non-decreasing function $\psi: (0,\infty) \to (0,\infty)$ with $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} = \infty$, the set $\{N \in \mathbb{N}: \log P_N(\alpha) \leq -\psi(\log N)\}$ has upper density $1$, which answers a question of Bence Borda. On the other hand, we prove that $\{N \in \mathbb{N}: \log P_N(\alpha) \geq \psi(\log N)\}$ has upper density at least $\frac{1}{2}$, with remarkable equality if $\liminf_{k \to \infty} \psi(k)/(k \log k) \geq C$ for some sufficiently large $C>0$.

Introduction and statement of results

For $\alpha \in \mathbb{R}$ and $N$ a natural number, the Sudler product is defined as $P_N(\alpha) := \prod_{n=1}^{N}\lvert 2\sin(\pi n \alpha)\rvert$. This product was first studied by Erdős and Szekeres [12]. Later, Sudler products appeared in many different areas of mathematics that include, among others, Zagier's quantum modular forms and hyperbolic knots in algebraic topology [3,8,24], restricted partition functions [23], KAM theory [17] and Padé approximants [18]. Furthermore, they were used in the solution of the Ten Martini Problem [5]. Note that by 1-periodicity of $P_N(\alpha)$ and the fact that $P_N(\alpha) = 0$ for rational $\alpha$ and $N$ sufficiently large, it suffices to consider irrational numbers $\alpha \in [0, 1]$. In [12], it was proven that

(1) $\liminf_{N \to \infty} P_N(\alpha) = 0, \qquad \limsup_{N \to \infty} P_N(\alpha) = \infty$

holds for almost every $\alpha$, raising the question of whether this holds for all irrationals $\alpha$. Lubinsky [19] showed that (1) remains true for all $\alpha$ that have unbounded partial quotients. On the other hand, Grepstad, Kaltenböck and Neumüller showed in [13] that $\liminf_{N \to \infty} P_N(\varphi) > 0$ for $\varphi$ being the Golden Ratio, answering the question negatively. This counterexample was extended in [4,15] to certain quadratic irrationals that have only particularly small partial quotients.
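For concreteness, $\log P_N(\alpha)$ can be evaluated directly from the definition; a short illustrative script (not part of the paper), using the Golden Ratio, for which $\liminf_{N\to\infty} P_N(\varphi) > 0$ by [13]:

```python
import math

def log_sudler(N: int, alpha: float) -> float:
    """log P_N(alpha) = sum_{n=1}^{N} log|2 sin(pi n alpha)|."""
    return sum(math.log(abs(2.0 * math.sin(math.pi * n * alpha)))
               for n in range(1, N + 1))

phi = (1 + math.sqrt(5)) / 2  # Golden Ratio
for N in (10, 100, 1000, 10000):
    print(N, log_sudler(N, phi))  # stays bounded below for the Golden Ratio
```

Running the same routine for an $\alpha$ with unbounded partial quotients exhibits the large fluctuations described in (1).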
For more results in this area, we refer the reader to [14] and the references therein. The asymptotic behaviour of the Sudler product depends delicately on the size of the partial quotients of $\alpha$. Since very much is known about the Diophantine properties of almost all irrationals, many results have been obtained in the metrical setting. Note that after taking logarithms, we see that $\log P_N(\alpha) = \sum_{n=1}^{N} f(n\alpha)$ is a Birkhoff sum for the irrational rotation with $f(x) = \log\lvert 2\sin(\pi x)\rvert$, having a logarithmic singularity. For a general overview of Birkhoff sums in similar settings, we refer the reader to the survey [11]. Lubinsky and Saff [20] proved that for almost all $\alpha$, we have $\lim_{N \to \infty} \frac{\log P_N(\alpha)}{N} = 0$. Subsequently, Lubinsky [19] improved this result and obtained a divergence/convergence result as is typical in metric Diophantine approximation: under a regularity condition (see [19] for the precise requirements), he showed that for a positive, non-decreasing function $\psi$ with $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} < \infty$, almost all $\alpha$ satisfy

(2) $\lvert\log P_N(\alpha)\rvert \ll \psi(\log N)$

(where $\ll$ denotes the usual Vinogradov symbol, see Section 2.1 for a proper definition). On the other hand, if $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} = \infty$, then both inequalities

(3) $\log P_N(\alpha) \leq -\psi(\log N), \qquad \log P_N(\alpha) \geq \psi(\log N)$

hold for infinitely many $N$. These statements also follow from a more refined result obtained by Aistleitner and Borda [3], who showed that for all $\alpha$ whose partial quotients fulfill $(a_1 + \dots + a_K)/K \to \infty$, we have (4). In a recent work, Borda [9] proved several results on the value distribution of Sudler products, both for badly approximable irrationals and for almost all $\alpha$. In the latter context, he improved (3) in the sense that the inequalities in (3) hold on sets of positive upper density (Theorem A). The proof relies on (4) and the variance estimate which is shown to hold for infinitely many $M \in \mathbb{N}$. Additionally, Borda makes use of the "reflection principle" of Sudler products, which will also play a main role in this paper.
This principle was observed by [4] and used in the subsequent literature on Sudler products several times. We state it here in the form of [3, Propositions 2 and 3]: for any irrational $\alpha$ and $0 \leq N < q_K$ (where $q_K$ denotes the denominator of the $K$-th convergent of $\alpha$, see Section 2.2 for a proper definition), we have (7). In particular, (7) implies that for almost all $\alpha$, the values $\log P_N(\alpha)$, $N = 1, \dots, q_K$, distribute symmetrically around the center $\log q_K$, which is however of negligible order for almost all $\alpha$. Hence, the numbers $1 \leq N < q_K$ lie approximately as often in (5) as in (6). Borda remarked in [9] that the estimate on the upper density in Theorem A is probably not optimal, saying that it might be possible that the union of (5) and (6) has upper density 1. Here we prove something even stronger: we show that already (6) on its own has upper density 1. The symmetry around the negligible center $\log q_K$ discussed above leads to the belief that (5) has the same upper density as (6). Surprisingly, this turns out to be wrong: we prove that if $\psi$ is as in Theorem 1 and additionally fulfills a certain regularity condition, (5) has upper density 1/2 for almost every $\alpha$.

Remarks on Theorems 1 and 2 and further research.

• Note that the divergence criterion $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} = \infty$ is invariant under multiplication with constant factors. Therefore, it suffices to show Theorem 1 and the first part of Theorem 2 for the sets (5) and (6) with $\psi(\log N)$ substituted with $C_1 \cdot \psi(C_2 \log N)$, where $C_1, C_2 > 0$ are arbitrary constants. We will make use of this fact several times in the subsequent proofs without explicitly stating it.
• By (2), we see that the assumption $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} = \infty$ is essential, as otherwise the upper density is trivially zero. Note that also "upper density" cannot be replaced by "lower density": for $\psi(k) \geq (12V/\pi^2 + \varepsilon)k \log k$, where $V$ is the constant from Theorem A, even the union of (5) and (6) has lower density zero (see [9, Theorem 7]).
It is interesting to find the minimal growth rate of $\psi$ such that the sets (5), (6) or their union have non-zero lower density.
• Note that even in the case when the regularity condition $\liminf_{k\to\infty} \psi(k)/(k \log k) \geq C$ is not satisfied, Theorem 2 gives an improved lower bound in comparison to Theorem A. Our approach relies on the fact that for almost every irrational, the trimmed sum of its first $k$ partial quotients is bounded from above by $k \log k$, with the largest partial quotient dominating the sum infinitely often. Therefore, we only need to control the Ostrowski coefficient of the largest partial quotient (see Section 3 for an overview). It remains open how far the regularity condition from Theorem 2 can be relaxed such that the upper density of (5) is still 1/2 for almost every $\alpha$. Below we show that $\psi$ has to fulfill $\psi(k) \geq (1/2 - \varepsilon)k$ infinitely often for arbitrarily small $\varepsilon > 0$. This can be deduced in the following way from [9, Theorem 9]: the theorem states (among other results) that for any $t \geq 0$, where $\lambda$ denotes the 1-dimensional Lebesgue measure. By Chebyshev's inequality, we obtain that for any $\varepsilon, y > 0$, Applying Fatou's Lemma, we get that on a set of measure at least $c(10\pi\varepsilon^2 y) > 0$, holds for infinitely many $M$. This implies that the upper density of the corresponding set is bounded from below by $1 - y$, so choosing $y < \frac{1}{2}$, we can deduce that for $\psi(k) \leq (1/2 - \varepsilon)k$, the upper density of (5) being 1/2 fails to hold on a set of positive measure. However, it remains open whether having $\psi(k) \geq k/2$ is already sufficient to deduce upper density 1/2 for almost all $\alpha$. Similarly, it is interesting whether there is some threshold function where the upper density of the set in (5) jumps from 1/2 to 1 for almost every $\alpha$ (and if so, how fast does this function grow?), or whether the value of the upper density attains a fixed number strictly between 1/2 and 1 for certain functions $\psi$ and almost every irrational.

Notation and preliminary results
Given a real number $x \in \mathbb{R}$, we write $\lVert x \rVert = \min\{|x - k| : k \in \mathbb{Z}\}$ for the distance of $x$ from its nearest integer.

Continued fractions. In this subsection, we shortly recall all necessary facts about the theory of continued fractions that are used to prove Theorems 1 and 2. For a more detailed introduction, we refer the reader to the classical literature, e.g. [1,21,22]. Every irrational $\alpha$ has a unique infinite continued fraction expansion $[a_0; a_1, \dots]$ with convergents $p_k/q_k = [a_0; a_1, \dots, a_k]$ that fulfill the recursions $p_k = a_k p_{k-1} + p_{k-2}$ and $q_k = a_k q_{k-1} + q_{k-2}$. For shorter notation, we will just write $p_k, q_k, a_k$, although these entities depend on $\alpha$. We know that $p_k/q_k$ approximates $\alpha$ very well, which leads to the following well-known inequalities for $k \geq 1$: from where we can deduce that Using (8), we obtain that Fixing an irrational $\alpha = [a_0; a_1, \dots]$, the Ostrowski expansion of a non-negative integer $N$ is the unique representation $N = \sum_{\ell=0}^{K-1} b_\ell q_\ell$ with admissible digits $b_\ell$.

Metrical results. Much is known about the almost sure behavior of continued fraction coefficients and convergents. Below we state all known properties of almost every $\alpha$ that are used during the proofs of Theorems 1 and 2. Corollary 3. Let $\psi$ be a non-decreasing, positive function such that $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} = \infty$. Then for almost every $\alpha$, there exist infinitely many $K \in \mathbb{N}$ such that the following hold. a) $\psi(K) < a_K < K^2$. b) $\sum_{\ell=1}^{K-1} a_\ell \ll K \log K$ with an absolute implied constant.

Heuristic behind the proofs

We start by sketching the heuristic idea behind the proof of Theorems 1 and 2. This can be compared with [2, Section 2.1]. Starting with Theorem 1, note that we can assume without loss of generality that $\psi(k)/(k \log k) \to \infty$, since this implies the statement also for slower-growing $\psi$. Let $\psi$ and $K$ be as in Corollary 3 and let $N < q_K$ be arbitrary with Ostrowski expansion $N = \sum_{\ell=0}^{K-1} b_\ell q_\ell$. We use the usual decomposition of $P_N(\alpha)$ into certain shifted Sudler products.
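The convergent recursions and the Ostrowski expansion can be sketched numerically (an illustrative script, not from the paper; the greedy construction automatically yields admissible digits $b_\ell \leq a_{\ell+1}$):

```python
from math import floor, sqrt

def cf_denominators(alpha, K):
    """Partial quotients a_1..a_K and denominators q_0..q_K,
    using q_k = a_k q_{k-1} + q_{k-2} with q_{-1} = 0, q_0 = 1."""
    a, q = [], [0, 1]               # q holds q_{-1}, q_0, q_1, ...
    x = alpha - floor(alpha)
    for _ in range(K):
        x = 1.0 / x
        ak = floor(x)
        a.append(ak)
        q.append(ak * q[-1] + q[-2])
        x -= ak
    return a, q[1:]                 # q[1:] = [q_0, ..., q_K]

def ostrowski(N, q):
    """Greedy Ostrowski digits b_ell with N = sum_ell b_ell * q_ell."""
    b = [0] * len(q)
    for ell in range(len(q) - 1, -1, -1):
        b[ell], N = divmod(N, q[ell])
    return b

phi = (1 + sqrt(5)) / 2             # Golden Ratio: all a_k = 1
a, q = cf_denominators(phi, 10)
print(a)   # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(q)   # -> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]  (Fibonacci numbers)
```

Since $N < q_{\ell+1}$ at each step of the greedy loop, the quotient $b_\ell = \lfloor N/q_\ell \rfloor$ never exceeds $a_{\ell+1}$, which is exactly the standard admissibility constraint.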
This approach was first used in the special case of $\alpha$ being the Golden Ratio in [13] and was made more explicit and general in subsequent works in this area, e.g. [3,4,14,15,16]. Defining the perturbations $\varepsilon_\ell(N)$ as in (15), ignoring first their contribution, and using the approximation $P_{q_\ell}(\alpha, (-1)^\ell x/q_\ell) \approx \lvert 2\sin(\pi x)\rvert$ elaborated later, we see that By the choice of $K$ as in Corollary 3, the value $a_K$ dominates the sum $\sum_{\ell=0}^{K-1} a_\ell$. So using $\log\lvert 2\sin(\pi x)\rvert \leq \log 2$ and assuming that

(17) $\int_0^{b_{K-1}/a_K} \log\lvert 2\sin(\pi x)\rvert\, dx$

is bounded away from 0, we have that $\log P_N(\alpha) \ll -a_K$, provided that the integral in (17) is negative. It is easy to see that this is the case if and only if $b_{K-1}/a_K < \frac{1}{2}$, which leads to

(18) $\log P_N(\alpha) \ll -\psi(K)$,

which is equivalent to $\log P_N(\alpha) \ll -\psi(\log N)$ for most $N$; this implies Theorem 1. By the same reasoning, we can immediately deduce that at least 50% of all numbers $N < q_K$ fulfill (18). Using the reflection principle, we see that also (19) is fulfilled for about 50% of all numbers $N < q_K$, hence the first part of Theorem 2 follows immediately. For the equality in the case $\liminf_{k\to\infty} \psi(k)/(k \log k) \geq C$, we fix some integer $q_{K-1} \leq M < q_K$ (this $K$ does not in general fulfill the properties of Corollary 3), and show that asymptotically, at most 50% of all $N < M$ can fulfill $\log P_N(\alpha) \gg \psi(K)$. Defining $a_{\ell_0} = \max_{\ell \leq K} a_\ell$, we can argue similarly to before that for $C$ sufficiently large and $\log N \gg \log q_K$, in order to fulfill $\log P_N(\alpha) \geq \psi(\log N)$, we have the necessary condition $b_{\ell_0 - 1}(N)/a_{\ell_0} > 1/2$, which can be seen to be fulfilled by at most 50% of all $N < M$. Hence, no matter how we choose $M \in \mathbb{N}$, at most half the numbers $N < M$ fulfill (19), so the upper density of (5) cannot exceed 1/2. The punchline why the upper densities of (5) and (6) differ is the following: on the full period $1 \leq N \leq q_K$, there are about as many elements in (5) as in (6), and for $a_K$ being large, almost all elements are in one of those sets.
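The sign condition on the integral in (17) is easy to check numerically: $\int_0^{c}\log\lvert 2\sin(\pi x)\rvert\,dx$ is negative for $0 < c < 1/2$ and vanishes at $c = 1/2$. A midpoint-rule sketch (illustrative, not from the paper):

```python
import math

def integral(c, n=100_000):
    """Midpoint-rule approximation of the integral of log|2 sin(pi x)|
    over [0, c]. The log singularity at x = 0 is integrable, and the
    midpoint rule never samples the singular point itself."""
    h = c / n
    return h * sum(math.log(2.0 * math.sin(math.pi * (k + 0.5) * h))
                   for k in range(n))

print(integral(0.25))  # negative: the integral is < 0 for c < 1/2
print(integral(0.5))   # approximately 0: the integral vanishes at c = 1/2
```

The vanishing at $c = 1/2$ reflects the classical identity $\int_0^1 \log(2\sin \pi x)\,dx = 0$ together with the symmetry of $\sin \pi x$ about $x = 1/2$.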
The criterion whether $N$ is in (5) or in (6) is (almost) equivalent to whether $b_{K-1}(N) > a_K/2$ or not. As $b_{K-1}$ is the most significant coefficient for the size of $N$ (since $b_{K-1}(M) < b_{K-1}(N)$ implies $M < N$), we see that all elements in (6) appear before the elements in (5), causing the asymmetric result.

Remark. Note that all estimates in this paper only consider upper bounds. This makes the analysis much easier, since we can ignore the singularities of the function $\log\lvert 2\sin(\pi x)\rvert$ at $x = 0$ and $x = 1$, as we trivially bound $\log\lvert 2\sin(\pi x)\rvert \leq \log 2$ from above. The reflection principle provides the tool to use the upper bounds also to achieve Theorem 2, without having to consider those singularities.

4.1. Preparatory results for the approximation errors. In this section, we discuss the actual errors that are made by comparing $\log P_N(\alpha)$ with $a_K \int_0^{b_{K-1}/a_K} \log\lvert 2\sin(\pi x)\rvert\, dx$ (see Lemma 7). The first step in this direction is done by [2, Proposition 12]. For the convenience of the reader, we state it below as Proposition 4.

Proposition 4. Let $N = \sum_{\ell=0}^{K-1} b_\ell q_\ell$ be the Ostrowski expansion of a non-negative integer and $\varepsilon_\ell(N)$ as in (15). There exists a universal constant $C > 0$ such that for any $\ell \geq 1$ with $b_\ell \geq 1$, the sums of $\log\lvert 2\sin(\pi(b q_\ell \delta_\ell + \varepsilon_\ell(N)))\rvert$ can be controlled in terms of $V_\ell$, where $V_\ell$ denotes a modified cotangent sum built from $\sin(\pi n \delta_\ell / q_\ell)$ and $\cot(\pi(n(-1)^\ell p_\ell + x)/q_\ell)$.

We see that we need to find upper bounds on the modified cotangent sums $V_\ell$. This is done by the following variant of [2, Lemma 8].

Proof. The statements in (i) are proven in [2, Lemma 8]. For (ii), we use the estimate $\lvert V_k'(x)\rvert \ll \frac{1}{(1 - |x|)^2}$, which is also shown in [2]. The result now follows immediately after integration.

Next, we turn our attention to controlling the size of the perturbations $\varepsilon_\ell(N)$. It is easy to see that $-1 < \varepsilon_\ell(N) < 1$ for any $1 \leq \ell \leq K - 1$. By Lemma 5, we see that the error made by $V_\ell(b q_\ell \delta_\ell + \varepsilon_\ell(N))$ is particularly large when its argument is close to its singularities at $-1$ and $1$.
The following proposition aims to bound the arguments away from those singularities and to show that the perturbation $\varepsilon_\ell(N)$ is small if $a_{\ell+1}$ is large, which will be the case in the main term (see Section 3).

Proposition 6. Let $\varepsilon_\ell(N)$ be defined as in (15) and $b_\ell \geq 1$. Then we have the following inequalities: (i) (ii) with the implied constants being absolute.

The following lemma combines the preparatory results from above. It contains the main ingredients of the proof of both Theorems 1 and 2.

4.2. Proof of Theorem 1. We can assume without loss of generality that $\lim_{k\to\infty} \psi(k)/(k \log k) = \infty$, as showing this will imply the statement of Theorem 1 also for slower-growing $\psi$. Applying Corollary 3, we know that there exist infinitely many $K$ such that (28) holds. Fixing an arbitrarily small $\delta > 0$, we define for every $K \geq 1$ that fulfills (28) the set $\mathcal{M}_K$. Choosing $K$ sufficiently large, we have by (14) that the required comparison holds for all $N \in \mathcal{M}_K$; it thus suffices to show that for each $N \in \mathcal{M}_K$ the desired estimate holds. We apply Lemma 7 with $\ell_0 = K$ and obtain the following. Note that we have $\varepsilon_{K-1}(N) = 0$ and $b_{K-1}(N) \leq (\frac{1}{2} - \delta) a_K$, so since $\log\lvert 2\sin(\pi x)\rvert$ is monotonically increasing on $[0, 1/2]$, we have for some $c = c(\delta) > 0$ that

$\sum_{b=0}^{b_{K-1}-1} \log\lvert 2\sin(\pi(b q_{K-1}\delta_{K-1} + \varepsilon_{K-1}(N)))\rvert \leq a_K \int_0^{b_{K-1}/a_K} \log\lvert 2\sin(\pi x)\rvert\, dx \leq -c\, a_K \ll -\psi(K),$

which completes the proof.

4.3. Proof of Theorem 2. By the proof of Theorem 1, we can deduce that By the reflection principle (7), we see that at most one of the inequalities $\log P_N(\alpha) \leq -2\psi(K)$, $\log P_{q_K - N - 1}(\alpha) \leq -2\psi(K)$ can be fulfilled, hence there is equality in (29). Applying the reflection principle a second time implies which finishes the proof of the first part of Theorem 2. To show equality in the case where $\liminf_{k\to\infty} \psi(k)/(k \log k) \geq C$, let $q_{K-1} \leq M < q_K$ be an arbitrary integer and let $a_{\ell_0} = \max_{\ell \leq K} a_\ell$. We define the sets
Single beam atom sorting machine

We create two overlapping one-dimensional optical lattices using a single laser beam, a spatial light modulator and a high numerical aperture lens. These lattices have the potential to trap single atoms, and using the dynamic capabilities of the spatial light modulator may shift and sort atoms to a minimum atom-atom separation of $1.52 \mu$m. We show how a simple feedback circuit can compensate for the spatial light modulator's intensity modulation.

Introduction

Individual neutral atoms trapped in optical dipole traps present a feasible approach to the construction of a quantum computer [1,2], as well as providing a versatile platform for direct investigation of quantum phenomena [3][4][5]. The capability to manipulate and rearrange the relative position of individual atoms in the optical dipole traps is crucial to the above fields, and has been the subject of intense research. Different approaches to performing this manipulation include the use of acousto-optic modulators (AOMs) to create an 'atom sorting machine' [6], using a spatial light modulator (SLM) to create dynamic atom traps [7][8][9] and using piezo-controlled mirrors [10] to change the position of trapped atoms. In particular, the atom sorting machine [6] represented an important step in scientists' ability to manipulate the microscopic world. In this setup, atoms are trapped in the antinodes of two 1-D crossed optical standing waves [11]. The atom trapping antinodes are shifted through the use of AOMs, and trapped neutral atoms can be shifted and sorted to within 10 µm of each other. The interatomic distance of 10 µm is limited by the size of the beam waist of each of the overlapping 1-D optical standing waves. The atom sorting machine, together with the recently demonstrated neutral atom Rydberg gate [12,13], are key components towards the development of a neutral atom based quantum computer [14]. However, as shown in Refs.
[12,13], ∼ 4 µm is the interatomic distance needed in order to create the neutral atom Rydberg gate. Introducing high numerical aperture lenses into the experiment described in Ref. [6] could decrease the waists of the overlapping 1-D optical standing waves, and therefore the interatomic distance, by up to an order of magnitude, but 4 lenses would be needed to achieve this, severely restricting the optical access in such a setup. Holograms, on the other hand, have the potential to create arbitrarily shaped optical dipole traps. Therefore one could create two 1-D lattices with the interatomic distance required for a Rydberg gate, with holograms, laser light and an appropriate single lens. Static and dynamical holograms can be created with spatial light modulators [15]. The use of certain SLM modules can introduce an intensity fluctuation to a diffracted light beam [16]. A fluctuating trapping beam intensity shifts a trapped atom's resonances [17], and therefore it is difficult to resonantly address such an atom. Furthermore, the resulting time dependent trapping potential can lead to heating and possibly loss of the trapped atom. We recently demonstrated a high-efficiency method of loading individual atoms into an optical micro-trap [18]. In that experiment a large number of atoms are first loaded into a microscopic optical trap, whereupon they are irradiated with laser light to induce light-assisted collisions. The energy gained through these collisions allows atoms to escape the trap, until only one atom remains trapped.

Fig. 1. a) The effect of far off resonant light on the energy levels of a two level atom. b) The spatially dependent energy levels resulting from a light field from a Gaussian beam, in which the atom sees the ground state as a potential well, and may become trapped if its energy is small enough.

Our demonstrated loading
efficiency of 83% allows for scaling the number of singly occupied micro-traps beyond a few, but sorting is still required for larger systems. We used blue-detuned light for fluorescence detection, together with a variant of Sisyphus cooling, to image the trapped atoms [19]. In this paper we demonstrate how we use analytical holograms produced with an SLM, and a single high numerical aperture lens, to create two 1D crossed optical standing waves. Here the antinodes of the standing waves can be used as a single beam atom sorting machine capable of achieving an interatomic separation of 1.52 µm. We show how, with the use of a simple feedback loop, the undesirable intensity noise created by the SLM can be decreased by 90%.

A. Trapping atoms with light

Neutral atoms can be spatially confined in red-detuned optical dipole traps. When a two level atom is in a spatially dependent far off resonant light field, its ground and excited state energy levels experience a light shift $\Delta E$ equal to [17]

$\Delta E_{\pm} = \pm \frac{3\pi c^2}{2\omega_0^3} \frac{\Gamma}{\Delta} I(\mathbf{r}),$

where $\omega_0$, $\Gamma$, $\Delta$, $I(\mathbf{r})$ are the transition angular frequency of the two level atom, the decay rate of the excited atom, the detuning between the light angular frequency and the atomic transition angular frequency, and the spatially dependent intensity of the light field, respectively. The $\pm$ represents whether the energy level has been shifted up or down. For far off resonance red detuned light the ground state energy level is shifted down, and the excited energy level is shifted up. A spatially dependent field, such as that of a Gaussian beam, produces a ground state potential well, $U(\mathbf{r}) = \Delta E_{-}(\mathbf{r})$, in which an atom can be trapped. The potential of this atom trap is proportional to the local intensity of the laser beam, as in the schematic in Fig. 1b.
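Since $U(\mathbf{r}) \propto I(\mathbf{r})$, the depth of a trap formed at the focus of a Gaussian beam follows directly from the light-shift expression. A sketch using the standard far-detuned two-level formula; all parameter values are illustrative (roughly Rb-like) and not taken from this paper:

```python
import math

# Standard far-detuned two-level light shift:
# U = (3*pi*c^2 / (2*omega0^3)) * (Gamma/Delta) * I
c = 2.99792458e8                 # speed of light, m/s
lambda0 = 780e-9                 # transition wavelength, m (illustrative)
omega0 = 2 * math.pi * c / lambda0
Gamma = 2 * math.pi * 6.07e6     # natural linewidth, rad/s (illustrative)
Delta = -2 * math.pi * 1.0e12    # red detuning (negative), rad/s (illustrative)

P = 1.0e-3                       # beam power, W (illustrative)
w = 1.0e-6                       # beam waist at the focus, m (illustrative)
I0 = 2 * P / (math.pi * w**2)    # peak intensity of a Gaussian beam, W/m^2

def U(r):
    """Trap potential (J) at transverse radius r in the focal plane."""
    I = I0 * math.exp(-2 * r**2 / w**2)
    return (3 * math.pi * c**2 / (2 * omega0**3)) * (Gamma / Delta) * I

kB = 1.380649e-23
print("trap depth (mK):", -U(0.0) / kB * 1e3)  # red detuning -> U < 0, a well
```

With red detuning ($\Delta < 0$) the potential is negative everywhere and deepest at the focus, which is exactly the potential well sketched in Fig. 1b.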
Such optical dipole traps can be formed in various ways, for example as the focus of a tightly focused laser beam [20], or arrays of dipole traps can be formed as the antinodes of a standing wave of two counter-propagating laser beams [6]. Fig. 2. Schematic of the setup. Light projected onto the SLM is diffracted and spatially filtered before being imaged through a high numerical aperture lens. The resulting image is viewed using the microscope. The point "(x)" is the space where a λ 2 waveplate and PBS is added to calibrate the SLM. To compensate for the intensity flicker of the SLM, the power of a small part of the diffracted beam is measured, passed through an electronic feedback system, and fed back into the AOM. Figure 2 is a schematic of the setup we built to investigate the use of an SLM as a single beam atom sorting machine with a sub-micron lattice constant. A laser beam red-detuned from 780 nm is passed through an AOM, the first order diffracted beam is passed through a polarizer and coupled into a polarization maintaining fiber. The power in the beam is controlled through the AOM driver. To trap neutral atoms, one generally needs a strong intensity gradient, and therefore a high intensity light field [18]. However, here we use a very low power beam during investigations in order to not saturate the imaging camera. The outcoupled beam from the fiber has a Gaussian profile with waist of 5 mm, and power of 15 nW. To ensure a pure polarisation, we pass this beam through a λ 2 waveplate and polarizing beamsplitter (PBS). A subsequent λ 2 waveplate is used to orientate the polarization of light for the SLM. We use a Holoeye Pluto Phase Only Reflective Modulator. The SLM face is 16.6 × 10.2 mm, with 1920 × 1080 8 µm pixels. The SLM takes its input signal from the green color channel of the graphics card in the computer to which it is connected. 
The SLM can respond to 256 gray-scale levels displayed on the computer monitor, and therefore each pixel can impart 256 different phase levels to an incident laser beam. The SLM has a refresh rate of 60 Hz. The SLM is tilted at a small angle to direct the reflection and diffracted beams away from the input optics. We use a blazed hologram to diffract the incident beam, and its first order is used to create the atom-sorting machine. We image the diffracted beam with a two-lens, 1× telescope configuration. The reflected beam is spatially filtered with an iris at the focus of both 300 mm plano-convex lenses within the telescope, where the reflected and diffracted beams are well separated (Fig. 2). A beamsplitter directs a portion of the spatially filtered beam onto a photodiode for use in a feedback system to minimize the intensity flicker produced by the refresh rate of the SLM, as described below. The spatially filtered beam is then focused through a high numerical aperture lens (NA = 0.55), identical to the one used for single atom trapping in Ref. [18]. The light structures produced are imaged with a 100× magnification microscope, which has an NA = 0.7 objective lens, onto a charge coupled device (CCD) camera.

C. SLM calibration

We use blazed holograms on the SLM to form the dipole trap structures. The main advantage of this approach is the high diffraction efficiency one can obtain (up to 83% [21]). To achieve high diffraction efficiency, the phase response of the SLM must be calibrated for use in a given experiment. A linear response between the SLM's 256 gray-scale inputs and a 0 − 2π phase shift is desired. But the phase added to incident light is dependent on the wavelength of the laser and the angle of the SLM to the incoming beam. The SLM has a Look Up Table (LUT) that stores the input-grayscale-to-phase conversion. To calibrate the LUT to the input angle and light wavelength used, we used a method inspired by Ref.
[22], in which we interfere a beam phase-modulated by the SLM with a beam that receives no phase-modulation from the SLM. We begin with a linearly polarized beam traveling in the $\hat{z}$ direction; in complex notation its electric field is given by: where $E_0$ is the field amplitude, $\omega$ is the angular frequency, $k$ is the wave number and $\hat{x}$, $\hat{y}$ are the unit polarization vectors. The SLM will only phase modulate incident light of a certain polarization. Here the modulation axis is the $\hat{x}$ axis. Therefore after being reflected by the SLM, the electric field of the beam is: where $\theta$ is the phase added to the component of the beam with $\hat{x}$ polarization. $\theta$ is dependent on the LUT value for that particular gray scale level. Passing this beam through a λ/2 waveplate, with fast axis at an angle of π/8 with respect to the $\hat{x}$ polarization axis, creates the following electric field: This beam passes through a PBS, and the intensity from the output arm that reflects the component of light of $\hat{x}$ polarization is: We play a "movie" on the SLM: a sequence of 256 frames, corresponding to the 256 available gray scale levels, at a rate of 4 frames per second, and in each frame all pixels identically display one of the 256 levels available to them. We record the intensity of light, I(θ), in one of the output arms of the PBS. This measured intensity changes as each frame of the movie changes, and from Eq. 6 one can deduce the phase added for each particular gray scale level associated with each frame. From these intensity measurements a new LUT is created that will produce a linear phase response, and this LUT is written to the SLM. This simple inline method means that to calibrate the SLM, we only need to place a λ/2 waveplate, a PBS and a photodetector into the beam reflected from the SLM, at the point marked "(x)" in the existing setup in Fig. 2.
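The LUT-building step can be sketched in code. The displayed equations (2)–(6) are not reproduced above; for a phase-modulated component interfering with an unmodulated one through a λ/2 waveplate and a PBS, the fringe plausibly takes the form $I(\theta) = I_{\max}\cos^2(\theta/2)$, which is the model assumed here. The native phase response is simulated, and all names are illustrative:

```python
import numpy as np

# Simulated calibration "movie": assume a nonlinear native phase response
# of the SLM to its 256 gray levels (a hypothetical curve, for illustration).
levels = np.arange(256)
native_phase = 2 * np.pi * (levels / 255.0) ** 1.4

# Measured fringe, under the assumed model I(theta) = I_max * cos^2(theta/2).
I_max = 1.0
measured = I_max * np.cos(native_phase / 2.0) ** 2

# Invert the fringe for the phase. arccos of the non-negative square root
# covers theta in [0, pi] only, so the branch after the intensity minimum
# (theta = pi) is reflected, using that the phase increases with gray level.
theta_raw = 2.0 * np.arccos(np.sqrt(np.clip(measured / I_max, 0.0, 1.0)))
k_min = int(np.argmin(measured))
theta = theta_raw.copy()
theta[k_min:] = 2.0 * np.pi - theta_raw[k_min:]

# New LUT: for each target phase on a linear 0..2*pi ramp, pick the gray
# level whose recovered phase is closest.
target = 2 * np.pi * levels / 255.0
lut = np.abs(theta[None, :] - target[:, None]).argmin(axis=1).astype(np.uint8)

# Displaying lut[g] instead of g linearizes the response:
print(float(np.max(np.abs(native_phase[lut] - target))))  # small residual, rad
```

In the real procedure the `measured` array comes from the photodiode while the 256-frame movie plays; everything after that line is the same inversion and nearest-phase lookup.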
The method is convenient as it calibrates the SLM in situ for the correct angle and wavelength it is intended to be used for within its future experiments. Changing the incident angle or wavelength of the laser beam will decrease the diffraction efficiency of the SLM. The optical components used for calibration can be taken out of the setup very easily, without affecting the beam incident angle on the SLM. After this calibration we can diffract 79% of incoming light for 780 nm, close to the maximum 83% possible [21] for the default wavelength of 633 nm. D. Minimizing SLM flicker The Holoeye SLM used in this experiment has a refresh rate of 60 Hz. The Liquid Crystal (LC) pixels themselves are addressed at the addressing rate of 120 Hz. Due to the binary nature of the addressing scheme, the liquid crystals are continuously moving, contributing to a phase shift error at the addressing rate [23]. This adds a flicker to the intensity of the diffracted beam of up to ±28%. The measured intensity is displayed as the blue line in Fig. 3. When trapping atoms in optical dipole traps formed from an SLM, an intensity modulation will lead to a corresponding modulation of the trapping potential and the atomic resonances (U (r) ∝ I), making such trapped atoms difficult to probe and investigate [16]. To minimize this flickering effect we direct a small portion of the first order beam from the SLM onto a photodiode, and the resulting signal passes through a simple electronic feedback loop, to the AOM driver, as in Fig. 2. The red line in Fig. 3 is the resulting measured intensity, with the amplitude of the noise reduced by 90%. Instead of a feedback circuit, one could feed the correct waveform to the AOM in order to completely cancel the flicker. However in our experiments we will dynamically change the holograms on the SLM, and with different holograms, different intensity modulations exist, a real time feedback circuit is needed to minimize the intensity modulations. A. 
Static Dipole Traps Using this setup, a single dipole trap can be formed by having a blazed grating cover the entire SLM: the diffracted beam retains the incoming beam's spatial profile. Once this is focused through the high-NA lens, it creates a single dipole trap with a full-width-half-maximum (FWHM) size of 0.76 ± 0.04 µm. The blazed grating has a period of 20 pixels. A pixel on the SLM whose corresponding pixel in the gray-scale image is black adds a 0 radian phase shift to incident light; similarly, white corresponds to a 2π phase shift. To produce the array of traps in Fig. 4a, two sections of the beam incident on the SLM are diffracted, corresponding to the light incident on the two sections of blazed grating in the hologram. The non-diffracted part of the beam is spatially filtered, and the diffracted parts of the beam are focused by the high-NA lens. These two parts of the diffracted beam interfere once they are focused, forming the standing wave in the horizontal direction in Fig. 4b. By changing the layout of the gray-scale image, and therefore the hologram, one can produce an array of dipole traps in the vertical direction, as in Fig. 4d. The lattice constant can also be changed by changing the distance between the two blazed sections of the SLM hologram. We have obtained a sub-micron lattice constant of 0.76 ± 0.04 µm, limited by the numerical aperture of the aspheric lens. The error in this measurement is due to the pixel size of the CCD camera. The minimum transverse waist of the lattices in Fig. 4 for this setup is 1.01 ± 0.04 µm (the full-width-half-maximum of the horizontal lattice in the vertical direction). This waist defines how many traps will overlap (two in this case) if we project a lattice in the vertical and one in the horizontal direction on top of each other, as in Fig. 4f.
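The hologram layouts described above can be sketched as gray-scale images. The panel resolution, strip widths and gap below are illustrative values rather than the actual parameters of the setup; only the 20-pixel blaze period and the black = 0, white = 2π convention are taken from the text.

```python
import numpy as np

H, W = 1080, 1920              # SLM resolution (assumed full-HD panel)
period = 20                    # blaze period in pixels, as in the text

x = np.arange(W)
# Sawtooth phase ramp 0..2*pi, quantized to gray levels:
# gray 0 (black) = 0 rad, gray 255 (white) = 2*pi.
blaze_phase = 2 * np.pi * (x % period) / period
blaze_gray = np.round(255 * blaze_phase / (2 * np.pi)).astype(np.uint8)

# Single trap: the blazed grating covers the whole panel.
single_trap = np.tile(blaze_gray, (H, 1))

# Horizontal lattice: two blazed strips separated by an unmodulated gap;
# the two diffracted beams interfere at the focus of the high-NA lens.
two_strip = np.zeros((H, W), dtype=np.uint8)
strip_w, gap = 300, 400        # strip width / separation in pixels (illustrative)
x0 = W // 2 - gap // 2 - strip_w
x1 = W // 2 + gap // 2
two_strip[:, x0:x0 + strip_w] = single_trap[:, x0:x0 + strip_w]
two_strip[:, x1:x1 + strip_w] = single_trap[:, x1:x1 + strip_w]
```

Increasing `gap` plays the role of increasing the distance between the two blazed sections, which shrinks the lattice constant at the focus.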
This ultimately limits the atom-atom separation that we can obtain when using this setup to rearrange and sort single atoms. The analytical holograms were created using MATLAB software, requiring less processing time than creating holograms by taking numerical Fourier transforms of desired light patterns. B. Dynamic optical lattices for atom sorting Trapped neutral atoms will remain in moving dipole traps if these traps move adiabatically [6,24,25]. We create moving dipole traps by dynamically changing the holograms that produced the static dipole traps above. If a phase shift is introduced into one of the blazed gratings in the gray-scale image in Fig. 4a, the resulting array of dipole traps is shifted along the lattice. However, the overall position of the entire array does not move, only the relative position of the dipole traps within the array envelope. A relative phase change of 2π between the blazed gratings moves the lattice by one lattice constant. Therefore, by incrementally changing the relative phase of one of the blazed gratings in Fig. 4a, we produce a horizontal conveyor belt capable of moving atoms. The conveyor belt can drive in either direction by reversing the direction of the relative phase shift. Similarly, atoms can be shifted up and down in a vertical conveyor belt by changing the phase of one of the blazed gratings in Fig. 4c relative to the other. The atom-sorting machine described in Ref. [6] sorts atoms in both order and position by having overlapping vertical and horizontal lattice patterns that can move independently of each other. By using the vertical conveyor belt to extract atoms from the horizontal lattice, followed by a shift of the horizontal lattice and a reinsertion, atoms can be repositioned in the horizontal lattice pattern. This can be reproduced using an SLM by applying a hologram similar to that in Fig.
4e, where the relative phase is shifted between the left and right blazed gratings to drive the horizontal conveyor belt. Independently shifting the relative phase between the top and bottom blazed gratings drives the vertical conveyor belt. However, following this procedure, a complex interference pattern is produced in the overlap of the two lattices, and an atom will be lost when attempting to transport it through this central interference. To overcome this, when driving the horizontal conveyor belt, we change the absolute phase of both the left and right blazed gratings in the hologram. Specifically, an incremental phase is added to the left blazed grating whilst simultaneously being subtracted from the right blazed grating; the phases of the top and bottom gratings are held constant. This is what we would expect for a moving interfering lattice system: in order to maintain constructive interference between the antinodes of the vertical lattice and the moving horizontal lattice, the relative phase between horizontal antinodes and vertical antinodes must not change. A similar procedure is used when operating the vertical conveyor belt. Figure 5 shows a series of images of optical dipole traps created with the SLM, with colored spots representing atoms superimposed on the images to illustrate the conveyor-belt protocol. Figure 5a shows where two atoms may originally be loaded into the horizontal optical lattice. In Figs. 5b-d the vertical lattice is turned on, one of the atoms is moved to the center of the horizontal optical lattice using the horizontal optical conveyor belt, and it is then extracted using the optical conveyor belt in the vertical direction. In Figs. 5e-g the horizontal optical lattice is then moved, the atom is reinserted and the vertical optical lattice is turned off. This shows how atoms could be sorted to within 1.52 µm (two lattice constants) of each other.
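The phase-stepping rule above can be illustrated with a toy calculation: for two interfering beams, the fringe (trap) position shifts by one lattice constant for every 2π of relative phase between them. The step count and the equal-and-opposite update are illustrative choices.

```python
import numpy as np

a = 0.76          # lattice constant in micrometres (measured value from the text)
n_steps = 64      # number of incremental hologram updates (illustrative)
dphi = 2 * np.pi / n_steps

phi_left, phi_right = 0.0, 0.0
positions = []
for _ in range(n_steps):
    # Push the lattice one way: advance the left strip, retard the right one,
    # leaving the phases of the vertical gratings untouched.
    phi_left += dphi / 2
    phi_right -= dphi / 2
    rel = phi_left - phi_right
    # Fringe (trap) position of the two-beam interference pattern.
    positions.append(a * rel / (2 * np.pi))
```

After a full 2π of accumulated relative phase, the lattice has moved exactly one lattice constant; reversing the signs of the two updates drives the belt the other way.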
Our scheme to sort atoms has both advantages and disadvantages over the previously demonstrated atom-sorting machine [6]. The overlap between horizontal and vertical lattice patterns, set by the waist of the lattice patterns, ultimately limits the minimum achievable distance between atoms after sorting. For example, in [6] the minimum distance between sorted atoms is 10 µm. Because we use a high-NA lens, we are able to achieve relatively small waists in the lattice patterns. Our measured minimum lattice waist is 1.01 ± 0.04 µm, which, when overlapped by a perpendicular lattice pattern, gives a minimum overlap of two lattice constants (1.52 ± 0.08 µm). This is the minimum atom-atom spacing achievable after sorting with this setup, comparing favorably to the interatomic distance of ∼4 µm needed in order to create the neutral-atom Rydberg gate [12,13]. However, the number of lattice sites in [6] is limited only by the size of their vacuum chamber and the wavelength of the lattice pattern. Diffraction aberrations limited our setup to the creation of a 1-dimensional lattice with up to 28 resolvable sites. Moreover, fewer than 28 atoms could be sorted in this case, as atoms originally trapped on the edges of the lattice will be lost as the conveyor belt moves to shift atoms. Conclusions We have used a spatial light modulator, a high numerical aperture lens and a single laser beam to create dynamical sub-micron dipole traps which may be used to trap neutral atoms, with the potential to manipulate the atoms' spatial positions. We can create overlapping horizontal and vertical lattices of atom-trapping potentials with a sub-micron lattice constant. By dynamically changing the SLM hologram we can change the position of the lattice potentials, thereby creating a single-beam scheme that may be used for moving and sorting individual atoms, with a potential minimum atom-atom separation of 1.52 µm.
The intensity 'flicker' (a product of the digital nature of the SLM) is compensated with a simple feedback circuit and an acousto-optic modulator. Acknowledgements This work is supported by the New Zealand Foundation for Research, Science and Technology (NZ-FRST) Contract No. NERF-UOOX0703 and a University of Otago Research Grant.
68Ga-PSMA-PET/CT for the evaluation of liver metastases in patients with prostate cancer Background The purpose of this study was to evaluate the imaging properties of hepatic metastases in 68Ga-PSMA positron emission tomography (PET) in patients with prostate cancer (PC). Methods 68Ga-PSMA-PET/CT scans of PC patients available in our database were evaluated retrospectively for liver metastases. Metastases were identified using 68Ga-PSMA-PET, CT, MRI and follow-up scans. Different parameters including maximum standardized uptake values (SUVmax) of the healthy liver and liver metastases were assessed by two- and three-dimensional regions of interest (2D/3D ROI). Results One hundred three liver metastases in 18 of 739 PC patients were identified. In total, 80 PSMA-positive (77.7%) and 23 PSMA-negative (22.3%) metastases were identified. The mean SUVmax of PSMA-positive liver metastases was significantly higher than that of the normal liver tissue in both 2D and 3D ROI (p ≤ 0.05). The mean SUVmax of PSMA-positive metastases was 9.84 ± 4.94 in 2D ROI and 10.27 ± 5.28 in 3D ROI; the mean SUVmax of PSMA-negative metastases was 3.25 ± 1.81 in 2D ROI and 3.40 ± 1.78 in 3D ROI, significantly lower than that of the normal liver tissue (p ≤ 0.05). A significant (p ≤ 0.05) correlation between SUVmax in PSMA-positive liver metastases and both the size (ρSpearman = 0.57) of metastases and the PSA serum level (ρSpearman = 0.60) was found. Conclusions In 68Ga-PSMA-PET, the majority of liver metastases highly overexpress PSMA and are therefore directly detectable. For the analysis of PET images, it has to be taken into account that a significant portion of metastases can only be detected indirectly, as these metastases are PSMA-negative. Background Worldwide, prostate cancer (PC) is considered the second most frequently diagnosed cancer in men and the fifth leading cause of cancer death [1].
Recently, radiolabeled prostate-specific membrane antigen (PSMA) ligands such as 68 Ga-PSMA-HBED-CC have been introduced as promising radiotracers for the PET imaging of PC [2]. PSMA is a transmembrane protein that is significantly overexpressed in most prostate cancer cells [3]. Different studies demonstrated that 68 Ga-PSMA-PET enables imaging with a higher specificity and sensitivity regarding the detection of metastases, compared to current standard imaging (CT, MRI and bone scintigraphy) and other PET tracers such as 18 F-choline [4][5][6][7]. It also improves the detection of metastatic lesions at low serum PSA levels in biochemically recurrent prostate cancer [8]. The liver is considered to be the third most common site for systemic metastases in PC (25%), after bone (90%) and lung (46%), according to autopsy studies [9]. The prevalence of clinical liver metastases in retrospective studies was 4.3 and 8.0% [10,11]. Liver metastases typically occur in systemic, late-stage, hormone-refractory disease [10]. However, there are reports of patients with liver metastases as the first site of metastatic disease and of the liver representing the only metastatic site [10,12,13]. Especially in this patient collective, early and reliable detection of liver metastases is of high clinical importance for accurate staging and therapy planning. There is evidence that in PC, liver metastases are frequently associated with neuroendocrine characteristics; in a prospective study of 28 patients with liver metastases, Pouessel et al. measured increased levels of the neuroendocrine serum markers chromogranin A and neuron-specific enolase in 84 and 44% of the patients, and out of six patients with a pathological analysis, two had neuroendocrine metastases [10]. Neuroendocrine trans-differentiation might lead to a loss of PSMA expression and therefore impede the visualization of liver metastases in 68 Ga-PSMA-PET [14].
Furthermore, the relatively high background activity of the liver might also affect the visibility of liver metastases in 68 Ga-PSMA-PET [14]. Imaging of hepatic PC metastases in 68 Ga-PSMA-PET has been reported in case reports, but has not been systematically researched in a larger cohort of patients [12,[15][16][17][18]. Therefore, the aim of this study was to investigate the 68 Ga-PSMA-PET imaging properties of liver metastases in PC patients. Study population For this retrospective study, we obtained approval from our institutional ethics review board. We extracted 739 consecutive patients with confirmed prostate cancer from our local database who underwent at least one 68 Ga-PSMA-PET/CT between September 2013 and April 2017. Out of these, we identified eighteen patients with liver metastases, according to the criteria described below. Prostate cancer was histologically proven in all patients. Only patients with no known type of cancer other than PC were included. All available additional information from clinical records was obtained. Patients' characteristics are summarized in Table 1. Gleason score (GS) was available in only eleven, therapy information in only thirteen and PSA level in only twelve patients. Imaging protocol PET/CT imaging was performed 75.8 ± 18.2 min after intravenous injection of 120.5 ± 25.7 MBq of 68 Ga-PSMA. PET scans were acquired using a Gemini Astonish TF 16 PET/CT scanner (Philips Medical Systems) in 3D acquisition mode [21]. Axial, sagittal and coronal slices were reconstructed (144 voxels with 4 mm³, isotropic). Before the PET scan, a low-dose CT was performed for anatomical mapping and attenuation correction (30 mAs, 120 kVp). Each bed position was acquired for 1.5 min with a 50% overlap. In case contrast-enhanced CT (CE-CT) was performed, 80-120 ml of contrast agent (Ultravist® 370, Bayer Schering Pharma, Berlin, Germany) was injected intravenously with a delay of 70 s for the venous phase.
Imaging analysis Two experienced observers analyzed the PET/CT scans using Visage 7.1 (Visage Imaging GmbH, Berlin, Germany). For the diagnosis of metastases, all available imaging studies including all imaging modalities (CT, MRI, 68 Ga-PET) of the patients were taken into consideration. At least two of the following four criteria had to be fulfilled for the diagnosis of liver metastasis: (I) CT imaging with low-to-isoattenuating masses [22]; (II) MRI with typical presentation of liver metastases according to guidelines [23]; (III) high focal uptake of 68 Ga-PSMA in PET distinctly above normal heterogeneity; (IV) new appearance or significant change in size of lesions according to the RECIST 1.1 criteria, compared to previous studies within the same modality with a minimum follow-up interval of six months [24]. Patients with signs of a malignancy other than PC were excluded. Out of 23 patients with suspected liver metastases, five were excluded because they did not fulfill these criteria. Overall, 18 patients with hepatic metastases were identified out of 739 patients. Among these, criterion I was fulfilled by all patients, criterion II by four patients, criterion III by 16 patients and criterion IV by 12 patients. A maximum of ten metastases per patient was analyzed. In case a patient was imaged more than once, only the most recent 68 Ga-PSMA-PET scan was included in this study. As a result, 103 liver metastases were analyzed as part of this study. The sizes of metastases were measured based on the CT scan. Regarding the evaluation of radiodensity, two groups were formed: one in which only unenhanced CTs were available (five patients) and another in which contrast-enhanced CTs were available (13 patients).
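The diagnosis rule above (at least two of criteria I-IV) can be written as a one-line check; the argument names are hypothetical labels for the four criteria, not terminology from the study.

```python
def is_liver_metastasis(ct_hypodense, mri_typical, psma_focal_uptake, recist_change):
    """Study rule: a lesion counts as a liver metastasis if at least
    two of the four imaging criteria (I-IV) are fulfilled."""
    criteria = [ct_hypodense, mri_typical, psma_focal_uptake, recist_change]
    return sum(bool(c) for c in criteria) >= 2
```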
To normalize standardized uptake values (SUV) for body weight, they were calculated by the software using the equation SUV = C_tis/(Q_inj/BW), where C_tis is the lesion activity concentration in MBq per milliliter, Q_inj is the injected activity in MBq, and BW is the body weight in kilograms. For PET data quantification, a two-dimensional region of interest (2D ROI) as well as a three-dimensional region of interest (3D ROI) were defined. 68 Ga-PSMA-HBED-CC uptake was quantified using maximum standardized uptake values (SUV max). All values were recorded in the transaxial, attenuation-corrected PET slice representing the greatest extent of the respective lesion. Regions of interest were defined manually in freehand mode, avoiding the periphery of lesions to minimize partial-volume effects. The SUV max of the healthy liver was measured in a region with minimal irregularities. An SUV max lesion-to-background ratio (LBR) was calculated for all metastases in 3D ROI, using the formula LBR = SUV max of metastasis / SUV max of liver. Any tracer uptake 20% or more above liver uptake was considered PSMA-positive; any tracer uptake below that was considered PSMA-negative. The readers were blinded to the results of other diagnostic procedures and to the clinical history of the patients. Statistical analysis The descriptive statistics are reported as mean, median and/or range when applicable. Nonparametric statistical tests were used as the data contained several outliers. The Mann-Whitney U test was used for the comparison of SUV max values and mean radiodensity values (HU mean) between the healthy liver and liver metastases. SUV max values in 2D and 3D ROI were compared using the Wilcoxon signed-rank test. To determine the relationship between SUV max and size of lesions, patients' age and PSA serum level, Spearman's rank correlation was used. A binomial test was run to evaluate the distribution of liver metastases among the hepatic lobes.
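The quantities defined above can be sketched in a few lines: the body-weight-normalized SUV and the 20%-above-liver rule used to label lesions. The function names and the numerical inputs are illustrative, not from the study software.

```python
def suv(c_tis_mbq_per_ml, q_inj_mbq, bw_kg):
    """Body-weight-normalized standardized uptake value:
    SUV = C_tis / (Q_inj / BW)."""
    return c_tis_mbq_per_ml / (q_inj_mbq / bw_kg)

def classify_lesion(suv_max_lesion, suv_max_liver, threshold=1.2):
    """Label a lesion PSMA-positive if its SUVmax is at least 20%
    above the liver background (LBR >= 1.2), else PSMA-negative."""
    lbr = suv_max_lesion / suv_max_liver
    label = "PSMA-positive" if lbr >= threshold else "PSMA-negative"
    return label, lbr
```

For instance, `classify_lesion(10.3, 5.3)` (the study's mean 3D ROI values for positive lesions and normal liver) yields a lesion-to-background ratio of about 1.9 and the label "PSMA-positive".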
The significance level was set to α = 0.05. Statistical analyses were conducted with SPSS 23 for Mac (IBM Corp, Armonk, NY). Characteristics of the study patients In total, 103 liver metastases were detected in 18 of 739 (2.44%) patients. Patients' characteristics are summarized in Table 1. The mean patient age was 70.1 ± 8.5 years. Lesion-based analysis of liver metastases All detailed results are depicted in Table 2. The mean size of metastases was 3.3 ± 4.7 cm² (range 0.2-29.5 cm²). The mean SUV max of all liver metastases was 8.4 ± 5.2 in 2D and 8.7 ± 5.5 in 3D ROI, compared to a mean SUV max of the normal liver of 4.8 ± 2.3 in 2D and 5.3 ± 2.3 in 3D ROI. The mean SUV max of all liver metastases was significantly higher than the SUV max of the normal liver in both 2D (p ≤ 0.05) and 3D ROI (p ≤ 0.05). In total, 80 PSMA-positive (77.7%) and 23 PSMA-negative (22.3%) metastases were identified. Examples of PSMA-positive and PSMA-negative metastases are illustrated in Figs. 1 and 2. The mean SUV max of PSMA-positive metastases was 9.8 ± 4.9 in 2D (see Fig. 3) and 10.3 ± 5.3 in 3D ROI. The mean SUV max of PSMA-negative metastases was 3.3 ± 1.8 in 2D and 3.4 ± 1.8 in 3D ROI. This was significantly lower than the mean SUV max of the normal liver, in both 2D (p ≤ 0.05) and 3D ROI (p ≤ 0.001). The mean SUV max obtained by 3D ROI was significantly higher than that obtained by 2D ROI in the normal liver (p ≤ 0.05) as well as in PSMA-positive liver metastases (p ≤ 0.001). There was no difference in SUV max of PSMA-negative metastases between 2D and 3D ROI (p > 0.05). The mean SUV max lesion-to-background ratio in PSMA-positive liver metastases was 2.7 ± 1.5, which was significantly higher than that of PSMA-negative metastases (0.5 ± 0.3, p ≤ 0.001, see Fig. 4). HU mean of liver metastases compared to the normal liver The mean CT attenuation value of liver metastases was significantly lower than that of the normal liver, in CE-CT (p ≤ 0.001) and unenhanced CT (p ≤ 0.05).
In liver metastases, HU mean was 61.0 ± 25.1 in CE-CT and 31.1 ± 13.9 in unenhanced CT, whereas the HU mean of the normal liver was 102.2 ± 17.1 in CE-CT and 53.8 ± 8.9 in unenhanced CT. In PSMA-negative metastases, HU mean was 30.4 ± 19.7 in CE-CT and 19.1 ± 5.3 in unenhanced CT. In PSMA-positive metastases, HU mean was 67.0 ± 21.5 in CE-CT and 40.4 ± 11.1 in unenhanced CT. The HU mean of PSMA-positive metastases was also significantly lower than that of the normal liver, in contrast-enhanced (p ≤ 0.001) as well as in unenhanced CT (p ≤ 0.05). The HU mean of PSMA-negative metastases was found to be significantly lower than that of PSMA-positive metastases, in both contrast-enhanced and unenhanced CT (both p ≤ 0.001). Correlation between size and SUV max of liver metastases We calculated a moderate, significant positive relationship between size and SUV max of PSMA-positive metastases (Fig. 5a, ρ Spearman = 0.57, p ≤ 0.05).
Patient-based analysis and correlation between PSA, patients' age, and SUV max Of 18 patients with liver metastases, eight patients (44.4%) had ten or more metastases, three patients (16.7%) had two to ten metastases, and seven patients (38.9%) had a single metastasis. Regarding the tracer uptake, 15 patients (83.3%) had PSMA-positive hepatic metastases only, two patients (11.1%) had PSMA-negative metastases only, and one patient (5.6%) had mixed metastases. The distribution of liver metastases by liver segments is illustrated in Fig. 6. A higher number of patients had liver metastases in the right (100%) than in the left hepatic lobe (61.1%, p > 0.05). A weak, significant negative relationship between patients' age and SUV max of PSMA-positive metastases was calculated (Fig. 5b, ρ Spearman = −0.221, 95% CI [−0.420; −0.002], p ≤ 0.05). Also, there was a moderate, significant positive correlation between the PSA serum level at the time of examination and the SUV max of PSMA-positive metastases (ρ Spearman = 0.60, p ≤ 0.05). Fig. 4 shows the mean SUV max lesion-to-background ratio of PSMA-positive and PSMA-negative liver metastases (2.7 ± 1.5 vs. 0.5 ± 0.3, p ≤ 0.001). Discussion This study evaluated the imaging characteristics of liver metastases in 68 Ga-PSMA-PET. It was demonstrated that the majority of liver metastases highly overexpress PSMA and are therefore directly detectable by 68 Ga-PSMA-PET. For the analysis of PET images, it has to be taken into account that a significant portion of metastases can only be detected indirectly, as these metastases are PSMA-negative. 68 Ga-PSMA-PET/CT has demonstrated potential to improve the initial staging, lymph node staging, and detection of recurrence of PC, even at low PSA levels.
Several studies have indicated that 68 Ga-PSMA-PET is more accurate than other tracers such as 18 F-choline [25]. So far, the imaging properties of liver metastases in 68 Ga-PSMA-PET have not been systematically researched. In our cohort, liver metastases were present in 2.4% of patients who underwent 68 Ga-PSMA-PET. This was lower than the prevalence reported by other studies, likely as a result of the different study designs and the limited sensitivity of PET for the detection of small (< 1 cm) metastases [10,11]. In our study population, the majority of patients demonstrated PSMA-positive hepatic metastases, while only a small number of patients demonstrated PSMA-negative or mixed metastases. An explanation for the difference in 68 Ga-PSMA-HBED-CC uptake in liver metastases could be the diversity of phenotypes in metastases, predominantly the neuroendocrine trans-differentiation. In PC, liver metastases are frequently associated with neuroendocrine characteristics as well as with an advanced state of systemic disease [10]. It is thought that the degree of neuroendocrine trans-differentiation increases with disease progression and in response to ADT [26]. A pronounced elevation of neuroendocrine serum markers such as neuron-specific enolase and chromogranin A has been demonstrated in patients with a long duration of ADT [27]. Autopsy studies have confirmed the phenotypic heterogeneity of end-stage metastatic prostate cancer [28,29]. A large part of neuroendocrine prostate cancer cells does not express generic PC biomarkers including P501S, PSMA, and PSA [30]. This is consistent with the histopathologic finding in one of our study patients with PSMA-negative liver metastases, in whom liver and prostate biopsies were performed. Histopathology of the metastasis revealed an infiltration of the liver with neuroendocrine carcinoma cells, which were positive for the neuroendocrine biomarker CD56, but negative for PSA, PSMA and the androgen receptor.
In the same patient, histopathology of the prostate tissue exposed an acinar adenocarcinoma with 5% of the cells presenting neuroendocrine markers, which can be interpreted as a partial trans-differentiation. Fig. 6 illustrates the patient-based localization of liver metastases according to liver segments; percentages indicate the proportion of study patients with liver metastases within the respective segment. Liver segment VI was the most common localization (80%), whereas liver segment I was the least common site (44%). The findings of this study are also consistent with a case report by Usmani et al. of a PC patient with an unsuspicious 68 Ga-PSMA-PET, in whom a 68 Ga-DOTANOC-PET performed ten days later revealed multiple somatostatin-avid hepatic and lymph node metastases, and lymph node cytology confirmed neuroendocrine differentiation [31]. Overall, neuroendocrine trans-differentiation could explain the loss of PSMA expression in liver metastases in progressive disease. Vice versa, the detection of PSMA underexpression in liver metastases could represent trans-differentiation; clinicians need to be familiar with this concept as it may result in treatment adaptation. Interestingly, the radiodensity of PSMA-negative liver metastases was significantly lower than that of the PSMA-positive metastases, in both unenhanced and contrast-enhanced CTs. This finding could further support the differentiation of liver metastases in PC but needs to be verified in a larger cohort. Additionally, a significant positive correlation between the serum PSA level at the time of examination and the SUV max of PSMA-positive liver metastases was observed. This could be explained by the fact that both parameters tend to increase with the progression of the disease. The finding is consistent with the studies of Koerber et al.
and Sachpekidis et al., who reported that patients with higher PSA values demonstrated a significantly higher tracer uptake in intraprostatic tumor lesions on PSMA-PET/CT [32,33]. Between the size and SUV max of PSMA-positive liver metastases, a weak but significant association was found. This might be the result of a proliferative advantage of highly PSMA-expressing cells, as has been demonstrated in vitro [34]. We further observed a weak but significant negative association between age and SUV max of PSMA-positive liver metastases. A hypothesis explaining this finding could be that patients who develop liver metastases at a younger age have a more aggressive subtype of PC with higher PSMA expression. This, however, needs to be investigated in a larger cohort. A limitation of this retrospective study is that the diagnoses of liver metastases were not confirmed histopathologically, since no biopsies of most of the metastases were performed. A possible limitation of the lesion-based analysis regarding the calculation of mean SUV max values could be an overrepresentation of the subgroup of patients with multiple metastases compared to the subgroup with few metastases. Conclusions The majority of liver metastases highly overexpress PSMA in 68 Ga-PSMA-PET and are therefore directly detectable. For the analysis of PET images, it has to be taken into account that a significant portion of metastases can only be detected indirectly, as these metastases are PSMA-negative. Future studies are warranted to test these findings in a larger collective of patients and to correlate changes in histopathology with PSMA expression.
Experimentally Feasible Security Check for n-qubit Quantum Secret Sharing In this article we present a general security strategy for quantum secret sharing (QSS) protocols based on the HBB scheme presented by Hillery, Bužek and Berthiaume [Phys. Rev. A 59, 1829 (1999)]. We focus on a generalization of the HBB protocol to n communication parties, thus including n-partite GHZ states. We show that the multipartite version of the HBB scheme is insecure in certain settings and impractical when going to large n. To provide security for such QSS schemes in general, we use the framework presented by some of the authors [M. Huber, F. Mintert, A. Gabriel, B. C. Hiesmayr, Phys. Rev. Lett. 104, 210501 (2010)] to detect certain genuine n-partite entanglement between the communication parties. In particular, we present a simple inequality which tests the security. I. INTRODUCTION In classical cryptography, secret sharing was introduced by Shamir [1] and Blakley [2] in 1979 and is useful in many applications. The main idea is to divide a secret into several shares and distribute these shares among several parties such that the secret can be reconstructed when a certain number of parties (or all) come together and combine their shares. Additionally, each party alone is not able to gain any information about the secret. The idea of secret sharing was brought to quantum cryptography in 1999 when Hillery, Bužek and Berthiaume presented their scheme [3] based on GHZ states. Since then, quantum secret sharing (QSS) has been another field of great interest besides quantum key distribution (QKD). In the same year, Karlsson, Koashi and Imoto also presented a similar QSS protocol based on Bell states [4], and several other schemes followed [5][6][7][8][9][10][11][12][13][14][15][16][17]. Most of these protocols make heavy use of entangled states to communicate between several parties.
In general, the security of such protocols is rather complex to analyze, since more parties are involved than in QKD and some of the legitimate participants have to be considered dishonest. This model of adversaries from the inside is in fact much stronger, because such an adversary in general has more advantages than an eavesdropper from the outside. The success of the protocol depends strongly on the fact that all parties share a certain genuine multipartite entangled state after transmission. We show in this paper that the security of a protocol can be obtained by checking for this certain genuine multipartite entanglement. For that we use the framework presented in Refs. [18,19], which provides Bell-like inequalities that are experimentally testable. In the following section we shortly review the HBB scheme, including the argument presented in Ref. [20] regarding the security against a cheating Charlie. Further, we discuss the generalization of the HBB scheme to n qubits and present a successful eavesdropping strategy based on the argument in Ref. [20]. Based on the inequalities we provide a new security argument for n-qubit secret sharing protocols. II. THE HBB SCHEME In their article [3], Hillery, Bužek and Berthiaume presented a quantum secret sharing scheme based on the distribution of GHZ states of the form |Ψ⟩ = (1/√2)(|000⟩ + |111⟩) between three parties, Alice, Bob and Charlie. Each party measures its qubit at random in one of two bases. Based on their results, Bob and Charlie together are able to determine Alice's result but individually have no information about it. In detail, Alice generates copies of the state |Ψ⟩ in her laboratory and sends qubit B to Bob and qubit C to Charlie. Then, each party randomly chooses to measure its qubit either in the X or in the Y basis.
The eigenstates of these bases are

|±x⟩ = (|0⟩ ± |1⟩)/√2 and |±y⟩ = (|0⟩ ± i|1⟩)/√2.

Taking the X basis, the GHZ state |Ψ⟩ can be written as

|Ψ⟩ = ½[(|+x⟩_A|+x⟩_B + |−x⟩_A|−x⟩_B)|+x⟩_C + (|+x⟩_A|−x⟩_B + |−x⟩_A|+x⟩_B)|−x⟩_C].

After each party has performed its measurement, they all announce their bases for the whole sequence sent by Alice but do not reveal the specific results. Additionally, all three parties sacrifice some of the remaining measurement results to check for eavesdroppers and dishonest parties by comparing them publicly. Based on the information about the basis choice for the remaining qubits, Charlie always knows whether Alice and Bob have the same result or not, but he has no information about their exact results. Further, Bob knows that he has either the same or the opposite result as Alice and thus needs the information about Charlie's measurement result to fully determine it. Thus, Bob and Charlie have to collaborate to obtain Alice's result. Due to the random choice of the measurement bases, Charlie will measure in the wrong basis half of the time. These cases can be identified when the three parties reveal their bases, and the respective qubits have to be discarded. The security argument as described above was presented in Ref. [3], but later that year Karlsson et al. commented on the HBB scheme that the order in which the measurement bases and the results for the test bits are revealed is crucial [4]. They showed that the HBB scheme becomes insecure if the measurement bases are revealed before the results for the test bits. They suggested the following sequence: first, Bob and Charlie publicly disclose their measurement results for the test bits and afterwards, in the reversed order, they announce the corresponding measurement bases. The reversed order is important so that none of them can gain too much information from the actions of the previous parties. We want to stress that this is nevertheless not a very efficient way to secure the protocol, since the order of the messages is not implicitly preserved by the network.
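The correlation structure exploited above can be checked numerically. The sketch below (plain NumPy; all names are our own, not taken from the paper) computes the expectation value ⟨σ_a ⊗ σ_b ⊗ σ_c⟩ on the GHZ state for every X/Y basis combination. A value of +1 (−1) means the product of the three ±1 outcomes is always +1 (−1), so Bob's and Charlie's results jointly fix Alice's; a value of 0 marks the discarded combinations.

```python
import itertools
from functools import reduce
import numpy as np

# Pauli matrices for the two measurement bases
PAULI = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
}

# GHZ state (|000> + |111>)/sqrt(2) in the computational basis
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def correlation(state, bases):
    """Expectation value <sigma_a x sigma_b x sigma_c> on a 3-qubit state."""
    op = reduce(np.kron, (PAULI[b] for b in bases))
    return float(np.real(state.conj() @ op @ state))

for combo in itertools.product("xy", repeat=3):
    print("".join(combo), round(correlation(ghz, combo), 6))
# xxx -> +1; xyy, yxy, yyx -> -1; every combination with an
# odd number of y measurements -> 0 (these rounds are discarded)
```

The four useful combinations (xxx, xyy, yxy, yyx) are exactly the ones the first security inequality below is built from.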
Alice has to tell each party when to send its result and has to wait for the response. In the case of three parties, as in the HBB scheme, this makes no big difference, but it can become a large overhead when going to n parties.

III. A NEW SECURITY ARGUMENT

In three articles [18,19,21] the authors presented a series of inequalities to test for genuine multipartite entanglement and for k-separability of any multipartite qudit system. These Bell-like inequalities are easily implementable experimentally, as only local observables are needed. We present here how two inequalities designed for the HBB protocol described above can be used to check for adversaries. The idea is that the attack strategy based on auxiliary qubits, as presented in Ref. [4], does not work if the parties can verify that they share a genuinely multipartite entangled n-qubit state. The intervention of an untrusted party, e.g. Charlie, is based on the auxiliary qubits he introduces into the protocol to gain additional information about Bob's results. Stated differently, the attack changes the overall state, and this can be detected by performing certain additional measurement settings and evaluating the inequalities given in eq. (6) below. Before we present the inequalities we need to define biseparability: if the density operator of a 3-qubit state can be decomposed into the form

ρ = Σ_j (p_j ρ_j^AB ⊗ ρ_j^C + q_j ρ_j^AC ⊗ ρ_j^B + r_j ρ_j^BC ⊗ ρ_j^A)

with p_j, q_j, r_j ≥ 0 and Σ_j (p_j + q_j + r_j) = 1, it is called biseparable. Here the two-body states ρ_j^AB, ρ_j^BC and ρ_j^AC can be entangled states. Even though there may be no bipartite splitting with respect to which the state ρ is separable, it is considered biseparable, since it can be prepared through a statistical mixture of bipartite entangled states. The generalization to n-qubit states is straightforward. Based on biseparability we can define the inequalities for the 3-qubit case of the HBB protocol. Using σ_1 := I and the abbreviation abc := ⟨σ_a ⊗ σ_b ⊗ σ_c⟩ for products of Pauli operators, we can rewrite and linearize the inequalities derived in Refs.
[18,19] in terms of local observables. These inequalities are satisfied for all biseparable states. They are convex and therefore obviously also valid for mixed states. As is easy to see, the first inequality uses combinations of local observables which are needed in the original HBB scheme to form the secret key (cf. Table I), whereas the second inequality uses combinations which are discarded in the original protocol (i.e. yyy, yxx, xyx and xxy). Unfortunately, the latter one can only be applied if the initial state is the "imaginary" GHZ state |Φ⟩. Thus, we have to adjust the original HBB protocol in the following way: Alice prepares at random one of two states, either the standard GHZ state |Ψ⟩ or the state |Φ⟩. Then, she distributes the qubits between Bob and Charlie as in the original protocol. Due to the use of the inequalities in eq. (6), the Z basis has to be introduced as an additional measurement basis. After Bob and Charlie have performed their measurements, they announce their bases and Alice tells them to reveal some of their results to test the inequalities. Here, Alice tests with the first inequality of eq. (6) whenever she prepared the state |Ψ⟩ and with the second inequality whenever she prepared |Φ⟩. We want to stress that Alice does not announce which initial state she prepared until after the check for eavesdroppers. Therefore, the sequence in which Bob and Charlie announce their bases and results is irrelevant, since a cheating Charlie cannot be sure whether Alice initially prepared |Ψ⟩ or |Φ⟩. Hence, Charlie introduces a certain error and will be detected by the legitimate parties, as explained in detail in the next section. The application of the inequalities makes it possible to dispense with the check for the correct order of the messages and thus makes the protocol less complex.
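The complementary roles of the two states can be made concrete numerically. In the sketch below we assume |Φ⟩ = (|000⟩ + i|111⟩)/√2 for the "imaginary" GHZ state (the relative phase is our assumption; a phase of −i would simply flip the signs). For |Φ⟩ the otherwise discarded combinations yyy, yxx, xyx, xxy become perfectly (anti)correlated, while the key combinations of |Ψ⟩ lose their correlation, and vice versa.

```python
from functools import reduce
import numpy as np

PAULI = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
}

def corr(state, bases):
    """Expectation value of a product of Pauli operators on a 3-qubit state."""
    op = reduce(np.kron, (PAULI[b] for b in bases))
    return float(np.real(state.conj() @ op @ state))

# |Psi> = (|000> + |111>)/sqrt(2);  |Phi> = (|000> + i|111>)/sqrt(2) (assumed phase)
psi = np.zeros(8, dtype=complex); psi[0] = psi[7] = 1 / np.sqrt(2)
phi = np.zeros(8, dtype=complex); phi[0] = 1 / np.sqrt(2); phi[7] = 1j / np.sqrt(2)

for combo in ["xxx", "xyy", "yxy", "yyx", "yyy", "yxx", "xyx", "xxy"]:
    print(combo, round(corr(psi, combo), 6), round(corr(phi, combo), 6))
# |Psi>: xxx = +1, xyy = yxy = yyx = -1, the rest 0
# |Phi>: yyy = -1, yxx = xyx = xxy = +1, the rest 0
```

This is why a cheating party who does not know which state was prepared cannot fake both sets of correlations at once.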
The introduction of the second GHZ state |Φ⟩ does not reduce the efficiency, since combinations of observables which are discarded in the original protocol can be used with the state |Φ⟩ and vice versa. The only drawback is the additional measurement basis Z, which is not necessary to establish the secret but is needed to compute the inequalities. Fortunately, we can overcome this problem as well by choosing Z only with a certain probability q, which can go to 0 in the asymptotic limit.

IV. SECURITY PROOF FOR 3 QUBITS

In particular, the first inequality in eq. (6) is violated by the GHZ state |Ψ⟩ in the computational basis with the value 1/2, which is the optimum for any GHZ-state representation. Note that there are several representations of the GHZ state which would give no violation. The security check, optimized for the basis system the three parties agreed on, would therefore use some of the measurement results to evaluate the inequalities (similar to the check for adversaries suggested in [3]). Additionally, the three parties also have to perform measurements in the Z basis to evaluate the inequalities, which slightly changes the protocol, as pointed out above. If the inequalities are violated, the parties can be sure that no adversary is present, which we prove in the following. In Ref. [20] it has been shown (using a more general approach than in [4]) that the original HBB scheme [3] is insecure against a dishonest Charlie. The main idea is again that Charlie intercepts the qubit traveling to Bob and entangles it with an ancillary qubit. Later on, he uses his qubit together with the ancillary qubit to infer Alice's measurement result without Bob's assistance. In detail, Charlie uses an ancillary qubit in the state |0⟩_E and entangles it with the intercepted qubit B using the Hadamard operation H = (|0⟩⟨0| + |0⟩⟨1| + |1⟩⟨0| − |1⟩⟨1|)/√2 on qubit B and a CNOT operation CNOT = |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ σ_x on qubits B and E.
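Before writing out the resulting state, the effect of this two-gate attack can be sanity-checked numerically (a sketch with our own variable names, not code from the paper): applying H to qubit B and then the CNOT from B to the ancilla E wipes out the tripartite correlation ⟨X_A X_B X_C⟩ that the honest GHZ state carries, which is exactly the kind of change the entanglement-based check detects.

```python
from functools import reduce
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
P1 = np.diag([0.0, 1.0]).astype(complex)   # |1><1|

def kron(*ops):
    return reduce(np.kron, ops)

# Qubit order: A, B, C, E.  Honest state: GHZ on ABC, ancilla |0>_E.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
state = np.kron(ghz, np.array([1, 0], dtype=complex))

def xxx_corr(s):
    """<X_A X_B X_C> on the 4-qubit state (identity on the ancilla E)."""
    return float(np.real(s.conj() @ kron(X, X, X, I2) @ s))

print(xxx_corr(state))        # honest state: +1

# Charlie's attack: Hadamard on B, then CNOT with control B and target E
cnot_be = kron(I2, P0, I2, I2) + kron(I2, P1, I2, X)
state_after = cnot_be @ (kron(I2, H, I2, I2) @ state)

print(xxx_corr(state_after))  # tripartite XXX correlation is gone: 0
```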
This brings the initial system |Ψ⟩_ABC ⊗ |0⟩_E into a corresponding four-qubit entangled state. Charlie sends qubit B to Bob and waits until Alice and Bob announce their measurement bases. Depending on the measurement results of Alice and Bob, the qubits C and E in Charlie's possession collapse into some predefined state. In case both Alice and Bob measure in the X basis, Charlie obtains one of a set of predefined two-qubit states. Charlie uses this fact, together with the information about Bob's measurement basis and result, to determine the correct value he has to announce to stay undetected. Further, Charlie is also able to compute Alice's result without any help from Bob [20], which makes the whole protocol insecure. In our suggested modified version of the HBB scheme, which performs the check for adversaries based on the inequalities and employs the two GHZ states at random, Charlie is always detected with a certain probability. As pointed out, Charlie's attack relies mainly on the information about Bob's bases and results, which he can also obtain in our modified version. Nevertheless, Charlie is unable to decide which initial state Alice prepared, so he can only guess the correct result needed to violate the inequalities. In detail, after Charlie's attack the four-qubit state is a mixture of two states. Ignoring Charlie's additional qubit, the first inequality evaluates to I_1: −1/2 − p ≤ 0 and the second inequality evaluates to I_2: 1/2 − p ≤ 0, with p being the probability that Alice chooses the state |Ψ⟩. These values differ from the expected values without cheating parties; thus Charlie will be revealed. On the other hand, Charlie can try to act with local unitaries on qubit C or with unitaries on qubits CE (here we used the convenient parametrization of the unitary group U(4) in Ref. [22]) such that the value of I_1 becomes more positive, but the trade-off is that I_2 becomes more negative, which again can be detected. In summary, the suggested attack on the HBB scheme presented in Ref.
[20], as well as any generalization of it, is detected by the test of the two inequalities.

V. SECURITY PROOF FOR n QUBITS

The inequalities provided by the framework presented in Refs. [18,19] can be extended to any number of qubits. To give an example, for a 4-qubit version of the protocol described in the previous section we get a similar pair of inequalities. Also in this case, the four communicating parties sacrifice some of their measurement results to test the inequalities. If the inequalities are satisfied, i.e. not violated, the parties have to assume that an adversary is present. Extending the attack strategy from [13] to four parties, the state Charlie uses in his attack is a 6-qubit entangled state. Due to his intervention, the genuine 4-qubit state is destroyed and thus no genuine 4-qubit entanglement can be detected using the inequalities. Hence, the legitimate communication parties discover Charlie's intervention and abort the protocol. The inequalities for n qubits can be derived straightforwardly from the 3-qubit case (eq. 6) and the 4-qubit case (eq. 11). Thus, the check for adversaries can be performed in the same way as described above. This gives the advantage that the communicating parties no longer have to rely on the order of the messages. The adversary, Charlie, does not know which of the measurement results count for the test of the inequalities and which count for the secret. Hence, he sends measurement results which do not violate the inequalities, and therefore he is detected by the other parties also in the most general case.

VI. CONCLUSION

In this article we presented a security argument for general HBB-type quantum secret sharing schemes between n parties. The check for adversaries in such protocols in general becomes more and more inefficient as the number of parties grows. We presented a different security strategy based on the verification of genuine multipartite entanglement itself, which is at the heart of such protocols, and which in addition remains efficient for a large number of parties.
In a slightly different version of the HBB protocol we described a way to integrate this security check efficiently, i.e. by simple Bell-like inequalities (Refs. [18,19]) adapted to the protocol. They use data which is usually discarded in the protocol, and a measurement in a third direction has to be introduced. A test of these inequalities is a much stronger statement than the common test for eavesdroppers presented e.g. in Refs. [3,4], as they indicate the presence of an adversary: any adversary has to change the n-partite entangled state in order to obtain any information on the secret. Certainly, our presented general scheme may also be applied to secret sharing protocols involving multi-qudit systems or graph states.
Assessing the Impact of a Risk-Based Intervention on Piped Water Quality in Rural Communities: The Case of Mid-Western Nepal

Ensuring universal access to safe drinking water is a global challenge, especially in rural areas. This research aimed to assess the effectiveness of a risk-based strategy to improve drinking water safety for five gravity-fed piped schemes in rural communities of the Mid-Western Region of Nepal. The strategy was based on establishing community-led monitoring of the microbial water quality and the sanitary status of the schemes. The interventions examined included field-robust laboratories, centralized data management, targeted infrastructure improvements, household hygiene and filter promotion, and community training. The results indicate a statistically significant improvement in the microbial water quality eight months after intervention implementation, with the share of taps and household stored water containers meeting the international guidelines increasing from 7% to 50% and from 17% to 53%, respectively. At the study endline, all taps had a concentration of <10 CFU Escherichia coli/100 mL. These water quality improvements were driven by scheme-level chlorination, improved hygiene behavior, and the universal uptake of household water treatment. Sanitary inspection tools did not predict microbial water quality and, alone, are not sufficient for decision making. Implementation of this risk-based water safety strategy in remote rural communities can support efforts towards achieving universal water safety.

Introduction

In recent years, water sector professionals have made considerable progress improving access to drinking water worldwide. The Millennium Development Goal (MDG) for drinking water was met in 2015, with 2.6 billion people gaining access to an improved drinking water source since 1990 [1].
However, the additional sanitary protection offered by an improved drinking water source does not ensure that the water is safe to drink, because it is not guaranteed to be free from fecal contamination [2,3]. Half a million people worldwide died in 2012 due to consumption of unsafe water [4]. The MDGs thus underscored an urgent need to prioritize interventions designed to limit the hazards to human health by meeting the international guidelines for drinking water safety [5]. To address this issue, the water sector adopted Sustainable Development Goal (SDG) 6, which now includes measures of availability, accessibility, and quality as core standards in its definition of safely managed drinking water [6]. With these considerations, over a quarter of the global population currently lacks access to safely managed drinking water [7]. Water sector practitioners, therefore, face the challenging objective to deliver "universal and equitable access to safe and affordable drinking water for all by 2030" (SDG 6.1). In Nepal, only a quarter of the rural population was estimated to have access to safely managed drinking water in 2015 [7], with access rates being lowest in the most remote areas where treatment is virtually non-existent and microbial contamination of water supplies is well documented. For example, Shrestha et al. (2017) reported inadequate water, sanitation, and hygiene (WASH) conditions in rural Nepal [8]. In the hilly areas of Mid-Western Nepal, a previous study reported a high health risk associated with the consumption of water from public taps, with 69% of samples collected testing positive for Escherichia coli (E. coli). One in ten samples contained more than 100 colony forming units (CFU) of E. coli/100 mL [9], considered at very high risk per World Health Organization (WHO) classifications [5]. Another study in this region reported high daily variability and peak concentrations of fecal contamination [10]. 
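The risk classes referred to above can be encoded as a simple lookup. The band edges below follow the WHO-style classification cited in the text (<1, 1-10, 11-100, and >100 CFU E. coli/100 mL); the function and label names are our own.

```python
def ecoli_risk_category(cfu_per_100ml: float) -> str:
    """Classify an E. coli count (CFU/100 mL) into WHO-style risk bands."""
    if cfu_per_100ml < 1:
        return "conformity (low risk)"
    if cfu_per_100ml <= 10:
        return "intermediate risk"
    if cfu_per_100ml <= 100:
        return "high risk"
    return "very high risk"

print(ecoli_risk_category(0))    # conformity (low risk)
print(ecoli_risk_category(150))  # very high risk
```

Under this classification, the 69% of public-tap samples testing positive for E. coli fall outside the lowest band, and the one-in-ten samples above 100 CFU/100 mL land in the "very high risk" band.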
These studies indicate a need for a comprehensive risk management strategy in place of end-of-pipe testing. Shrestha et al. (2017) recommended regular monitoring of water quality to generate missing information regarding seasonal variations [8]. The authors additionally suggested several mitigation actions, such as source protection, regular inspections, and targeted upgrades, with an emphasis on community engagement and water treatment measures. Such activities align with the WHO's Water Safety Plan (WSP) approach, which has been (and continues to be) widely promoted for improving drinking water safety from the source to the consumer. This approach is based on the identification of hazards and the mitigation of risks to achieve multibarrier protection for public health safety [5]. WSPs can be adapted to the needs of any drinking water project, including small communities' water supplies [11,12]. One tool used in small community WSPs is the sanitary inspection form, used to systematically assess vulnerabilities throughout the water scheme. These assessment forms proactively identify hazards at critical locations, thereby informing the management team about potential sources of contamination to the water system and the mitigation efforts required. While the WSP approach supports operational management processes for drinking water supplies, String and Lantagne point to a need for "evidence-based, documented impacts to both water supply and health after WSP implementation" [13]. Evidence regarding the implementation and impacts of WSPs on water quality is especially lacking in remote rural settings, where monitoring activities are hindered by low access to laboratory resources and technical expertise.
Additionally, the suitability of sanitary inspection tools for assessing water safety is questioned, with previous studies showing contradictory conclusions regarding the predictability of fecal pollution levels based on sanitary risk scores alone [14][15][16]. It is therefore argued that effective risk management for water supplies should combine sanitary protection indicators with regular water quality testing [3,16]. In addition, the WHO has developed a revised set of forms that better suit the reality of small water supplies in rural contexts [17]. The objective of this research was to describe and evaluate a risk-based water safety strategy within five rural communities served by gravity-fed piped water supply schemes in the Dullu municipality in Mid-Western Nepal. Using a controlled before-and-after study design, we assessed the impact of a suite of interventions on the microbial water quality at different points throughout the system over an eight-month period. The interventions included the reinforcement of a pre-existing household water treatment and safe storage (HWTS) promotion campaign and targeted infrastructural and management improvements to the water schemes. Regular water quality monitoring was established using two solar-powered field laboratories equipped for microbial testing, and adapted sanitary inspections tools were used to systematically assess risks to the water systems. Intensive community participation and training were core features throughout the project's implementation. In addition to the main objective of evaluating intervention impacts on the microbial water quality, other research questions of interest were as follows: (1) How did community members engage with the risk management process for their water system? (2) To what extent were the water safety interventions taken up by the communities by the study endline? (3) Did sanitary inspection scores align with water quality testing results? 
The project was implemented by Helvetas Swiss Intercooperation Nepal's (hereafter referred to as Helvetas-Nepal) Integrated Water Resources Management (IWRM) program, in collaboration with the Swiss Federal Institute of Aquatic Science and Technology (hereafter referred to as Eawag) and REACH: Improving Water Security for the Poor (a program led by Oxford University and funded by the United Kingdom (UK) Government). The study commenced with baseline data collection from 120 households across five intervention communities and three control communities. To assess outcomes, an endline assessment was performed eight months after the baseline to capture changes in the microbial water quality and in households' perceptions and behavior regarding their drinking water. The water safety strategy showed promising results towards achieving SDG 6.1 in rural communities dependent on gravity-fed piped schemes. Within intervention communities, we observed water quality improvements at taps and within households, improved hygiene behavior, and increased community capacity to proactively identify and mitigate the risks identified through regular monitoring. However, the microbial water quality did not meet the international guidelines by the study's endline for 100% of the water points assessed, indicating that further efforts are needed to ensure universal access to safe drinking water in this setting. This study also revealed the limitations of sanitary inspection scores and concluded that such tools should be combined with regular water quality testing for a complete risk management approach.

Study Site

Nepal is a landlocked country in Southern Asia that is situated in the Himalayas and shares borders with India and China. Three main regions compose the country's landscape and climate: a flat tropical area called the Terai, an intermediate hilly region, and the Himalayan mountains [18].
In 2017, the population was estimated to be 29 million people [19], 81% of whom were living in rural areas in 2015 [20]. Nepal ranked at the poorest end of the United Nations Development Programme Human Development Index in 2016, in the 144th position out of 188 countries [21]. Water scarcity is a common issue in the country [22,23] that is exacerbated by ongoing climate change impacts [24]. Developmental efforts in the past years have mainly focused on meeting the water supply demand and increasing freshwater accessibility. In addition, recent national development initiatives have focused on eliminating open defecation and achieving universal improved drinking water access, especially in rural areas [25]. The Nepal Water Supply, Sanitation, and Hygiene Sector Development Plan for 2016-2030 [26] highlights poor drinking water quality and the lack of an effective monitoring and surveillance system as a barrier to the implementation of the National Drinking Water Quality Standards [27]. The study was conducted in the Dullu municipality in the Dailekh district of the Mid-Western Development Region (Figure 1). This intermediate hilly region was selected as the study location because it is representative of the rural, hilly settings of Nepal, with the additional advantages of close proximity to sufficient projects within the Helvetas-Nepal IWRM service area and relatively convenient road access. In total, eight communities with gravity-fed piped drinking water schemes were selected for this study: five schemes where risk-based water safety intervention took place (hereafter called intervention schemes) and three control schemes where no risk-based water safety interventions were implemented. Before the study, all eight communities had received a new piped water system with private or public taps constructed by Helvetas-Nepal between 2012 and 2016. 
Alongside system installation in each community, the program additionally established a water and sanitation users' committee, promoted improved household hygiene practices, distributed ceramic filters for household water treatment, and trained a female community health volunteer and a village maintenance worker responsible for repairing the water supply system. These pre-baseline activities, which defined the starting scenario of all the study communities, are summarized in Table 1.

Description of Drinking Water Schemes

The selected water schemes were constructed between 2012 and 2016; all were completed at least one year prior to this research. All are simple gravity-fed piped networks with spring sources, except one that includes a solar-powered lifting pump to deliver water from a downhill reservoir to the uphill distribution tanks. All the schemes provide intermittent water services with variable opening times and service durations throughout the year, as is common in the hilly region. They are all similar in their layout, with a spring source that is connected to a reservoir tank by a distribution line, with water then flowing to the taps (Figure 2). All the selected schemes deliver water to public taps except one that has private taps only.

Figure 2. Sketch of a typical gravity-fed piped water scheme (or sub-scheme). Each scheme is composed of 1-4 sub-schemes. Sub-schemes comprise one water project for the same community but make use of independent water sources. Within a sub-scheme, one water source can feed several reservoir tanks that distribute water to different areas of the village. The intermediate structures can be distribution and collection chambers, purge valve chambers, break pressure tanks or interruption chambers, and air valve chambers.

Study Design and Sample Strategy

Two distinct research strategies were used: one for the baseline and endline surveys and the other for regular monitoring. The baseline and endline surveys aimed to assess community members' perceptions and behaviors regarding their drinking water. The sanitary state of the water schemes and the microbial water quality were also assessed at the baseline and the endline to measure changes before and after the water safety intervention. By contrast, regular monthly monitoring activities served as less intensive "spot checks" to capture temporal variations in water quality and sanitary indicators. In this way, regular monitoring data informed the ongoing implementation of interventions within each scheme by gauging their effectiveness and identifying any unaddressed system vulnerabilities.

Baseline and Endline Surveys

The baseline data collection took place in June 2017 and the endline data collection in January 2018. The field teams were composed of staff members of Eawag, Helvetas-Nepal, and the local non-governmental organization (NGO) Social Services Center. All the questionnaires were translated and conducted in Nepali. Only households using the water scheme were eligible for enrollment. Eligible households were selected randomly from the water project beneficiaries list and enrolled following informed consent about the project's purpose and the anonymity of the questionnaire. At the study baseline, if the household declined to participate in the study or if no adult was available at the time of the visit, another household was selected randomly as a replacement. A total of 15 households were enrolled at each water scheme for a total of 120 surveys. During the endline period, the same households from the baseline were interviewed. The survey questions probed the households' drinking water supply characteristics, sanitation and hygiene practices, and socio-economic statuses. A drinking water sample was taken at each household by collecting 100 mL of water in the same manner as if getting a cup of water to drink. At each of the 8 study schemes, water samples were also taken at the inlet of all reservoir tanks and from three randomly selected taps during the baseline and endline visits.

Regular Monitoring

At each of the five intervention schemes, one source, one reservoir tank, one tap, and one household were regularly monitored every three to six weeks between August and December 2017 for both drinking water quality and sanitary status (Table 2). Sanitary inspection forms for sources, reservoir tanks, taps, and stored water were developed based on the updated forms provided by the WHO [17], with modifications made to suit the field context.
Each form was composed of 10 yes/no questions, from which a risk score out of 10 points was calculated, with a higher risk score indicating a greater health risk posed at the specific point (see Table A1 for the content of each sanitary inspection form). A trained person from each WSP task force was responsible for selecting monitoring points, taking the water samples, and performing the sanitary inspections. Monitoring points were rotated each month and were all water-connected: the household used water from the corresponding tap that was connected to the reservoir tank and the source that was being monitored. Care was taken to ensure that households were not aware of monitoring visits in advance. Regular monitoring is planned to continue after the study's end as an integral part of the water safety framework. Further details on the regular monitoring strategy are provided in Supplementary Materials Section S1.

Water Safety Plan, Interventions, and Laboratories

A WSP approach was adopted within the intervention communities. A WSP task force was formed as a subgroup of the pre-existing water users' committee. The task force members' main responsibilities were to evaluate and identify risks to their water scheme and to support efforts towards improved water security management practices. Based on the full sanitary inspection performed at the baseline, the WSP task force and Helvetas-Nepal's technical team collaboratively decided on one or more scheme upgrades to improve the water quality and devised a participatory approach for implementation. The five intervention schemes received the system upgrade measures shown in Table 1 during November and December 2017. Additional details on the water scheme upgrades are provided in Supplementary Materials Section S2. The upgrading process was based on a participatory approach that emphasized community members' involvement in decision making to increase their sense of ownership over the project [28].
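The scoring rule for the sanitary inspection forms described above is a simple tally of risk factors; a minimal sketch (the question keys and answers are illustrative placeholders, not the actual form items from Table A1):

```python
def sanitary_risk_score(answers):
    """Risk score for a 10-question yes/no sanitary inspection form.

    `answers` maps each question to True if the risk factor is present
    (a 'yes' answer). The score out of 10 is the number of 'yes' answers;
    a higher score indicates a greater health risk at the inspected point.
    """
    if len(answers) != 10:
        raise ValueError("each form has exactly 10 yes/no questions")
    return sum(bool(v) for v in answers.values())

# Illustrative tap inspection with three risk factors present (hypothetical)
tap_form = {f"question_{i}": present for i, present in
            enumerate([True, False, False, True, False,
                       False, True, False, False, False], start=1)}
print(sanitary_risk_score(tap_form), "/ 10")  # 3 / 10
```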
The communities also contributed to the system upgrade efforts by providing unskilled labor and local materials for construction. In addition, two water quality laboratories were installed at a village health post and a secondary school close to the five intervention schemes. These laboratories consisted of a simple field incubator connected to a solar photovoltaic setup and all the materials required to perform the microbial water quality analysis (E. coli and total coliforms). The laboratory technicians received targeted group training followed by supervised field work. All the data gathered during the regular monitoring were collected by the trained WSP task force members and lab technicians under the supervision of a local NGO staff member who had also previously received intensive training. Mobile Data Collection All the data, including the baseline and endline household surveys and regular sanitary inspections, were collected using tablets (Samsung Galaxy Tab A, Seoul, Korea) equipped with the Akvo Flow application (Akvo Foundation, Amsterdam, The Netherlands). The data were uploaded to the cloud and made available to project team members to be analyzed remotely. Water Sampling and Microbial Water Quality Testing Protocol Water samples collected at the reservoir tanks were taken directly from the inlet, which is the closest point to the water source that was available to sample; therefore, the sample collected is representative of the water entering but not the water being stored at the reservoir tank. At the taps, water was run for 30 s before sampling to wash out any deposited residue and ensure a representative sample from the piped system. The household water samples were collected at the point of consumption (i.e., 100 mL of water was collected in the same way a glass of water for drinking would be prepared). All the water samples from a single scheme were collected on the same day.
The water samples were collected in sterile 100 mL Whirl-Pak sampling bags (Nasco, Fort Atkinson, USA). For chlorinated schemes, Whirl-Pak Thio-bags (Nasco, Fort Atkinson, USA) containing sodium thiosulfate were used to inactivate any residual chlorine. Because the electricity required to support a cold chain was not available, the samples were transported to the field laboratories in cooler boxes without ice. The samples were processed by membrane filtration using Nissui Compact Dry EC plates (Nissui Pharmaceuticals, Tokyo, Japan) and a modified filtration device (DelAgua, UK), followed by incubation at 35 ± 2 °C for 24 h. All the samples were transported and processed within two hours of collection. If transportation to laboratories within two hours was impossible, the samples were processed on site and incubated later. A detailed protocol for the membrane filtration method and further information on the construction of the field incubators are available in the Supplementary Materials Sections S3 and S4, respectively. Bacteria Enumeration and Quality Control After incubation, E. coli and total coliforms were enumerated on Compact Dry EC according to the manufacturer's instructions. Counts higher than 300 colonies per plate were reported as too numerous to count (TNTC). The results are reported as colony forming units (CFU) of total coliforms or E. coli per 100 mL (CFU/100 mL). To assess the replicability of the method, a duplicate was performed every tenth sample during the baseline and endline data collection. In addition, a random duplicate was taken from one of the sampled sites (tank, tap, or household) during each round of a scheme's regular monitoring. Negative controls (blanks) were processed daily. The statistical analyses of all control measures are found in the Supplementary Materials Section S5. Data Analysis Water quality and survey data were initially compiled and cleaned using Excel 10 (Microsoft, Redmond, WA, USA).
Coding and statistical tests of intervention effects were performed using IBM SPSS (IBM, New York, NY, USA). The microbial concentrations were observed to be exponentially distributed; therefore, bivariate comparisons made use of non-parametric tests (e.g., Mann-Whitney U test and central tendency reported as median CFU/100 mL) or parametric tests (e.g., Student's t-test for independent samples following Log10 transformation of E. coli data and central tendency reported as mean CFU/100 mL). For all Log10 transformations, zero counts were set to 0.5 CFU/100 mL and TNTC values were set to the upper limit of detection (300 CFU/100 mL). Ethics Statement All participating households gave their informed consent before being interviewed. The research was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Eawag ethics committee (protocol 16_09_072017). The study received government approval in Nepal as part of the Helvetas-Nepal IWRM research program. Generalities The average household had 6.5 (SD = 2.3) family members, with 0.8 children who were 5 years old or younger. Virtually all of the interviewed households (99%) were active in agricultural and farming activities. The monthly expenses per household ranged from 1550 to 50,000 Nepalese Rupees (NPR, M = 10,610, SD = 7800), corresponding to 15.5-500 United States Dollars (USD, M = 106, SD = 78) using a rounded average currency exchange rate of 2017 (Exchange rate calculator, http://www.x-rates.com/average/?from=USD&to=NPR&amount=1&year=2017). When asked about their main concern within their community in the baseline survey, households most frequently mentioned water supply services (31% of intervention and 53% of control households). Most of the households interviewed had walls made of wood or mud (>73%), a floor made of mud, sand, or dirt (>87%), and a roof made of metal (>36%) or thatch (>14%).
Concerning sanitation, most of the households reported using an improved private latrine (>89%). There was no electrical grid in the project area, but most households had installed small private solar systems to power lighting and mobile phones (>89%). A further description of the household characteristics is available in the Supplementary Materials Section S6. Hygiene Practices and Reported Illness At the baseline visit, most households reported washing their hands after going to the toilet (>91%), before eating (>93%), and before cooking (>67%). The frequency of soap use during handwashing increased from 43% to 63% between the baseline and endline among the households using the intervention schemes, whereas the frequency decreased from 80% to 60% among the control schemes over the same period. The availability of dedicated handwashing stations with a faucet increased at the intervention schemes from 65% to 83% and stayed constant at the control schemes at 82%. The Supplementary Materials Section S6 contains detailed results of the households' handwashing practices. Most households (96%) did not report having experienced any diarrhea or respiratory illness cases among their family members in the week prior to the survey. A total of six people at the baseline and four people at the endline had experienced a case of diarrhea or respiratory illness, with about half of these cases being children under the age of five. All the households reporting illness during the baseline were using the intervention schemes, whereas at the endline most of the households reporting illnesses (three of the four) were using a control scheme. Perception of Drinking Water Quality and Water Treatment Practices At the baseline visit, most households perceived their drinking water taste and smell as good (>98%), color as clear/good (>92%), and as generally safe to drink (>85%).
By the endline visit, the share of households reporting their drinking water was safe had increased slightly in the intervention schemes (99%) and decreased in the control schemes (79%) (Figure 3). However, households using the intervention schemes that received chlorination as part of the WSP intervention reported greater dissatisfaction with the taste of the water by the endline visit; among the two schemes where chlorination was introduced, chlorine taste and "bad or funny smelling water" were reported by 15% and 14%, respectively, of the 29 households interviewed. Further details on the perceptions of the drinking water quality are available in the Supplementary Materials Section S6. Regarding water treatment practices, at the baseline visit, fewer households in the intervention schemes reported treating their drinking water (70%) compared with the households in the control schemes (85%). The share of the households adopting the treatment practices increased among all households from the baseline to endline visits (Figure 3). However, the observed difference in the treatment coverage was only statistically significant among the intervention schemes (χ²(1, n = 147) = 26.18, p = 0.00), with all the households reporting that they practiced some form of household water treatment by the endline. At the baseline, the households that said their drinking water was not generally safe indicated the main reasons as being an unprotected source (36%) or animal waste (29%). However, more than a quarter (29%) of the households did not know why they thought the water was unsafe. At the endline, half of the households that did not consider their water to be safe reported toilet waste as the major reason. The other major concerns mentioned at the endline included animal waste (38%), an unprotected source (25%), and chemicals (13%). Water Supply Characteristics In Nepal, efforts have been made in recent decades to provide access to an improved drinking water supply for all rural households. In the study area, the designs of the gravity-fed schemes are all similar, with source water directed to one or several reservoir tanks, which are then opened daily for distribution. The study schemes served from 29 to 108 households, or 177-683 people (see the additional scheme characteristics in Table 3). The water services were intermittent, meaning that reservoir tanks were manually opened once or twice per day at a defined hour. The opening times and durations varied throughout the year depending on the source water availability and the time required to fill the reservoir. Usually, the opening duration ranged from one to two hours, with shorter times during the dry season. All the water points within the study communities were functional at the time of the research team's baseline and endline visits. Most households (>80%) reported that their water supply scheme functioned well in general, and most (>85%) reported that they were confident that their water system would still be functional in a year. Most of the interviewed households (>82%) had access to a public tap, and among these households, nearly all (>95%) reported it as their main drinking water source. The average reported time taken for a round trip to the drinking water source, including queuing time, was 10 min (SD = 9). A trained local maintenance worker was responsible for regular maintenance and repairs for each scheme.
Most interviewed households (>87%) reported that they could get help from their local maintenance worker for necessary repairs and that repairs could be completed within a week (>71%). A water tariff system had been implemented prior to the start of this study, with most households (>85%) reporting that user fees were collected to pay for repairs on an as-needed basis. Detailed water supply characteristics are available in the Supplementary Materials Section S6. Water Supply Management The household survey probed the community members involved in the management, operation, and maintenance of their water supply scheme. A total of 44% of the households interviewed at the endline indicated having a family member who was either a member of the water and sanitation users' committee or the WSP task force or had served as a maintenance worker, community health volunteer, or tap stand care taker. The water users' committee met together regularly (most often monthly) to discuss issues related to the water supply scheme. During the construction of the scheme, the water users' committee also met with community members monthly to discuss the project, establish a fund for operation and maintenance, assign maintenance workers, collect contributions toward construction, and eventually, conduct public reviews of the committee's income and expenditures. After construction was completed, the water users' committee generally met with community members only once every year or every second year to perform the aforementioned duties, as well as reform the water users' committee as needed. During the baseline, 60% and 40% of households using the intervention and control schemes, respectively, indicated that they were aware of the water users' committee meetings within their community. 
At the endline, these percentages increased to 79% and 67% for the intervention and control schemes, respectively, suggesting that the study served to raise awareness regarding the water users' committee activities. More detailed results are available in the Supplementary Materials Section S6. Activities within Intervention Schemes Among the households served by the intervention schemes only, additional questions were asked at the endline visit to assess the activities taking place during the WSP implementation. Nearly all (88%) of the households served by the intervention schemes were aware of the WSP strategy, and among these households, about half (54%) had participated in its development and implementation through their membership in the WSP task force, involvement in the regular scheme chlorination, or the installation of the intake filter. A total of 93% of households had heard about the laboratories that had been installed for monthly water quality testing, and 71% said that the results of the microbial analysis had been reported back to them by local NGO staff members or members of the water users' committee. Among the 51 households that had received their test results, 37% indicated that their water quality was contaminated. In response, all of these households had begun to treat their water using a ceramic candle filter (100%) and boiling (16%). When asked about their desire for future water quality testing, 96% of interviewees responded positively and said they would pay up to 500 NPR (or 4.78 USD) per test, with a median value of 50 NPR (or 0.48 USD) per test (Exchange rate calculator, http://www.x-rates.com/average/?from=USD&to=NPR&amount=1&year=2017). Among the nine households served by intervention schemes that had a family member in the water users' committee, all had been informed about the results of the monthly water quality monitoring.
All but one of the households had then discussed these results with the water users' committee, and in about half of the instances (44%), actions to improve the water scheme had been undertaken. Further details on the activities within the intervention schemes are provided in the Supplementary Materials Section S6. Household Stored Water Sample Characteristics Among all the households, most of the water samples collected from the stored water containers were clear at both the baseline (>96%) and the endline (>81%) visits. The share of stored water samples treated by household ceramic filters or boiling among the intervention schemes increased from 63% at the baseline to 100% at the endline (Table 4). By the endline visit, three-quarters of these samples had also received some form of scheme-level treatment, such as chlorination; however, no monitoring of chlorine residual in the stored water was conducted as confirmation. By contrast, the share of stored water samples that had been treated at the household level within control schemes remained relatively constant, from 76% at the baseline to 86% at the endline (see the Supplementary Materials Section S6 for additional details). Baseline Water Quality and Qualitative Sanitary Observations At the study baseline, the microbial water quality was assessed at each of the surveyed households, as well as at all the reservoir tanks and three taps per scheme. All the data were analyzed based on E. coli concentrations unless otherwise stated. Table 5 shows the median and the mean Log10 E. coli contamination at the intervention and control schemes. The Mann-Whitney U tests showed no statistical differences in the E. coli concentrations between sampling points at the intervention and control schemes at the baseline (p ≥ 0.05). The sanitary inspections of the water schemes at the baseline visit indicated high risk scores at all the spring sources due to inadequate protection measures.
The infiltration of contaminated runoff water and open intakes were the main hazards identified. Additionally, for most of the spring sources, the inspections revealed that intake maintenance was not possible without compromising the integrity of the intake covering and the protective gravel and sand layers. Any blockage at the intake would require the removal of these covering layers, thereby risking that the intake would not be properly covered afterwards. Generally, the other structures, such as the reservoir tanks and the distribution pipes, were in good condition. Nevertheless, the tank covers were pinpointed as vulnerabilities, because contamination could enter during rain events or when the covers were opened. Occasional pipe leaks were observed, and the taps were found to be damaged or leaking in some of the schemes. Monthly Monitoring of Intervention Schemes Regular monitoring of the intervention schemes included water quality testing and structured sanitary inspections that provided a calculated risk score. Figure 4 shows the mean risk scores and mean E. coli concentrations at the source, reservoir tank, tap, and household. The microbial water quality was not measured at the sources because no samples could be collected without damaging the integrity of their protective structures. The average risk score was higher at the sources (due to poor protective measures) and the households (due to recontamination vulnerabilities) than at the taps and reservoir tanks.
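The risk scoring used in these sanitary inspections can be sketched as a simple tally: each "yes" answer to one of the form's 10 risk questions adds a point, and the total maps to a coarse risk category. The question keys and category cut-offs below are illustrative assumptions, not the actual WHO-derived form items.

```python
# Sketch of the sanitary inspection risk score: 10 yes/no questions,
# each "yes" flags one hazard and adds one point (0 = lowest risk,
# 10 = highest risk). Question keys are placeholders, not the form items.

def risk_score(answers: dict) -> int:
    """Return the sanitary risk score (0-10) from 10 yes/no answers."""
    if len(answers) != 10:
        raise ValueError("a sanitary inspection form has exactly 10 questions")
    return sum(1 for is_risk in answers.values() if is_risk)

def risk_level(score: int) -> str:
    """Map a 0-10 score to a coarse risk category (illustrative cut-offs)."""
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

# Example: a tap inspection on which 3 of the 10 hazards were observed.
tap_answers = {f"q{i}": (i in (2, 5, 9)) for i in range(1, 11)}
score = risk_score(tap_answers)
print(score, risk_level(score))  # prints: 3 medium
```

Averaging such scores over monitoring rounds gives the per-point mean risk scores reported in Figure 4.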
However, the microbial water quality of household stored water was on average better than at the taps and reservoir tanks. With these results, the sanitary inspections did not accurately predict the water quality test results at each given point. The household water treatment practices appeared to improve the stored water quality, even if the overall sanitary state of the household was poor according to the inspection forms. Rain events during the monitoring day and the preceding day were recorded in the sanitary inspection forms and examined as a potential factor explaining variations in the microbial water quality. However, the results did not reveal any meaningful impact of rain on the observed microbial concentrations. Endline Water Quality and Qualitative Sanitary Observations Water quality at the endline was assessed at the same points as during the baseline ( Table 5). The Mann-Whitney U tests showed a small but significant difference in the E. coli contamination levels of the household stored water samples between the intervention (median = 0 CFU/100 mL) and control schemes (median = 4 CFU/100 mL), U = 1073, p = 0.004. No significant differences in the E. coli contamination of the intervention and control scheme reservoir tanks or taps were observed (p ≥ 0.05). The sanitary inspections during the endline visit showed that all the source intakes of the intervention schemes had been structurally improved. Each had a new intake filter made of fine sand and gravel layered and packed in a net. The intake was also topped with a plastic cover to avoid surface water infiltration. Rain water diversion ditches were constructed around the source intakes to prevent rainwater runoff from entering the intake area. In some cases, additional shields against landslides were installed as added protection. Protection and regeneration of the micro-catchment through the 3R (Recharge, Retention, Reuse) intervention (see Table 1) were observed but only at their early stages. 
It is expected that this plantation work will deliver its full potential as a conservation measure several years after its completion. The intervention schemes were also improved through the replacement or repair of leaking pipes throughout the network and improved maintenance of the public taps. Average Contamination by Scheme and Sampling Point The mean E. coli contamination of household stored water is shown in Figure 5a. These results showed that the contamination during the baseline was on average greater in the intervention schemes than in the control schemes. By the endline visit, the opposite situation was observed, with most intervention schemes having lower contamination levels within the stored water on average, as compared with the control schemes. At most reservoir tanks and taps at the intervention and control schemes, the water quality at the baseline had improved by the endline visit (Figure 5b,c). A particularly high level of fecal contamination was observed in the reservoir tanks and taps of the control scheme number six during the baseline visit. Figure 6 shows the mean E. coli concentrations for the intervention and control schemes at each sampling location. The greatest reductions in the contamination between the baseline and endline measurements are seen at the households and taps among the intervention schemes (see the Supplementary Materials for additional microbial analyses across the sampling points (Section S7); within the chlorinated schemes specifically (Section S8); among the households using and not using ceramic water filters (Section S9); and other detailed microbial results (Section S10)). Statistical Comparisons of Fecal Contamination at the Baseline and Endline Measurements for Intervention and Control Schemes A Student's t-test was used to compare the E. coli contamination levels at the baseline to the endline measurements within the intervention and control schemes (Table 5). The results show a statistically significant difference in mean contamination levels at the households and the taps within the intervention schemes only. The mean Log10 E. coli concentration at the households served by the intervention schemes was 1.25 CFU/100 mL at the baseline and 0.36 CFU/100 mL at the endline. At the intervention scheme taps, a reduction in the mean Log10 E. coli concentration from 1.14 CFU/100 mL to 0.13 CFU/100 mL was observed.
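As a sketch of how such mean Log10 comparisons are computed, the following applies the censoring rules given in the Data Analysis section (zero counts set to 0.5 CFU/100 mL, TNTC set to the 300 CFU/100 mL detection limit), transforms the counts, and evaluates a pooled two-sample Student's t statistic. The function names and example counts are illustrative, not the study's data or SPSS output.

```python
import math

TNTC = "TNTC"          # too numerous to count (> 300 colonies per plate)
UPPER_LIMIT = 300.0    # upper limit of detection, CFU/100 mL
ZERO_SUB = 0.5         # value substituted for zero counts before the log

def log10_cfu(count):
    """Log10-transform one E. coli count using the study's censoring rules."""
    if count == TNTC:
        value = UPPER_LIMIT
    elif count == 0:
        value = ZERO_SUB
    else:
        value = float(count)
    return math.log10(value)

def t_statistic(a, b):
    """Student's t for two independent samples (pooled variance),
    returning the t value and the degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

# Illustrative raw counts (CFU/100 mL) for one group of households:
baseline = [log10_cfu(c) for c in [12, 0, TNTC, 25, 4]]
endline = [log10_cfu(c) for c in [2, 0, 1, 3, 0]]
t, df = t_statistic(baseline, endline)
print(round(t, 2), df)
```

A positive t here indicates higher mean Log10 contamination at the baseline than at the endline; significance would then be read off the t distribution with the returned degrees of freedom.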
No significant difference in the average contamination levels between the baseline and the endline was observed at the intervention reservoir tanks or at any of the sampling points in the control schemes (see the Supplementary Materials Section S10 for further discussion and statistical analysis). When examining whether the samples met the WHO guidelines for drinking water safety (<1 CFU E. coli/100 mL), the results show that the share of the household stored water samples from the intervention schemes with no detectable E. coli increased significantly from 17% at the baseline to 53% at the endline (χ²(1, n = 147) = 24.01, p = 0.00). Also significant was the increase in the tap samples from the intervention schemes that met the WHO guidelines, from 7% at the baseline to 50% at the endline (χ²(1, n = 28) = 6.30, p = 0.03), with all the tap samples at the endline having less than 10 CFU E. coli/100 mL. Other sampling points did not yield meaningful changes in the share of samples meeting the WHO criteria (see Table A2 for detailed results and the Supplementary Materials Section S11 for temporal representations of the baseline, endline, and regular monitoring data). Difference-in-Differences Analysis A difference-in-differences analysis was used to compare the household water quality data from the intervention and control schemes at the baseline and the endline. Estimating the natural change at the control sites and subtracting it from the intervention sites indicated that the effect of the interventions on the household water quality caused a decrease of the mean Log10 concentration of E. coli of 0.681 CFU/100 mL (SE = 0.26, n = 235, t = −2.614, p = 0.01) among the intervention schemes. The difference-in-differences analysis for the water quality at the reservoir tanks was +0.168 Log10 CFU/100 mL and at the taps was −0.13 Log10 CFU/100 mL.
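The difference-in-differences estimator behind these figures subtracts the natural change observed at the control sites from the change at the intervention sites. In the minimal sketch below, the intervention household means (1.25 and 0.36 Log10 CFU/100 mL) are taken from the text, while the control means are illustrative placeholders chosen so the result mirrors the reported −0.68 Log10 effect.

```python
def diff_in_diff(int_base, int_end, ctl_base, ctl_end):
    """Difference-in-differences estimate: the change at the intervention
    sites minus the natural change estimated from the control sites.
    Inputs are mean Log10 E. coli concentrations (CFU/100 mL)."""
    return (int_end - int_base) - (ctl_end - ctl_base)

# Intervention means are from the text; control means (0.85, 0.64) are
# hypothetical. Intervention households improve by 0.89 Log10 while
# controls improve by 0.21 Log10, leaving a -0.68 Log10 intervention effect.
dd = diff_in_diff(int_base=1.25, int_end=0.36, ctl_base=0.85, ctl_end=0.64)
print(round(dd, 2))  # -0.68
```

A negative value indicates an intervention-attributable reduction in contamination beyond the secular trend seen at the control schemes.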
This is interpreted as meaning that the interventions were responsible for an increase in the contamination at the reservoir tanks and a decrease at the taps (reservoir tanks: DD = 0.168, SE = 0.512, t = 0.329, p = 0.744, n = 46; taps: DD = −0.13, SE = 0.464, t = −0.281, p = 0.78, n = 46). This unexpected finding could be explained by the fact that the control scheme number six showed exceptionally high contamination at the baseline as compared with all the other schemes (Figure 5b), resulting in a large improvement of the mean water quality at the control schemes' reservoir tanks. The difference-in-differences analysis at the taps is aligned with the results presented above and indicates a statistically significant improvement in the water quality due to the interventions. Study Novelty and Insights While past studies have investigated water safety interventions in rural areas of Nepal, to the authors' knowledge, no study to date has reported outcomes based on comparison to a set of control communities. The aims of this research were to describe an approach for improving the drinking water safety that is adapted to this unique setting, as well as to rigorously evaluate whether this strategy was capable of achieving measurable improvements in the water quality. The findings reported here will be of interest to government agencies, water program managers, system operators, and program managers throughout Nepal and are applicable to other remote rural areas dependent on gravity-fed piped supplies. This study revealed several insights relevant to the rural water sector. First, we observed universal uptake of the household water treatment (ceramic water filters and/or boiling) within the intervention communities. This finding suggests that the suite of water safety interventions delivered through the WSP, including the intensive WASH promotion activities, were very effective in motivating behavior change over an eight-month period. 
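The difference-in-differences estimator used above is simply the change in the intervention group minus the change in the control group. A minimal sketch; the intervention household means 1.25 and 0.36 are quoted from the text, while the control means are invented for the example:

```python
# Minimal difference-in-differences sketch: the natural change observed in
# the control group is subtracted from the change in the intervention group.
# Intervention means (1.25 -> 0.36 Log10 CFU/100 mL) are from the text;
# the control means are hypothetical.
def diff_in_diff(interv_base, interv_end, control_base, control_end):
    return (interv_end - interv_base) - (control_end - control_base)


dd = diff_in_diff(1.25, 0.36, 1.00, 0.80)
print(f"DD estimate = {dd:+.2f} Log10 CFU/100 mL")  # negative => net improvement
```

A standard-error and p-value for the estimate, as reported in the text, would normally come from an interaction term in a regression rather than from the four means alone.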
The WASH promotion activities included the communication of the stored water quality results to most households following testing. The survey data revealed that all the households who received the results indicating contamination of their stored water subsequently adopted treatment practices. Moreover, nearly all the survey respondents said that they would be interested in further water quality testing at an average price of 0.70 USD per test. The high uptake of the household water treatment and increased demand for water quality testing among the households could also be attributed to a generally high level of awareness and involvement among the community members. For example, 88% of survey respondents said they knew about the WSP activities within their community. Over half of these households had served as an official member of the WSP task force or participated in infrastructural improvements, such as the installation of intake filters or the implementation of chlorine dosing. Taken together, these results suggest the broad level of engagement by the households in the planning and implementation of their water safety interventions contributed to the successful outcome observed over an eight-month period. More generally, the study findings suggest a dynamic interaction between the community members' participation in the water supply stewardship, the delivery of targeted water quality information, and the demand for safe drinking water. A second insight from this research is that the sanitary inspection risk scores did not accurately predict the microbial water quality at different points across the system. According to the sanitary inspection metric used, the risks were on average greatest at the sources and households and lowest at the reservoir tanks and taps. These findings were driven by the poor physical protection of the sources and factors indicating the recontamination potential of the stored drinking water at the household level. 
Surprisingly, however, the water quality measurements revealed the opposite trend; the fecal contamination of the household stored water was on average lower than at the collection taps and reservoir tanks (it was not possible to measure the microbial water quality at the source). These findings may be explained by the uptake and consistent use of ceramic water filters by the households following enrollment in the study, thereby improving the water quality even if other measures of the household's sanitary state remained poor according to the inspection form. Finally, statistical comparisons of the microbial water quality revealed improvements at all points monitored for both the intervention and control schemes. However, the improvements observed in the average E. coli concentrations from the baseline to the endline were only statistically significant for the taps and the household stored water containers in the intervention group (and not so in the control group). Examining the microbial data at the scheme level, we found that the household stored water quality consistently improved from the baseline to the endline for all the intervention schemes, whereas an inconsistent trend was observed for the three control schemes. In addition, the intervention communities showed universal adoption of household water treatments by the endline, resulting in over half of the households having stored water meeting the WHO guidelines for water safety (0 CFU E. coli/100 mL). The reduction in the fecal contamination among the intervention taps is notable as well, with half of the taps meeting the WHO guidelines (up from only 7% at the baseline) and all the taps delivering water with less than 10 CFU E. coli/100 mL. These results, while promising, do not indicate perfect compliance with international water safety guidelines for all the intervention schemes. 
Thus, the water safety interventions applied may be considered an effective and viable interim solution in efforts to eventually achieve universal access to safely managed drinking water in rural settings. Study Limitations Some features of this study design limit our ability to generalize the findings beyond the sampled population. Most notable is the generally high level of water service experienced within both the intervention and control schemes. Nine out of every 10 survey respondents reported at the baseline that their main water source tasted and smelled good and was generally safe to drink. Moreover, all the water points were functioning at the time of the research team's visit, and most survey respondents believed their supply would likely continue to function well over the coming year. This generally high level of satisfaction and confidence among water users may be unique to the program setting and is likely a driving factor in the households' willingness to pay for water services and engage in stewardship of the infrastructure over time. Another issue is the enrollment of only three control schemes as compared with the five intervention schemes. This research design limitation was driven by both resource constraints and ethical considerations; within a set budget, there was the need to ensure the potential benefits of the study (water supply upgrades) outweighed the potential costs (lost time due to participation). As a result, the sample sizes in the control group were roughly three-fifths the size of those in the intervention group, which underpowered comparisons of small effect sizes. Finally, the follow-up period for this study was eight months, which only allows for preliminary conclusions to be made regarding the sustainability of the interventions examined. Future research should ideally monitor the outcomes reported here over a longer period (at least one year and ideally up to five years). 
This is especially critical for understanding the sustainability of behavioral measures known to decline over time, such as household ceramic filter use. Recommendations for Water Sector Policy and Practice There are several recommendations for water sector practitioners arising from this research. First, these study results indicate that over the short term (eight months), the applied water safety interventions were highly effective in motivating uptake and use of household water treatments. Such promotional activities were tailored to the needs of the households in rural Nepal and were integrated into a broader WSP framework. To replicate this success, program managers should strive for a comprehensive approach that merges household-centered WASH promotional activities with system-scale water safety efforts. Second, sanitary inspection scores did not reliably predict the microbial concentrations at various sampling points and are therefore insufficient for assessing actual health risks due to drinking water consumption. Based on these findings, standardized sanitary inspection packages should be combined with regular water quality testing for a comprehensive risk management approach. Finally, the applied interventions, while effectively improving water quality at the taps and in the household stored water containers, did not achieve perfect compliance with the international guidelines over the eight-month study period, in part, because interventions such as micro-catchment restoration required more time to deliver their intended benefits. Future research should therefore explore additional treatment options, for example, disinfection by automated chlorine dosing or ultraviolet treatment devices. Conclusions This study characterized and assessed a risk-based strategy for improving the drinking water quality of gravity-fed piped schemes in the hilly regions of Mid-Western Nepal. 
This research was motivated by the need to accelerate progress towards achieving universal access to safely managed drinking water in similar contexts, where effective treatment and regular monitoring of piped supplies is often challenged by geography, limited resources, and unreliable supply chains. The results showed that simplified field laboratories equipped for microbial testing can inform ongoing decision-making regarding targeted system upgrades and mitigation measures. These interventions led to positive changes in the drinking water quality at the taps and within the households over eight months of implementation. Of particular note was the achievement of 100% coverage of household water treatments with ceramic filters and/or boiling across all intervention schemes. In addition, the results showed high levels of involvement by the households in planning and implementing the WSP within their community, especially through regular engagement with the local water and sanitation users' committee. The study also revealed the inconsistent predictability of microbial contamination using standard sanitary inspection forms alone. This finding suggests that such forms, while useful for identifying potential hazards, should be combined with regular water quality testing for a comprehensive risk management approach for piped schemes. By the study endline visit, half of the samples collected from households' stored water containers and taps were free of fecal contamination-a significant improvement from the baseline visit when only 17% and 7% of the households and the taps, respectively, met the international guidelines for microbial safety. Although not all the water points sampled met the stated guidelines, the applied strategy nevertheless proved promising as an intermediate step towards achieving universal access to safe drinking water in rural areas. Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/15/8/1616/ s1. 
Section S1: The Regular Monitoring Strategy; Section S2: Water Scheme Upgrades; Section S3: Membrane Filtration Protocol; Section S4: Construction of Field Incubators; Section S5: Water Quality Tests Validity Measurements; Section S6: Detailed Household Survey Results; Section S7: Comparison of Individual Sampling Points; Section S8: Details of Chlorinated Schemes; Section S9: Comparison between Use and Non-Use of Ceramic Candle Filters; Section S10: Detailed Microbial Results; and Section S11: Temporal View of Baseline, Regular Monitoring, and Endline Microbial Data. Scheme-level water quality data are also provided as a supplementary file. Funding: This research was supported by the Swiss Agency for Development Cooperation and the REACH programme funded by UK Aid from the UK Department for International Development (DFID) for the benefit of developing countries (Aries Code 201880). The views expressed and information contained in this manuscript are not necessarily those of or endorsed by these agencies, which can accept no responsibility for such views or information or for any reliance placed upon them. Acknowledgments: The authors thank the community members in the study area who served as laboratory technicians, water samplers, WSP task force members, and household survey participants. Jiban Singh, Manuel Holzer, the Helvetas staff, and the local enumerator teams provided excellent field support during the training and data collection. Many thanks to Arnt Diener for contributing to the REACH grant submission and Tim Julian for constructive comments on the results section of the manuscript. Conflicts of Interest: The authors declare no conflict of interest. Table A2. Baseline and endline water quality at the intervention and control schemes for the households, reservoir tanks, and taps: percentages at guidelines, low risk, and high risk.
Dynamics of relaxor ferroelectrics

We study a dynamic model of relaxor ferroelectrics based on the spherical random-bond---random-field model and the Langevin equations of motion. The solution to these equations is obtained in the long-time limit where the system reaches an equilibrium state in the presence of random local electric fields. The complex dynamic linear and third-order nonlinear susceptibilities $\chi_1(\omega)$ and $\chi_3(\omega)$, respectively, are calculated as functions of frequency and temperature. In analogy with the static case, the dynamic model predicts a narrow frequency dependent peak in $\chi_3(T,\omega)$, which mimics a transition into a glass-like state.

I. INTRODUCTION

in a typical dielectric relaxation experiment. In Sec. II we introduce the uniaxial SRBRF model Hamiltonian in the representation of eigenstates of the random interaction and write down the Langevin equations of motion. The asymptotic solution is studied in Sec. III, where the static linear and nonlinear susceptibilities are derived. In Sec. IV the dynamic linear response is given, and in Sec. V the corresponding results for the third-order nonlinear response are derived. Finally, in Sec. VI we present our conclusions.

II. DYNAMIC SRBRF MODEL

In general, the polarization of the $i$-th polar cluster, $i = 1, 2, \ldots, N$, is a three-component ($n = 3$) vector $\mathbf{S}_i = (S_{ix}, S_{iy}, S_{iz})$, its length being restricted solely by the spherical condition

$\sum_i (\mathbf{S}_i)^2 = 3N . \qquad (1)$

In the present work we will discuss the simpler uniaxial ($n = 1$) case. The SRBRF model Hamiltonian is thus

$H = -\tfrac{1}{2}\sum_{ij} J_{ij}\, S_i S_j - \sum_i h_i S_i - g E \sum_i S_i , \qquad (2)$

where $J_{ij}$ are the randomly frustrated intercluster interactions, $h_i$ local random electric fields, $E$ an applied uniform electric field, and $g$ the appropriate dipole moment [7]. As usual, $J_{ij}$ is assumed to be infinitely ranged and distributed according to Gaussian statistics with average value $J_0/N$ and cumulant variance $J^2/N$. 
The Gaussian random fields $h_i$ are characterized by the random averages

$[h_i]_{av} = 0 , \qquad [h_i h_j]_{av} = \Delta\,\delta_{ij} . \qquad (3)$

The uniaxial SRBRF model (2) has potential applicability to uniaxial relaxors such as Sr$_{1-x}$Ba$_x$Nb$_2$O$_6$ (SBN). The present results can be, however, generalized to the isotropic $n = 3$ case as long as there is no mixing of the $x, y, z$ components [7]. The Langevin equations of motion for the variables $S_i(t)$ are written as

$\tau\,\dot{S}_i(t) = -z(t)\,S_i(t) + \beta\sum_j J_{ij} S_j(t) + \beta h_i + \beta g E(t) + \xi_i(t) . \qquad (4)$

Here $\tau$ is the characteristic relaxation time for the reorientation of polar clusters. Eq. (4) implies that $\tau$ is site independent; however, some variation of $\tau$ across the system should in principle not be excluded, resulting in a distribution of relaxation times [2,6]. The function $z(t)$ plays the role of a Lagrange multiplier enforcing the spherical condition (1) at all times [12]. The stochastic Langevin forces $\xi_i(t)$ ensure the proper equilibrium distribution and are determined by their ensemble averages

$\langle \xi_i(t)\,\xi_j(t')\rangle_{av} = 2\tau\,\delta_{ij}\,\delta(t - t') . \qquad (5)$

Following the theory of spherical spin glasses we now transform to the representation of eigenstates $\psi_\lambda(k)$ and eigenvalues $J_\lambda$ of the random matrix $J_{ij}$ [14,12,13]. This is done in two steps [15]: First, one introduces "spin wave" states $S_k = N^{-1/2}\sum_i \exp(ikR_i)\,S_i$; next, these are expanded in normal modes. The transformed equation of motion (4) becomes explicitly Here $\Psi_\lambda(0) = N\psi_\lambda(0)$ and we have rescaled the time to a new dimensionless variable $t \to t/\tau$. Assuming a field $E(t)$ applied at $t = 0$ and introducing the integrating factor we obtain the solution The correlation function must satisfy the equal-time relation $C(t, t) = 1$ at all times in view of the spherical condition (1). From Eqs. (9)-(10) with the aid of Eq. (3) we thus find This is an implicit equation for the Lagrange multiplier $z(t)$. The two types of averages are defined as where $\rho_0(J_\lambda)$ and $\rho(J_\lambda)$ are the densities of eigenvalues in the $k = 0$ and $k \neq 0$ sectors of the spectrum, respectively. The eigenvalues $J_\lambda$ have a continuous spectrum $-2J < J_\lambda < 2J$. 
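A crude numerical sketch of spherical Langevin dynamics can illustrate how the spherical condition is maintained along a trajectory. The drift term below is the standard spherical-model form assumed from the text (with $J_0 = 0$, $\Delta = 0$, and $E = 0$ for simplicity), and the Lagrange multiplier is replaced by an explicit renormalization after each Euler-Maruyama step; all parameter values are illustrative.

```python
# Euler-Maruyama sketch of spherical Langevin dynamics (uniaxial n = 1).
# The Lagrange multiplier z(t) is replaced by renormalizing S after every
# step so that sum_i S_i^2 = N holds exactly. The drift is the assumed
# standard spherical-model form with J0 = 0, Delta = 0, E = 0.
import math
import random

random.seed(0)
N, J, beta, dt, steps = 50, 1.0, 1.0, 0.01, 200

# Symmetric Gaussian random bonds with variance J^2/N.
Jmat = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        Jmat[i][j] = Jmat[j][i] = random.gauss(0.0, J / math.sqrt(N))

S = [random.gauss(0.0, 1.0) for _ in range(N)]


def normalize(S):
    """Enforce the spherical condition sum_i S_i^2 = N."""
    c = math.sqrt(N / sum(s * s for s in S))
    return [s * c for s in S]


S = normalize(S)
for _ in range(steps):
    drift = [beta * sum(Jmat[i][j] * S[j] for j in range(N)) for i in range(N)]
    # Noise variance 2*dt per step matches <xi xi'> = 2 tau delta (tau = 1).
    noise = [random.gauss(0.0, math.sqrt(2 * dt)) for _ in range(N)]
    S = [S[i] + dt * drift[i] + noise[i] for i in range(N)]
    S = normalize(S)

print(f"spherical constraint: sum S_i^2 / N = {sum(s * s for s in S) / N:.6f}")
```

The renormalization trick is only a stand-in for the exact Lagrange-multiplier dynamics, but it keeps the constraint satisfied to machine precision at every step.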
If $|J_0| > J$, there is also a discrete eigenvalue at $J_m = J_0 + J^2/J_0$ [14]. Here we will only discuss the case $|J_0| < J$. The $k \neq 0$ density of states is given by the Wigner semicircle law [14,15]

$\rho(J_\lambda) = \frac{1}{2\pi J^2}\sqrt{4J^2 - J_\lambda^2} . $

The $k = 0$ sector, on the other hand, has the density $\rho_0(J_\lambda)$ [15]. This density of states has a statistical weight $O(1/N)$ and is thus relevant only in averages containing factors of the type $\Psi_\lambda(0)^2 \propto O(N)$. The dielectric polarization of the system can be expressed in terms of the solution (9). As shown by Cugliandolo and Dean [13], for times larger than a limiting time $t_c$ the system in which $\Delta = 0$ will always reach an equilibrium state and will thus be characterized by equilibrium dynamics. All information about the initial state $S_\lambda(0)$ is lost for $t \gg t_c$, i.e., the first term in Eq. (16) becomes irrelevant. In the present case, $t_c$ is estimated as $t_c \approx 2\tau J T/\Delta$. Typically, the asymptotic regime $t \gg t_c$ is explored in a dielectric relaxation experiment. In the following, we will limit ourselves to this regime. Also, for simplicity we will henceforth set $g = 1$.

III. STATIC DIELECTRIC RESPONSE

We first consider the case of a static electric field $E(t) = E$ applied at $t = 0$. At asymptotic times $t/\tau \gg 1$ the system reaches equilibrium and the Lagrange multiplier $z(t)$ tends to a constant value $z$. Thus the function (8) becomes and we can evaluate the integrals in Eqs. (11) and (16). Assuming that $2z > \beta J_\lambda$ for all $\lambda$ (to be justified later) we derive the equation for $z$: The static linear susceptibility $\chi_1 = (\partial P/\partial E)_{E=0}$ is derived from Eqs. (16) and (17): The averages in Eqs. (18) and (19) can be expressed in terms of the generalized averages obtained by adding an imaginary generating field $iy$ to the variable $g_\lambda$, namely, These averages can be evaluated with the aid of Eqs. (12)-(15) for $n = 0$, differentiating $n$ times with respect to $iy$, and setting $y = 0$. For example, from Eqs. (19) and (20) with $n = 0$ and $y = 0$ we find: where $r \equiv z^2 - \beta^2 J^2$. 
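The statement that the eigenvalues fill the interval $-2J < J_\lambda < 2J$ can be checked numerically by diagonalizing one realization of the random matrix; the normalization below (symmetric Gaussian entries of variance $J^2/N$, with $J_0 = 0$ so that no discrete eigenvalue appears) is an assumption consistent with the text.

```python
# Numerical check that a symmetric Gaussian random matrix with entry
# variance J^2/N has its spectrum confined to roughly (-2J, 2J), as
# stated by the Wigner semicircle law (J0 = 0 here).
import numpy as np

rng = np.random.default_rng(7)
N, J = 600, 1.0
G = rng.normal(size=(N, N))
Jmat = (G + G.T) / np.sqrt(2) * (J / np.sqrt(N))  # off-diagonal variance J^2/N

eigs = np.linalg.eigvalsh(Jmat)
print(f"spectrum in [{eigs.min():.2f}, {eigs.max():.2f}] (expected ~[-2J, 2J])")
```

For finite $N$ the spectral edges fluctuate by $O(N^{-2/3})$, so the extreme eigenvalues land slightly inside or outside $\pm 2J$.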
The $n = 1$ average is given by where $z(y) \equiv z - iy/2$, $r(y) \equiv z(y)^2 - \beta^2 J^2$, and $D(y) \equiv \beta^2(J^2 + J_0^2) - 2\beta J_0 z(y)$. The above equation for $z$, Eq. (18), becomes in this notation: where the averages $\chi_1(0)_0$ are obtained by setting $J_0 = 0$ in Eqs. (21) and (22), respectively, and $\chi_1^{[1]}(0)$ is given by Eq. (22) with $y = 0$. We will also need the $n = 2$ average A numerical solution $z(T)$ of Eq. (23) in zero field ($E = 0$) can be found at all temperatures and is independent of $J_0$ as long as $|J_0| < J$. An example is shown in Fig. 1 for $\Delta/J^2 = 0.001$. The inset shows that $z - \beta J$ is always positive, and since $2J$ is the largest eigenvalue of $J_\lambda$ one can see that indeed $2z > \beta J_\lambda$ for all $\lambda$, as assumed earlier. When both $E \neq 0$ and $J_0 \neq 0$, there are in general two complex solutions for $z(E, T)$ and the present theory is not applicable. In the following we will only consider the cases in which a real solution $z(T)$ exists and has a real second derivative $z'' = d^2z/dE^2$ at $E = 0$. A numerical evaluation shows that the expression (21) for the static linear susceptibility fully agrees with the result obtained by means of replica theory in Ref. [7].

IV. DYNAMIC RESPONSE

We now consider the case of an oscillating electric field $E(t) = E_0\cos(\omega t)$. This is inserted into Eq. (16). At asymptotic times $t \gg t_c$ the response can be written by analogy with Eq. (25) as [16] where $P_\omega$ and $P_{3\omega}$ are the amplitudes of the first and third harmonic response, respectively, which are given by Here we have introduced the linear dynamic response $\chi_{1,0}(\omega)$, the third-order nonlinear responses $\chi_{1,1}(\omega)$ and $\chi_{3,0}(\omega)$, etc. We will focus on the first harmonic linear response $\chi_{1,0}(\omega)$, which is equivalent to the dynamic linear susceptibility $\chi_1(\omega) = \chi_{1,0}(\omega)$, and on the third-order nonlinear response $\chi_{3,0}(\omega)$. The latter is typically measured by monitoring the third harmonic component of $P(t)$ at small amplitudes of the field $E_0$ [19]. 
In order to ensure the proper static limit $\omega \to 0$ we will define the third-order nonlinear dynamic susceptibility as $\chi_3(\omega) = -\chi_{3,0}(\omega)$. From Eqs. (28)-(30) we thus find In the asymptotic regime, the function $\phi_\lambda(t)$ in Eq. (16) behaves as, with $z$ representing the solution of Eq. (18). The first part of the response (16), which will be proportional to $\sim E_0\exp(-i\omega t)$, is now given by

A. Linear dynamic susceptibility

The part of $P_1(\omega)$ which is linear in $E_0$ is trivially obtained from Eq. (34) by noting that $\varphi(t) = 0$ for $E_0 = 0$. We can thus evaluate the integral and using Eq. (31) we find Comparing with Eqs. (19) and (20) we realize that the averages of the above type can be evaluated with the aid of Eq. (20), in which we set $y = \omega$ and $n = 0$, yielding (with $\tau$ restored) For $\omega \to 0$ this obviously reduces to the static susceptibility (21). The temperature behavior of $\chi_1(\omega)$ will crucially depend on the temperature variation of the relaxation time $\tau(T)$. The SRBRF model (2) and the equations of motion (4) contain no information about $\tau(T)$. It has been found empirically [17,2,6] that some of the properties of relaxors can be described by assuming a Vogel-Fulcher (VF) relationship for $\tau$, namely,

$\tau = \tau_0\,\exp[U/(T - T_0)] , \qquad (37)$

where $T_0$ is the VF temperature. This expression is valid for $T > T_0$ and would lead to a divergence of $\tau$ for $T \to T_0$. There is no a priori relation between $T_0$ and the parameters of the SRBRF model. A similar situation occurs in Ising dipolar glasses, where a probability distribution of relaxation times $g(\ln\tau)$ has been used in combination with an empirical Debye-type response [18]. With $\tau$ lying in the range $\tau_{min} < \tau < \tau_{max}$, the VF temperature $T_0$ has been identified with the freezing temperature $T_f$. On the other hand, $\tau_{min}$ has been fitted to an Arrhenius-type expression $\tau_{min} \propto \exp(E/T)$. The same approach was found to be applicable to relaxors as well [19]. 
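The Vogel-Fulcher relationship invoked above is assumed here to take its standard form $\tau = \tau_0\exp[U/(T - T_0)]$; a small sketch showing the rapid growth of $\tau$ as $T$ approaches $T_0$ from above ($\tau_0$, $U$, and $T_0$ are illustrative, not fitted to any material):

```python
# Sketch of the Vogel-Fulcher law tau(T) = tau0 * exp[U/(T - T0)].
# It is defined only for T > T0 and diverges as T -> T0 from above.
# Parameter values are illustrative, not fitted to any relaxor.
import math


def tau_vf(T, tau0=1.0e-12, U=1.0, T0=1.0):
    """Vogel-Fulcher relaxation time; defined only above the VF temperature."""
    if T <= T0:
        raise ValueError("Vogel-Fulcher law is defined only for T > T0")
    return tau0 * math.exp(U / (T - T0))


for T in (2.0, 1.5, 1.2, 1.05):
    print(f"T = {T:.2f}: tau = {tau_vf(T):.3e}")  # grows rapidly toward T0
```

In an Arrhenius law the exponent $U/T$ stays finite at all temperatures; the $T - T_0$ denominator is what produces the (formal) divergence at $T_0$ discussed in the text.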
An alternative approach is based on the master equation for the reorientation of cluster polarization assuming a VF relaxation time of the type (37), where the barrier heights $U$ are distributed according to a Gaussian probability function [2,6]. Such an approach was found to be applicable to PMN and PST in the region $T > T_0$. In general, we can thus introduce the average dynamic susceptibility

$\bar{\chi}_1(\omega) = \int d(\ln\tau)\, g(\ln\tau)\, \chi_1(\omega, \tau) , \qquad (38)$

where the probability distribution of relaxation times $g(\ln\tau)$ is physically justified by the fact that relaxors are inherently inhomogeneous systems due to compositional disorder. Thus one may imagine, for example, that the relaxor system consists of a set of macroscopic regions, which are formally characterized by the same microscopic equation of motion, but differ in the value of the parameter $\tau$. One encounters serious difficulties in attempting to describe the dynamic response at $T < T_0$. Formally one could assume that $\tau \to \infty$ for $T \leq T_0$, but this will lead to a zero value of $\chi_1(\omega)$ at all temperatures $T \leq T_0$. We can single out the following representative cases: (i) a single VF relaxation time (37); (ii) a nonsingular distribution of barrier heights $g(U)$; (iii) a distribution of relaxation times $g(\ln\tau)$ such that its normalization $\int_{\tau_{min}}^{\tau_{max}} d\tau\, g(\ln\tau)/\tau$ diverges as $\tau_{max} \to \infty$. The first case is illustrated in Fig. 2, where we show the calculated real and imaginary parts of $\chi_1(T,\omega)$ for several values of frequency assuming a single VF relaxation time (37). As in Fig. 1 we assume $J_0 = 0.9J$ and $\Delta/J^2 = 0.001$, as well as $T_0 = J$. Such behavior is incompatible with experiments, which generally show a smooth decrease of $\chi_1'(T,\omega)$ and $\chi_1''(T,\omega)$ across the region where $T_0$ is expected to be located. A more realistic description can be obtained, for example, by assuming a distribution $w(T_0)$ of VF temperatures $T_0$, where $T_0$ is allowed to vary in the range $0 < T_0 < T_0^{max}$. Using the relation $d(\ln\tau)\,g(\ln\tau) = dT_0\,w(T_0)$ in Eq. 
(38) and choosing a linear distribution w(T 0 ) = 2(1 − T 0 /J)/J with T max 0 = J, we obtain the temperature dependence of the linear susceptibility shown in Fig. 3. Here we used the same set of parameters as in Fig. 2. In contrast to the single VF temperature case, the above distribution leads to nonzero values of χ 1 (ω) at all temperatures. The shape of the real and imaginary part of χ 1 (ω) is in qualitative agreement with the observed relaxation spectra in PMN [19] and PLZT [20]. It should be noted, however, that the above result for the linear susceptibility contains only the contribution of polar clusters. Other contributions may exist-for example, that of optic phonons-which are not expected to show any anomalies near T f . In general, such contributions can be written as a sum of Debye-like terms, with the possibility of an average over the corresponding relaxation times. At present, the problem of a realistic relaxation time distribution in relaxors, which would be appropriate at all temperatures, has not yet been resolved. B. Zero-field-cooled susceptibility In analogy with Ising spin glasses [21] and dipolar glasses [22] one can introduce an effective relaxation time for the low-frequency response Returning to Eq. (31) and using the above definition of χ 1 (y) from Eq. (22) in which we set y = ωτ , we obtain the result It will be shown later that the real part of the nonlinear susceptibility χ 3 (ω) is also proportional to χ [1] 1 (ω) and in the static limit shows a sharp peak at the "freezing" temperature T f ≈ J for ∆ ≪ J 2 , while it actually diverges if ∆ = 0. Since χ 1 (ω) is a well behaved function of temperature, it follows that τ ef f increases as ∼ (T − J) −1 on approaching T f , however, it remains finite at T f . Thus the behavior of τ ef f mimics the freezing transition in this case. A true freezing transition at T f = T 0 can be described by assuming a VF temperature dependence of τ , which is then transferred to τ ef f via Eq. (40). 
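The averaging over the linear distribution of VF temperatures introduced above, w(T0) = 2(1 − T0/J)/J on 0 < T0 < J, can be sketched numerically. The Debye form 1/(1 + iωτ) below is only an illustrative stand-in for the model susceptibility χ1(ω, τ), and all parameter values are made up for the example:

```python
# Sketch of averaging a frequency response over the linear distribution of
# Vogel-Fulcher temperatures w(T0) = 2(1 - T0/J)/J on 0 < T0 < J. The
# Debye form is an illustrative stand-in for the model chi_1(omega, tau);
# omega, tau0, U, and T are hypothetical values.
import math

J = 1.0
T = 1.2                            # temperature above the largest T0 = J
omega, tau0, U = 1.0e3, 1.0e-9, 1.0


def w(T0):
    """Linear distribution of VF temperatures, normalized on (0, J)."""
    return 2.0 * (1.0 - T0 / J) / J


def chi(T0):
    """Debye response with a Vogel-Fulcher relaxation time tau(T; T0)."""
    tau = tau0 * math.exp(U / (T - T0))
    return 1.0 / (1.0 + 1j * omega * tau)


def trapz(f, a, b, n=2000):
    """Plain trapezoidal quadrature (works for complex-valued f)."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))


norm = trapz(w, 0.0, J)                          # should be 1 (w is normalized)
avg = trapz(lambda t0: w(t0) * chi(t0), 0.0, J)  # averaged response at omega
print(f"normalization = {norm:.4f}, averaged chi = {avg.real:.4f}{avg.imag:+.2e}j")
```

Because the weights are positive and normalized and each Debye term has modulus below one, the averaged response also stays below one in modulus; the spread of relaxation times broadens the dispersion, which is the qualitative effect the text attributes to the distribution g(ln τ).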
When τ ef f is large, the system will only reach equilibrium at times t ≫ τ ef f , which may become very long. In measuring the static susceptibility χ 1 one should distinguish between the field-cooled and zero-field-cooled susceptibility, χ F C 1 and χ ZF C 1 , respectively. Here χ F C 1 is given by Eq. (19) and corresponds to an experiment carried out on a time scale t ≫ τ ef f . On the other hand, χ ZF C 1 can be obtained by turning on a field E(t 1 ) = E at time t 1 = 0 and measuring the response P (t) at time t > t 1 as described by Eq. (16). Thus we can write and using Eq. (17) we find For t → ∞ this reduces to the previous result (19), which corresponds to χ F C 1 . The difference between the two susceptibilities δχ The value of the integral can be estimated by first using Eqs. (13) and (15) and expanding the integrand in powers of J 0 /J [23]. To leading order the result is independent of the parameter J 0 and shows that for ∆ ≪ J 2 and T ≤ J one has a power law behavior δχ 1 (t) ∼ (t/τ ) −1/2 , implying a slow decay and a large difference between the two susceptibilities, which has been observed experimentally [19]. On the other hand, for T > J the asymptotic behavior is a combination of power law and exponential, i.e., δχ 1 (t) ∼ (t/τ ) −3/2 exp[−2(z − βJ)t/τ ]. Thus in this regime the difference decays much faster and the two susceptibilities become indistinguishable on a typical experimental time scale. V. NONLINEAR DIELECTRIC SUSCEPTIBILITY To calculate the third order partial derivative in Eq. (32) we return to Eq. (34), in which ϕ(t) is now a function of E 0 . In general, ϕ(t) will be a sum of terms, which are even powers of E 0 e ±iωt . We will focus on the second-order term ∼ E 2 0 e −2iωt . Introducing the function we can express the third partial derivative in Eq. (32) as The function X(t) will be calculated from Eq. (11) in the asymptotic limit. 
Considering only the terms which asymptotically behave as $\sim e^{-2i\omega t}$ and taking the second derivative with respect to $E_0/2$ leads to The last double integral becomes for asymptotic values of $t$ We now apply the Laplace transform to Eq. (46) using the definition and obtain the result: The averages can be expressed in terms of the generalized responses (22). The function $X(t)$ can be obtained by the inverse Laplace transform. Its behavior is determined by the poles $p_k = p_k' + ip_k''$ of $\tilde{X}(p)$. A numerical evaluation shows that all poles are such that $p_k' \leq 0$. At asymptotic times only those poles for which $p_k' = 0$ will be relevant. There is only one such pole, namely, $p_0 = -2i\omega$, leading to Inserting this expression into Eq. (45), evaluating the integral, and returning to Eq. (32), we obtain the final result for the complex third-order nonlinear susceptibility (with $\tau$ restored): Here $\chi_1^{[1]}(\omega\tau)$ is given by Eq. (22) with $y = \omega\tau$, and $\chi_1(\omega)$ by Eq. (36). This expression may now be averaged over the distribution of relaxation times $\tau$ as argued in the paragraph preceding Eq. (39). In Fig. 4 the real and imaginary parts of $\chi_3(\omega)$ are plotted as functions of temperature for several values of frequency. In analogy with the case of $\chi_1(\omega)$ above, a VF behavior (37) of $\tau$ and a linear distribution of VF temperatures $T_0$ have been used. The values of the parameters are again $J_0/J = 0.9$ and $\Delta/J^2 = 0.001$. The real part $\chi_3'(T,\omega)$ has a sharp peak near $T \simeq J$, whose origin can be traced back to the function $\chi_1^{[1]}(\omega)$ appearing in Eq. (51). As in the linear susceptibility case, a strong frequency dispersion is evident. In the limit of small frequencies, i.e., $\omega\tau_0 \ll 1$, the behavior of $\chi_3'(T)$ is the same as in the static case studied by replica theory [7]. At high frequencies and low temperatures, $\chi_3'(T,\omega)$ may become negative due to the last factor in the numerator of Eq. (51), whose imaginary part changes sign. 
This effect cannot be observed easily, since this would require a measurement of the nonlinear susceptibility in the range where the absolute value of $\chi_3'(T,\omega)$ is extremely small compared to its peak value. It is easily verified that in the limit $\omega \to 0$, Eq. (51) reduces to the static result (27). It has been shown in the static theory [7] that a crucial quantity, which can discriminate between the dipolar glass-like and ferroelectric behavior, is not $\chi_3(T)$ but rather the rescaled static nonlinear susceptibility $a_3 = \chi_3/\chi_1^4$. In spin glasses without random fields $a_3(T)$ diverges at $T_f$, and in a relaxor it develops a peak near $T_f$, whereas in a ferroelectric with long range order $a_3$ decreases with decreasing temperature on approaching the critical temperature $T_c$ [19]. It has been suggested that $a_3(T)$ could also be extracted from the dynamic linear and nonlinear susceptibilities by considering the following generalized function [16,24]: In Fig. 5, $a_3'(T,\omega)$ is plotted as a function of temperature for the same set of parameters as in Fig. 4. Each of the three factors in Eq. (52) has been averaged over the linear distribution of $T_0$. Obviously, $a_3'(T,\omega)$ develops a peak near $T_f \simeq J$ at all frequencies shown. On the high-$T$ side of the peak, $a_3'(T)$ is independent of frequency and agrees with the static $a_3$. Near the peak and on its low-$T$ side, however, strong frequency dispersion appears. Recent experiments in PMN and PLZT [5] indicate that the high-$T$ quasistatic part of $a_3(T)$ exhibits a crossover between the paraelectric-like decreasing behavior and a glass-like increasing behavior on approaching $T_f$. This type of behavior is characteristic of relaxor ferroelectrics. The crossover can be qualitatively described by the present dynamic SRBRF model, as shown in the inset of Fig. 5. 
The model also correctly predicts the onset of strong frequency dispersion in a′_3(T, ω) at low temperatures, which has been observed in both PMN and PLZT [5].

VI. CONCLUSIONS

We have presented a dynamic model of uniaxial relaxor ferroelectrics based on the recently developed static spherical random-bond-random-field (SRBRF) model [7]. Following the theory of spherical models of spin glasses [12,13], the order parameter field is assumed to obey the Langevin equation of motion, written in the representation of eigenstates of the random interaction matrix, with the spherical condition being enforced at all times. The equations of motion are exactly solvable in the asymptotic limit where the relaxor system reaches an equilibrium state. The linear and the third-order nonlinear dynamic response functions have been derived. In the static ω → 0 limit these results are precisely equivalent to the static linear and nonlinear susceptibilities χ_1 and χ_3, respectively, obtained earlier by the replica method [7]. In analogy with the static case, the dynamic theory does not predict a sharp transition into a dipolar glass-like state. Rather, in the case of weak random fields the third-order susceptibility shows a narrow peak at a temperature T_f, which mimics the freezing transition. Within the context of a dynamic model the freezing transition would correspond to the divergence of the longest relaxation time in the system. However, the dynamic SRBRF model contains no information on the behavior of the relaxation time τ appearing in the equations of motion, and does not lead to a divergence of the effective relaxation time on approaching T_f. In order to describe the observed freezing transition one should therefore introduce a divergent behavior of τ.
This can be done, for example, by assuming a Vogel-Fulcher (VF) law for the temperature behavior of τ, in accordance with empirical findings [2,6]; however, this will suppress the dynamic response at all temperatures lower than the VF temperature T_0. We have shown that by introducing a probability distribution of VF temperatures T_0 one can obtain linear and nonlinear response functions which remain finite at all temperatures, in qualitative agreement with experiments. The largest value of T_0 has been set equal to the static "freezing" temperature T_f, which is determined by the random-bond strength parameter J. The actual shape of χ̄_1(T, ω) and χ̄_3(T, ω), where the bar denotes an average over τ or, equivalently, over T_0, strongly depends on the probability distribution of relaxation times g(ln τ). Within the framework of the dynamic SRBRF model we have also calculated the scaled dynamic nonlinear susceptibility a′_3, which allows one to discriminate between the ferroelectric-like and glass-like behavior of relaxors [5,24]. In the quasistatic regime above T_f, a′_3(T, ω) is practically independent of ω and its temperature dependence shows a crossover between paraelectric-like and glass-like behavior on approaching T_f from above. This crossover behavior has recently been observed both in PMN and PLZT [5]. The calculated shape of χ̄_1(T, ω) and χ̄_3(T, ω), and hence of a′_3(T, ω), strongly depends on the probability distribution of relaxation times g(ln τ). In the present work we did not attempt to investigate in detail the effects of g(ln τ) on the behavior of these quantities; however, we have shown that if one assumes a linear distribution of VF temperatures T_0, the predicted behavior of χ̄_1(T, ω) and χ̄_3(T, ω) is in qualitative agreement with experiments in PMN [5,19,24] and PLZT [5,20]. On the other hand, a′_3(T, ω) obtained in this manner has a peak near T_f ≃ J and shows a strong frequency dispersion below T_f.
The predicted peak in a′_3(T, ω) has not been observed experimentally [24], suggesting that one should perhaps search for a more realistic distribution g(ln τ). This problem will be dealt with in a future publication.
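The averaging procedure invoked above, a Vogel-Fulcher relaxation time combined with a linear distribution of VF temperatures T_0 on [0, T_f], can be sketched numerically. This is a minimal illustration, not the model itself: the Debye form used for the linear response is only a stand-in for the model's χ_1 (Eq. (36)), and the parameter values τ_0, E, T_f are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the averaging described in the text: a Vogel-Fulcher (VF)
# relaxation time tau(T; T0) combined with a linear distribution of VF
# temperatures T0 on [0, T_F].  ASSUMPTIONS: the Debye form below is only a
# stand-in for the model's chi_1 (Eq. (36)); TAU0, E_VF, T_F are illustrative.

TAU0, E_VF, T_F = 1e-12, 1.0, 1.0

def tau_vf(T, T0):
    """VF law tau = tau0 * exp(E / (T - T0)); the gap is floored to stay finite."""
    return TAU0 * np.exp(E_VF / np.maximum(T - T0, 1e-6))

def chi1_bar(omega, T, n=2000):
    """Debye-type chi_1 averaged over a linear distribution g(T0) on [0, T_F]."""
    T0 = np.linspace(0.0, T_F, n)
    g = T0 / np.sum(T0)                              # normalized linear weight
    chi = 1.0 / (1.0 + 1j * omega * tau_vf(T, T0))   # Debye stand-in
    return np.sum(g * chi)
```

At ω → 0 the average reduces to the static value, while at finite ω the response is increasingly suppressed as T approaches T_f from above — the frequency dispersion discussed in the text — yet it remains finite at all temperatures, as emphasized above.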
The Potential Role of Cyclopeptides from Pseudostellaria heterophylla, Linum usitatissimum and Drymaria diandra, and Peptides Derived from Heterophyllin B as Dipeptidyl Peptidase IV Inhibitors for the Treatment of Type 2 Diabetes: An In Silico Study

Dipeptidyl peptidase 4 (DPP4) inhibitors can treat type 2 diabetes by slowing GLP-1 degradation to increase insulin secretion. Studies have reported that Pseudostellaria heterophylla, Linum usitatissimum (flaxseed), and Drymaria diandra, plants rich in Caryophyllaceae-type cyclopeptides and commonly used as herbal or dietary supplements, are effective in controlling blood sugar. The active site of DPP4 lies in a cavity large enough to accommodate their cyclopeptides. Molecular modeling with AutoDock Vina reveals that certain cyclopeptides in these plants have the potential for DPP4 inhibition. In particular, "Heterophyllin B" from P. heterophylla, "Cyclolinopeptide C" from flaxseed, and "Diandrine C" from D. diandra, with binding affinities of −10.4, −10.0, and −10.7 kcal/mol, are promising. Docking suggests that DPP4 inhibition may be one of the reasons why these three plants are beneficial for lowering blood sugar. Because many protein hydrolysates have shown a DPP4-inhibitory effect, a series of peptides derived from the Heterophyllin B precursor "IFGGLPPP" were included in the study. It was observed that IFWPPP (−10.5 kcal/mol), IFGGWPPP (−11.4 kcal/mol), and IFGWPPP (−12.0 kcal/mol) showed good binding affinity and interaction with DPP4. Various IFGGLPPP derivatives have the potential to serve as scaffolds for the design of novel DPP4 inhibitors.

About Diabetes

Diabetes is a metabolic disorder characterized by increased blood glucose levels. With urbanization and social and cultural changes in food behavior, the prevalence of diabetes is rapidly increasing. Diabetes and its extended vascular complications (diabetic nephropathy, retinopathy, etc.) place a heavy physical and mental burden on patients [1].
Diabetes is classically divided into three types: type 1, type 2, and gestational diabetes, with type 2 diabetes (T2D) accounting for the majority. Type 2 diabetes is caused by insulin resistance, which refers to impaired sensitivity to insulin-mediated glucose disposal, or by insufficient insulin secretion [2]. Type 2 diabetes can be prevented by maintaining a normal weight, exercising regularly, and eating a healthy diet [3]. However, some patients will still need medication. Metformin is the first-line hypoglycemic agent recommended for most patients when drug treatment is initiated. In addition to metformin, new hypoglycemic treatments are constantly being proposed due to the high demand for T2D drugs and the emergence of resistance.

Mechanisms of Incretins (GLP-1 and GIP) in Glucose Homeostasis and Diabetes Treatment

Incretins are hormones secreted by endocrine cells of the intestinal epithelium in response to food to maintain blood sugar balance. The two most studied incretins are glucagon-like peptide 1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP) [4]. Studies have found that GLP-1 and GIP can stimulate pancreatic β-cells to increase insulin synthesis and secretion to help stabilize blood sugar after meals [5]. Follow-up studies have indicated that GLP-1 has a greater impact on blood sugar than GIP. In addition to stimulating insulin secretion from β cells, GLP-1 can also inhibit the secretion of glucagon from α cells and increase the secretion of somatostatin by δ cells. Therefore, GLP-1 features more prominently than GIP in studies of diabetes based on the incretin system [6,7]. The bioactive forms of GLP-1 include GLP-1 (7−37) and GLP-1 (7−36) NH2. These active peptides maintain blood glucose homeostasis by activating the GLP-1 receptor (GLP-1R) on β cells, triggering a series of downstream reactions. The activation of GLP-1R by GLP-1 or GLP-1 analogs directly results in cAMP accumulation, followed by increased insulin secretion.
Longer-term effects also include promoting β-cell proliferation and reducing β-cell apoptosis, which is of great significance in delaying the depletion of pancreatic islet cells in diabetic patients [6,7]. The effects of GLP-1 also extend to extra-pancreatic events. The topic of greatest interest has been the effect of GLP-1 on appetite and weight loss. The release of GLP-1 and its related effects delay gastric emptying and bowel motility and, in addition, act on the hypothalamus to alter satiety, thereby suppressing appetite and assisting in weight control [7]. Endogenous GLP-1 in the circulation is immediately degraded by dipeptidyl peptidase 4 (DPP4) into the inactive metabolites GLP-1 (9−37) and GLP-1 (9−36) NH2 (t_1/2 ≈ 1−2 min). In T2D patients, reduced GLP-1 secretion or a decreased GLP-1 response, also known as incretin deficiency, has been observed, resulting in poor postprandial blood glucose regulation [6]. DPP4 inhibitors are designed to interfere with the enzymatic activity of DPP4, reducing its rate of cleavage of GLP-1 so as to increase the concentration of active GLP-1 in plasma. Another incretin-based therapy for T2D is to apply GLP-1 analogs to mimic the effects of endogenous GLP-1. The hypoglycemic effects of incretin-based therapies for T2D have been proven [7]. Clinically available oral DPP4 inhibitors include Sitagliptin, Vildagliptin, Saxagliptin, Alogliptin, and Linagliptin. GLP-1 analogs such as Exenatide, Liraglutide, and Semaglutide (now also available in oral form) typically require administration by injection and are therefore less convenient than the orally available DPP4 inhibitors [7][8][9]. One of the advantages of incretin-based therapies is that, since the effect of GLP-1 depends on blood glucose concentration, they rarely cause hypoglycemia.
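The cleavage step described above — DPP4 removing an N-terminal Xaa-Pro or Xaa-Ala dipeptide, turning active GLP-1 (7−36) NH2 into the inactive GLP-1 (9−36) NH2 — can be captured in a few lines. This is a minimal sketch: the one-letter GLP-1 (7−36) sequence is taken from standard references, and real DPP4 kinetics are far richer than this yes/no rule.

```python
# Minimal model of DPP4 substrate specificity as described in the text:
# DPP4 removes an N-terminal Xaa-Pro or Xaa-Ala dipeptide.
def dpp4_cleaves(seq: str) -> bool:
    return len(seq) > 2 and seq[1] in ("P", "A")

def dpp4_product(seq: str) -> str:
    """Return the truncated (inactivated) peptide if DPP4 can act on it."""
    return seq[2:] if dpp4_cleaves(seq) else seq

# GLP-1 (7-36) in one-letter code; it starts His-Ala, so position 2 is Ala
GLP1_7_36 = "HAEGTFTSDVSSYLEGQAAKEFIAWLVKGR"
GLP1_9_36 = dpp4_product(GLP1_7_36)   # the inactive metabolite GLP-1 (9-36)
```

The same rule explains why so many proline-containing dipeptides discussed later (IP, LP, FP, ...) resemble the enzyme's natural substrates.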
When the tolerated dose of metformin has been reached and the patient's blood sugar control is still unsatisfactory, DPP4 inhibitors can be used alone or in combination with metformin as second-line therapy [10]. According to clinical observations, DPP4 inhibitors have an apparent influence on the activity of DPP4, reducing the baseline by more than 50% [11]. When Sitagliptin (the first marketed DPP4 inhibitor) is used as monotherapy to treat adult T2D patients, the observed efficacy includes improvement in HbA1c, fasting plasma glucose (FPG), and 2-h postprandial glucose (PPG). In addition, most studies have reported its beneficial effects on the regulation of triglycerides (TG), HDL-c, and LDL-c [6,12].

The Structure of DPP4 and the Interaction of DPP4 Inhibitors with DPP4

Dipeptidyl peptidase 4 (DPP4), also known as CD26 (cluster of differentiation 26), is a type II transmembrane serine protease with 766 amino acids (110 kDa) anchored to the membrane, which selectively cleaves Xaa-proline or Xaa-alanine dipeptides from the N-terminus of GLP-1 [11]. Transmembrane DPP4 enhances its enzymatic activity through dimerization. Matrix metalloproteinases (MMPs) cleave DPP4 on the membrane. Cleaved DPP4 lacks the cytoplasmic domain (aa 1-6), transmembrane domain (aa 7-28), and flexible stalk (aa 29-39) and becomes a circulating or soluble form (sDPP4) [11,13-15]. sDPP4 is less studied relative to membrane-bound DPP4. The extracellular part of the monomer in dimeric DPP4, in addition to the flexible stalk, mainly comprises a large cavity constructed between the eight-bladed β-propeller domain (aa 54-497) and the α/β hydrolase domain (aa 39-51 and 506-766), with a set of entrances allowing GLP-1, GIP, etc. to enter and exit [11,13-15]. The catalytic triad (composed of Ser630, Asp708, and His740) and its adjacent amino acids Glu205 and Glu206 (which ensure the anchoring of the N-terminus of the substrate) have a great influence on the enzymatic activity of DPP4.
Moreover, Arg125 and Asn710 contribute to electrostatic adsorption; Tyr662 and Tyr666 form hydrophobic pockets; and Tyr547 is responsible for the oxyanion hole, constituting a series of important amino acids in the active site of DPP4 [11]. The area enclosed by the amino acid residues Glu205, Glu206, Tyr662, Ser630, Trp629, and Tyr547 is the site where Linagliptin (PDB: 2RGU) and most DPP4 inhibitors are located (Figure 1) [16]. Some inhibitors may also extend to the outer region formed by Arg125, Asn710, His740, Tyr752, Tyr48, and Lys554.
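Before a docking run of the kind reported in this study, AutoDock Vina needs a search box enclosing this active site. The helper below is a hypothetical illustration: the residue names come from the text, but the coordinates are toy values, not taken from PDB 2RGU, and the 4 Å padding is an arbitrary choice.

```python
# Hypothetical helper: given atom coordinates of the active-site residues
# named in the text (Glu205, Glu206, Tyr547, Trp629, Ser630, Tyr662),
# compute the center and padded edge lengths of a Vina-style search box.

def vina_box(residue_atoms, padding=4.0):
    """residue_atoms: mapping residue-name -> list of (x, y, z) atom coords."""
    pts = [p for coords in residue_atoms.values() for p in coords]
    lo = [min(p[i] for p in pts) for i in range(3)]
    hi = [max(p[i] for p in pts) for i in range(3)]
    center = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    size = tuple((h - l) + 2.0 * padding for l, h in zip(lo, hi))
    return center, size

# Toy coordinates, purely illustrative (real values would come from the PDB file)
site = {"Glu205": [(0.0, 0.0, 0.0), (2.0, 1.0, 0.0)],
        "Ser630": [(4.0, 3.0, 2.0)]}
center, size = vina_box(site)
# center == (2.0, 1.5, 1.0); size == (12.0, 11.0, 10.0)
```

The resulting center and size values map directly onto Vina's `center_x/y/z` and `size_x/y/z` configuration parameters.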
There is also a design strategy for DPP4 inhibitors in which the inhibitor approaches Lys554 in the S1′ pocket to form a salt bridge and establishes a hydrophobic interaction with Tyr547, achieving inhibition without binding to the amino acid residues in the catalytic center [17]. Additionally, it was observed that the (1-phenylpyrazol-5-yl)piperazine moiety of Teneligliptin (PDB: 3VJK) extends to Ser209, Phe357, and Arg358, closer to the β-propeller domain [18]. Vildagliptin (PDB: 6B1E) and Saxagliptin (PDB: 3BJM) are two cyanopyrrolidine-bearing compounds with smaller molecules than Linagliptin. The main structure of their crystals in DPP4 only occupies the more concentrated area between Glu205, Glu206, and Ser630, and forms a covalent bond with Ser630 (Figure 2) [19]. (Figure 2, panels c-f: clinical DPP4 drugs. Compared with Sitagliptin, the molecular size and proline-containing structure of the dipeptides IP and IPA are closer to Vildagliptin and Saxagliptin.)
However, Vildagliptin and Saxagliptin are cyanopyrrolidine-bearing compounds that can form a covalent bond with DPP4. In order to establish more hydrogen bonds between a peptide and DPP4, it may be necessary to increase the length of the sequence.

Natural Products with Relevant Reports on Lowering Blood Sugar and Their Mechanisms

Many herbs have been shown to have a blood sugar-regulating effect and have become a choice of dietary supplement for patients with T2D. Studies have found that DPP4, PTP1B, α-glucosidase, AMPK, PPARγ, etc. are targets through which natural products may exert hypoglycemic effects [20]. These hypoglycemic natural substances include a large number of phenols, lignans, terpenes, alkaloids, and protein hydrolysates, but cyclic peptides (cyclopeptides) are still rare among them. Due to the highly charged nature of the catalytic domain of PTP1B, the design of orally available PTP1B inhibitors remains a very challenging task [21]. Inhibition of α-glucosidase reduces the intestinal absorption of glucose and slows the postprandial rise in blood glucose, which is the possible hypoglycemic mechanism of many natural products [22]. However, if drug intake increases pancreatic β-cell density and improves fasting blood glucose, the pharmacological mechanism may go beyond the inhibition of α-glucosidase. DPP4 inhibition is another hypoglycemic target that may be involved. The triterpenoids quinovic acid-3β-O-β-d-glycopyranoside, lupeol, and the phytosterol stigmasterol, isolated from the natural anti-diabetic plants Fagonia cretica L. and Hedera nepalensis K. Koch, have been demonstrated to have inhibitory effects on DPP4 [23]. Flavonoids and phenols such as luteolin, apigenin, quercetin, isoquercetin, rosmarinic acid, naringin, and eriocitrin have also shown efficacy in inhibiting the activity of DPP4 [24]. Curcumin has been evaluated as an α-glucosidase and DPP4 inhibitor and is recommended for the management of diet-induced hyperglycemia [25].
In addition, many protein hydrolysates from natural resources show potential as DPP4 inhibitors [26,27], such as LKPTPEGDL and LKPTPEGDLEIL from pepsin-treated bovine whey proteins [28]; LPQNIPPL from gouda-type cheese [29]; PPPP, GP, PP, MP, VA, MA, KA, LA, FA, AP, FP, PA, LP, VP, LL, VV, HA, IPA, and IPI from the hydrolysis of amaranth proteins [30]; and LP and IP from defatted rice bran (Figure 2) [31]. The molecular weights of these peptides vary widely. Although the spacing from Phe357, Ser209, Glu205, and Ser630 to Lys554 in DPP4 is sufficient to accommodate large molecules, small molecules can also occupy the vicinity of the catalytic center and hinder enzyme activity. This may explain why LKPTPEGDLEIL and IP both show DPP4-inhibitory bioactivity in related studies.
Pseudostellaria Heterophylla, a Reported Natural Product with Hypoglycemic Effect

Pseudostellaria heterophylla (Heterophylly Falsestarwort Root, Taizishen, or P. heterophylla) is rich in cyclic peptides (cyclopeptides) and is reported to be a medicinal plant with hypoglycemic effect. P. heterophylla, belonging to the Caryophyllaceae family, is known as the "ginseng of the lungs" (similar to ginseng in being good for the lungs). According to the herbal pharmacopoeia record, it is suitable for improving dry cough, loss of appetite, fatigue, mental exhaustion, and physical weakness after illness. It is also used as a nutritional supplement for children with a weak physique. In modern times, P. heterophylla is one of the important materials used in clinical Chinese compound prescriptions for improving hyperglycemia. It is rich in polysaccharides, saponins, and cyclopeptides, among which Heterophyllin B (HB) is one of its quality indicators [32][33][34]. Studies have already shown that P. heterophylla's polysaccharides and saponins have a hypoglycemic effect [35,36]. However, a review of the hypoglycemic effect of its cyclopeptides is lacking. Heterophyllin A (HA) and HB, found in 1991, were the first cyclopeptides identified from P. heterophylla, and their discovery encouraged a large number of studies on cyclopeptides over the next 30 years [37]. In the classification of cyclopeptides by NH Tan et al., the cyclopeptides from P. heterophylla, including Heterophyllin A, B, C, J, and Pseudostellarin A~H, were placed in the category of "Caryophyllaceae-type cyclic peptides" (CTCs: homo-monocyclic peptides formed by peptide bonds, ranging from cyclic dipeptides to dodecapeptides) [38]. In 2013, PG Arnison et al.'s recommendations for a universal nomenclature suggested that plant N-C cyclic peptides lacking disulfide bonds and significantly biased toward hydrophobic amino acids could be classified as "Orbitides".
The cyclic peptides listed in CTCs and Orbitides are approximately the same, involving at least nine independent plant families. In addition to Caryophyllaceae, these include Annonaceae, Linaceae, and Rutaceae, among others. Some non-Caryophyllaceae-derived cyclic peptides, such as those derived from flaxseed, used to be called Orbitides. Studies have shown that Orbitides have many biological activities, including cytotoxicity, antiplatelet activity, antimalarial activity, immune regulation, and immune suppression [39]. Recently, Feng Lu et al. found that P. heterophylla's cyclopeptides can ameliorate COPD (chronic obstructive pulmonary disease) and reduce lung inflammation via the TLR4/MyD88 pathway; moreover, a 28-day animal test of 500 mg/kg purified extract (by oral administration) showed no toxicity [40]. In related studies on DPP4 as a therapeutic target for lung diseases, it was found that DPP4 may be involved in the pathophysiology of COPD [41]. Moreover, DPP4 inhibition by Sitagliptin was found to attenuate LPS-induced lung injury in mice [42]. Is it possible that the effect of P. heterophylla's cyclopeptides on COPD is related to the inhibition of DPP4? In addition to polysaccharides and saponins, does the indicator compound HB participate in the hypoglycemic mechanism? It can be observed that the diameter of HB (a cyclic octapeptide) is close to the length of Linagliptin (Figure 3). Could parts of the HB ring be used to match or come close to the region in DPP4 where Linagliptin acts? Since natural ligands of DPP4 such as GLP-1 and GIP are peptides, there has been much discussion of whether protein hydrolysates are involved in DPP4 inhibition. Is it possible that the cyclopeptides of P. heterophylla are involved in DPP4 inhibition similar to some protein hydrolysates?
Many natural proline-rich cyclopeptides from marine organisms have been found to be very similar in appearance to plant-derived Orbitides, but their structures may be interspersed with non-peptide elements [43]. There have been many physiological and pharmacological studies on marine cyclopeptides, covering anti-fertility, anti-cancer, and antiviral activities, among others.
Many sponge-derived cyclopeptides, such as the "Phakellistatin 1-19" series, are well-known cytostatic compounds for the development of anticancer agents. In structure-activity relationship studies of Phakellistatins, it was found that, because proline residues reduce the flexibility of the backbone, proline-rich cyclopeptides can enhance the selectivity and affinity for their receptors [43][44][45]. Health-oriented hydrolyzed dairy products also emphasize the role of their proline-rich peptides. The proline-rich polypeptide complex Colostrinin™, isolated from ovine colostrum, has immunoregulatory properties and shows beneficial effects on neurodegenerative diseases [46]. In addition, the aforementioned protein hydrolysates PP, PPPP, IP, and LPQNIPPL were reported to have a DPP4-inhibitory effect. The peptide sequence IFGGLPPP of Heterophyllin B is also enriched in proline. Given the large number of plant-derived cyclopeptides, proline richness may be an option to narrow down the search when screening for specific cyclopeptides of interest. Among existing peptide drugs, cyclopeptides account for the majority due to their higher lipophilicity, higher membrane permeability, in vivo stability, and higher specificity for target receptors [47,48]. Plant cyclopeptides are currently receiving attention in many areas, such as anti-tumor activity, immune regulation, sedation, and antibacterial and antiviral effects [38,39]. However, there are relatively few reports on the use of cyclopeptides for lowering blood sugar. Therapeutic drugs or dietary supplements for patients with type 2 diabetes need to consider the safety of long-term use, so it is necessary to avoid toxic species. Many marine cyclopeptides and disulfide-rich cyclotides are cytotoxic and are not suitable for development as dietary supplements.
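The "proline-rich" screening heuristic suggested above can be expressed as a trivial filter. This is a sketch only: the threshold of two prolines follows the text's observation that many CTC cyclopeptides contain two proline residues, and the second sequence is a made-up negative example for illustration.

```python
# Toy screen: keep candidate cyclopeptide sequences with >= 2 prolines,
# following the "proline-rich" heuristic suggested in the text.
CANDIDATES = {
    "Heterophyllin B": "IFGGLPPP",   # linear precursor sequence from the text
    "toy peptide A":   "GGLSV",      # hypothetical, for illustration only
}

def proline_rich(seq, min_pro=2):
    return seq.count("P") >= min_pro

hits = [name for name, seq in CANDIDATES.items() if proline_rich(seq)]
# hits == ["Heterophyllin B"]
```

In a real workflow the dictionary would be populated from a cyclopeptide database before docking the surviving candidates.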
Besides, the possible conformational changes of cyclic peptides stabilized by disulfide bonds are much more complex relative to Caryophyllaceae-type cyclopeptides. Is there a chance to find other natural substances with hypoglycemic reports in the group containing CTCs, in addition to P. heterophylla? In this context, Linum usitatissimum (flaxseed) and Drymaria diandra, which are rich in CTCs, have been examined and compared with P. heterophylla under the hypoglycemic theme. They have many pharmacological effects, including lowering blood sugar, and are documented for nutrition or daily health care. In addition, many of their cyclopeptides contain two proline residues [38].

Linum Usitatissimum, Which Is Rich in Cyclic Peptides and Has a Reported Hypoglycemic Effect

Linum usitatissimum (flax, flaxseed, or linseed) contains α-linolenic acid, lignans, cyclic peptides, etc., and is mainly consumed as flaxseed oil and dietary flaxseed meal. Many flaxseed studies use the term "Orbitides" to refer to its cyclopeptides. As of 2019, 39 flaxseed cyclopeptides had been isolated from flaxseed oil, where they occur at a high content (more than 100 mg/100 g) [49]. According to the open-label study by Mani, U.V. et al., supplementing 10 g of flaxseed powder (FS) daily for 1 month while keeping drug intake unchanged produced reductions in fasting blood glucose (FBG), glycated hemoglobin, total cholesterol, and triglyceride values in the experimental group [50]. The hypoglycemic effects of flaxseed lignans have been reported in the nutrition literature. A study by A Pan et al. showed that a flaxseed-derived lignan supplement improved HbA1c, but no significant difference was observed in fasting plasma glucose (FPG), insulin concentration, insulin resistance, or lipid profile [51]. There were differences between the hypoglycemic observations on flax lignans and on FS, although the experimental backgrounds were not exactly the same.
This led to consideration of whether ingredients other than lignans in flaxseed are involved in the blood sugar-lowering mechanism and may have a broader impact. A de novo peptide sequencing study using the CycloNovo analysis method revealed that many flaxseed cyclopeptides, such as Cyclolinopeptide A, B, D, E, and H, are present in the human gut [52]. The biological activity of these cyclopeptides further supports asking whether flaxseed Orbitides can reduce the enzyme activity of DPP4 and thereby participate in the hypoglycemic mechanism.

Drymaria diandra, Which Is Rich in Cyclic Peptides and Has a Reported Hypoglycemic Effect

Drymaria diandra (D. diandra, Drymaria cordata Willd, or D. cordata), also known as tropical chickweed, belongs to the family Caryophyllaceae. It grows quickly in humid and warm places in Africa, Asia, and the Americas and is used as a folk medicine for inflammation, bacterial infection, fever, pain, and acute hepatitis [53][54][55]. Similar to P. heterophylla, D. diandra has also been reported to have an antitussive effect, and people in some areas use it for colds or coughs [56]. The main pharmacological ingredients of D. diandra are cyclopeptides, flavonoids, and alkaloids. D. diandra leaves can be sun-dried and boiled into herbal tea. In addition, the fresh leaves are said to be lightly ground and applied to wounds, or diluted with honey water to treat fever [55,57,58]. Compared with the obvious symptoms of inflammation and fever, diabetes is a new concept for traditional medicine; even in modern times, a considerable proportion of people do not know that they have diabetes. Therefore, the term "hypoglycemic" is rarely found in local herbal pharmacopoeias. The ethnic groups of Sikkim in India use D. diandra for various diseases, including diabetes, and D. diandra has thereby become one of the few regional herbal medicines to enter the field of contemporary diabetes research. S Patra et al.
tried to treat diabetic rats with a D. diandra (D. cordata) methanol extract (DCME) and observed changes in various physiological indicators. DCME was still safe at an oral dose of 2000 mg/kg. Compared with the diabetes group, HbA1c, FBG, and the lipid profile in the DCME group were reduced, and β-cell density was improved in a dose-dependent manner. Their study speculated that the α-glucosidase inhibitory activity of DCME and the antioxidant properties of its flavonoids and alkaloids are responsible for the improvement of type 2 diabetes [59]. Since these experimental findings overlap with the effects of DPP4 inhibition, and the cyclopeptides "Diandrine A-D" are present in the methanol extract according to the early identification by PW Hsieh et al. [57], is it possible that cyclopeptides from D. diandra are also involved in an incretin-based hypoglycemic mechanism?

Can Linear Precursors of Heterophyllin B "IFGGLPPP" Participate in DPP4 Inhibition?

Numerous protein hydrolysates have shown DPP4-inhibitory effects. However, the process of identifying functional peptide sequences (hydrolysis, isolation, purification, bioassay, and mass spectrometry analysis) is difficult and time-consuming. Two linear precursors of Heterophyllin B (HB), "GGLPPPIF" and "IFGGLPPP", have been reported previously. The sequence IFGGLPPP was verified by precursor gene (prePhHB) screening and by in vitro and in vivo experiments, and was later suggested to be the more likely precursor peptide of HB [33]. Is it possible that the linear peptide IFGGLPPP has a favorable binding affinity for DPP4? Additionally, can IFGGLPPP be modified to obtain more samples for comparative studies, for example by inserting fragments such as PPPP, FP, WP, and PY into the existing linear peptide, or by changing the sequence of local fragments?
DPP4 cleaves dipeptides such as Xaa-Pro or Xaa-Ala (and also Xaa-Gly, Xaa-Ser, Xaa-Val, etc., but mainly Xaa-Pro) from the N-terminus of a polypeptide, where Xaa stands for any amino acid. The residue at the third position of a sequence cleaved by DPP4 can be any amino acid except proline [11]. Cleavage by DPP4 should therefore be avoided when designing or developing linear peptides as DPP4 inhibitors. Since the samples to be explored comprise a large number of cyclopeptides from P. heterophylla, flaxseed, and D. diandra, as well as a series of IFGGLPPP-derived peptides, molecular modeling provides a feasible method for preliminary screening of potential cases. In the future, molecules with good DPP4 binding affinity identified by docking can be isolated, purified, or synthesized for further in vitro or in vivo studies.

The Binding Affinity of Three Plant-Derived Cyclopeptides to DPP4 and Their Research Potential

The natural substances P. heterophylla, flaxseed, and D. diandra are rich in cyclic 5-9-residue peptides, and related studies have reported blood sugar-lowering effects. Regarding their possible hypoglycemic composition and mechanism, there is still a lack of research on the cyclopeptides. Since cyclopeptides are important components of these three plants, they may contribute to the hypoglycemic effect. Considering that the catalytic center of DPP4 sits in a relatively large cavity, it may have a better chance of accommodating cyclopeptides than PTP1B, α-glucosidase, AMPK, or PPARγ (whose active sites are smaller than that of DPP4). After docking Heterophyllin B (HB), cyclo-(GGLPPPIF), with DPP4 (PDB: 3G0B) [60], the results showed that it can sit close to the place occupied by Linagliptin with favorable binding energy. The results of docking HB with DPP4 support extending the exploration to more cyclopeptides from P. heterophylla, flaxseed, and D. diandra.
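The N-terminal cleavage rule stated at the start of this subsection (an Xaa-Pro or Xaa-Ala dipeptide is removed, but a Pro at the third position blocks cleavage) can be turned into a quick programmatic filter when proposing linear-peptide inhibitors. This is a minimal sketch; the function name and the grouping of minor residues are illustrative, following the text's description:

```python
PREFERRED_P2 = {"P", "A"}   # residues DPP4 cleaves after most readily
MINOR_P2 = {"G", "S", "V"}  # also reported, but cleaved less efficiently

def dpp4_cleavable(seq: str) -> bool:
    """True if DPP4 is expected to remove the N-terminal dipeptide of seq."""
    if len(seq) < 3:
        return False
    if seq[2] == "P":        # Pro at position 3 blocks cleavage
        return False
    return seq[1] in PREFERRED_P2 | MINOR_P2

print(dpp4_cleavable("HAEGTFTSDVS"))  # GLP-1 N-terminus (His-Ala-Glu...): True
print(dpp4_cleavable("IFGGLPPP"))     # Phe at position 2: False
```

A designed linear inhibitor should return False from such a check, as IFGGLPPP does here.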
The relevant cyclopeptide sequences, abbreviations, and binding affinities involved in this docking study are shown in Tables 1-3 (except for HA, whose structure was drawn separately, the initial structure files were downloaded from PubChem). The study by AM Bower et al. used AutoDock Vina to calculate the binding affinities of herbal components and provided in vitro IC50 data for DPP4 inhibition, which can serve as reference values here: Hispidulin (−9.4 kcal/mol), IC50 = 0.49 µM; Eriodictyol (−8.9 kcal/mol), IC50 = 10.9 µM; and Sitagliptin (−9.6 kcal/mol), IC50 = 0.06 µM [61]. If a cyclopeptide reaches a docking value close to that of Eriodictyol, it may have a chance of achieving a measurable effect on DPP4 activity at dietary-supplement doses. Most of the cyclopeptides from the three plants meet this standard. In Table 1, HB (−10.4 kcal/mol) had the best binding affinity, followed by PB (−9.6 kcal/mol), then PD, PH, PE, and HA, down to PA. The overall result shows their potential for DPP4 inhibition. The ingredients of P. heterophylla may vary slightly with strain, origin, and harvest season. HA and HB were the first two molecules of the P. heterophylla cyclopeptide series whose structures were determined [37]. HB is considered an indicator for checking the quality standards of P. heterophylla, while PB is commercially available. The docking results of HB and PB with DPP4 rank first and second in the P. heterophylla series, and they happen to be the most important components of P. heterophylla. This suggests that DPP4 may be a target of these cyclopeptides and may also explain why P. heterophylla is often used in Chinese medicine prescriptions for the treatment of hyperglycemia. The research of Feng Lu et al. has shown that the cyclopeptides of P. heterophylla are orally effective and safe [40]. Alcoholic extracts of Radix P.
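As a rough screen, the docking scores quoted in the text can be filtered against the Eriodictyol reference value (−8.9 kcal/mol, IC50 = 10.9 µM). The sketch below simply re-ranks the affinities listed above; using Eriodictyol as the cutoff is an assumption of this illustration:

```python
ERIODICTYOL_CUTOFF = -8.9  # kcal/mol; reference compound with IC50 = 10.9 uM

# AutoDock Vina binding affinities quoted in the text (kcal/mol)
scores = {
    "HB": -10.4, "PB": -9.6,                  # P. heterophylla
    "CLC": -10.0, "CLA": -9.8, "CLB": -9.8,   # flaxseed Orbitides
    "DdC": -10.7, "DmA": -10.2, "DmB": -8.9,  # D. diandra
}

# Keep candidates at least as favorable as the reference, best-ranked first
hits = sorted((name for name, s in scores.items() if s <= ERIODICTYOL_CUTOFF),
              key=scores.get)
print(hits[:4])  # ['DdC', 'HB', 'DmA', 'CLC']
```

With this cutoff all eight cyclopeptides pass, consistent with the text's observation that most candidates meet the Eriodictyol standard.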
heterophylla, with their cyclopeptides, may thus have the opportunity to become a nutritional supplement for diabetic patients after further research. CLA in flaxseed was confirmed as early as 1959. In 1997, when H Morita et al. purified flaxseed (80% methanol extract), CLB was newly discovered in addition to CLA (CLB accounts for about 0.0002%) [62]. CLC is the oxidized form of CLB. In Table 2, CLC, CLA, and CLB ranked in the top three, with binding affinities of −10.0, −9.8, and −9.8 kcal/mol, respectively. The binding affinities of most flaxseed Orbitides were better than −9.0 kcal/mol, which may explain part of the hypoglycemic effect of flaxseed supplementation and suggests DPP4 as one of the research targets for flaxseed Orbitides. Flaxseed is a grain, which makes it easier to promote as a nutritional product. Flaxseed oil oxidizes during storage, and several flaxseed Orbitides contain Met residues; oxidation of the sulfur atom on Met produces a series of derivatized Orbitides. In contrast to its effect on oil quality, Met oxidation seems to have no negative impact on the Orbitides' DPP4 inhibition (the added oxygen may even increase hydrogen bonding). When PW Hsieh et al. identified the components of D. diandra, the cyclopeptide with the highest content was DdC (0.0004% of the MeOH extract of the dry whole herb) [57]. In the compositional analysis of D. diandra by Z Ding et al., DmA and DmB accounted for 0.00014% and 0.011%, respectively [55]; the contents of the hexapeptides DmB and DdC were higher. The binding affinities of DdC, DmB, and DmA to DPP4 were −10.7, −8.9, and −10.2 kcal/mol, respectively (Table 3); DdC had the most prominent binding affinity when docked to DPP4. The Sikkim area in India uses D. cordata to treat diabetes, and S Patra et al. confirmed that the D. cordata MeOH extract can reduce blood sugar and improve blood lipid indices in diabetic rats [59]. D.
diandra's cyclopeptides are probably one of the influencing factors in its hypoglycemic effect. Inhibition of DPP4 may also help improve hyperlipidemia and control weight. This preliminary docking analysis of D. diandra's cyclopeptides against DPP4 may lead to more research; in addition to lowering blood sugar, D. diandra may later also be an option for dietary supplements for weight management. DdC contains "PYWP"; HB and DmA contain "PPP"; CLA, CLB, CLC, and PB contain "PPFF" or "PPF" residues. The docking results revealed that these proline-rich cyclopeptides appear to have a better affinity for DPP4. Although the binding affinity is determined mainly by hydrogen bonding and π-π interactions and cannot be inferred directly from the number of prolines contained, these cyclopeptides also happen to be the components that are easier to purify or more abundant in the original plants. In the process of biosynthesis, these proline-rich sequences may be easier to cyclize by endogenous enzymes or less easily degraded. The docking suggests that DdC, HB, DmA, CLA, CLB, CLC, and PB have the higher potential for further study.

Analysis of the Configuration and Conformation of Plant Cyclopeptides Docked with DPP4

The interactions between the series of cyclopeptides and DPP4 are shown in Figures 4-6. The main masses of HA, HB, DmA, and DdC appeared in the area surrounded by Arg125, Tyr547, and Ser630, similar to the place where Linagliptin is located. Although PB has hydrogen bonds with Tyr547 and Glu205, its position shifts to the entrance of the propeller. CLA and CLC are offset to the interval where Arg560 and Asn562 sit. The "IFGGL" of HB constrains the ring and assists the three prolines in establishing hydrogen bonds with Arg125 and Ser630, thereby blocking the entry to the catalytic triad. The Y and W of DdC are wrapped by flexible Gly and then by Pro (favorable for generating β-turns).
Due to the hydrogen bonds between PYWP and Arg125 and Tyr547, plus the π-π interaction with Tyr666, DdC obtained a good binding score. The hexapeptides DdA and DmB had sequences similar to DdC, but their interactions with DPP4 were not as good as DdC's. DdC was located closest to the catalytic center, while DmB drifted toward Tyr752. The relationship between CLC and the catalytic region was weaker than that of DdC and HB; however, its hydrogen bond with Arg560 stabilizes the main structure and guides its Phe (F) to interact with Tyr666, which also has an impact on the catalytic activity of DPP4. The locations of CLA and CLC may likewise have a negative impact on the entry of GLP-1. In contrast, the locations of PB and DmA may interfere with the β-propeller region of DPP4 (top view).
Previously, in 2019, VCSR Chittepu et al. reported a study of DPP4 inhibition by the natural cyclic peptide oxytocin (IC50: 110.7 nM). Their molecular docking showed that oxytocin interacts with Arg356, Phe355, Tyr663, Glu204, Glu203, and Tyr548 [63]. The oxytocin research supports the idea that cyclic peptides can act as DPP4 inhibitors. The oxytocin sequence is CYIQNCPLG-NH2, where CYIQNC forms a ring through a disulfide bond and PLG hangs like a long tail. Unlike oxytocin, the cyclopeptides from P. heterophylla, flaxseed, and D. diandra are plant-derived Orbitides without disulfide bonds; their longest side chains are only those of Met and Trp. Cyclic peptides without disulfide bonds can be calculated with default settings, like general small molecules, in AutoDock Vina. If a cyclic peptide contains disulfide bonds, the treatment of those bonds and the interaction between the peptide and DPP4 introduce more variables and uncertainties into the docking process. When a cyclic peptide does not have long branched side chains, its movements and rotations are less restricted.
Such Orbitides can rotate through various angles within the DPP4 cavity and finally settle, at lowest energy, in the most suitable location close to the catalytic center of DPP4. These three types of plant-derived cyclopeptides are composed mainly of the hydrophobic amino acids AVILMFYW, plus Pro and Gly; none of them contain the charged amino acids "RHKDE". Since arriving at the active site of DPP4 means passing through the opening between the α/β-hydrolase and β-propeller domains, uncharged molecules may avoid becoming stuck at the periphery through charge attraction. The multiple predicted energy levels of HB may indicate its possible movement within DPP4 (Figure 4). From the three predicted HB configurations, with binding affinities of −8.5, −10.1, and −10.4 kcal/mol, it was found that as the affinity became stronger, the position of HB gradually approached the catalytic center. HB may thus interfere with enzyme activity as it drifts into DPP4, eventually binding to the receptor in the conformation with the lowest binding energy. In a series of studies on marine cyclopeptides, it was observed that the rigidity of a proline-rich cyclic structure reduces the entropic term of the Gibbs free energy and enhances the binding force [43][44][45][64]. From the observation of HB in DPP4, proline-rich fragments can provide multiple sites for establishing hydrogen bonds within a small area. When such a proline-rich fragment is located near the catalytic center, it increases the binding affinity, as shown by the final stable conformation of HB. Under intense competition from GLP-1, a cyclopeptide may be driven away, but it will take time for it to leave the DPP4 cavity; this increases the interference time and hence the potential influence of cyclopeptides on the enzyme activity of DPP4.
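For intuition, a docking score can be translated into an order-of-magnitude dissociation constant through ΔG = RT ln Kd at roughly 300 K. This is only a back-of-envelope sketch: treating an empirical Vina score as a true binding free energy is an assumption, so the resulting Kd values are illustrative, not measured.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 300.0     # K, roughly the temperature used in the later MD runs

def approx_kd(delta_g: float) -> float:
    """Approximate dissociation constant (M) from delta_G = R*T*ln(Kd)."""
    return math.exp(delta_g / (R * T))

# The three predicted HB poses quoted above
for dg in (-8.5, -10.1, -10.4):
    print(f"{dg:5.1f} kcal/mol -> Kd ~ {approx_kd(dg):.0e} M")
```

The roughly 2 kcal/mol spread between the weakest and strongest HB poses corresponds to more than an order of magnitude in estimated Kd, which is why the lowest-energy pose dominates.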
Cyclopeptides play an important role in peptide drugs [47]. There are many studies of natural cyclopeptides as antibiotic, antiviral, and anticancer agents, while studies of antioxidant, hypoglycemic, and antihypertensive effects are relatively few, probably because some cyclopeptides come from toxic sources and are not suitable for general health care use. The three plants discussed in this article are known to be safe at recommended doses, are readily available, and have a high content of cyclopeptides. Although this preliminary discussion of the potential of cyclopeptides for DPP4 inhibition is limited to docking studies, it suggests that the large family of Caryophyllaceae-Type Cyclopeptides (CTCs) may find new research topics in health care or nutritional supplementation. CTCs are composed only of amino acids, without disulfide bonds, and some can be synthesized artificially. B Poojary et al. synthesized PB and discussed its antibacterial, antifungal, anti-inflammatory, and anthelmintic activities [65]. R Dahiya et al. designed coupling reactions of a tetrapeptide unit to synthesize DdC and reported its antimicrobial and anthelmintic activity [66]. There are currently no reports on HB synthesis. Similar to DdC, HB has two glycines and two prolines that have the chance to induce β-turns and loop formation in head-to-tail synthesis. In the tropics and remote areas, antimicrobial and anthelmintic properties may be the focus for D. cordata, and anti-inflammatory use is its common application in folk records. HB has also been noted to improve inflammation by inhibiting the PI3K/Akt pathway [67]. Anti-inflammatory action, an additional effect of phyto-cyclic peptides, may benefit T2D, as diabetic complications are often caused by long-term poor glycemic control and chronic inflammation [2]. The DPP4 inhibitor Linagliptin has been noted to reduce obesity-related inflammation and insulin resistance [68].
DdC, HB, and PB have anti-inflammatory, hypoglycemic, and synthesizable potential and are attractive compounds for further research.

Linear Peptide "IFGGLPPPP" as the Reference Coordinate for "IFGGLPPP" (HB Linear Precursor) Derivatives

Linear peptides may be quickly degraded by enzymatic hydrolysis, leading to loss of activity; however, the synthesis of linear peptides is much simpler than that of cyclopeptides. Their conformations in the liquid phase are more flexible and variable than those of cyclopeptides, which may be either an advantage or a disadvantage for an enzyme inhibitor. A series of linear peptides or protein hydrolysates that survive hydrolysis by gastrointestinal enzymes have been confirmed to inhibit DPP4, but the discovery and verification process requires a lot of work. After docking the open-loop sequences "IFGGLPPP" and "GGPYWP" from HB and DdC with DPP4, the binding affinities of both were found to be better than −9.0 kcal/mol. The configuration shows that the GGPY fragment of GGPYWP interacts with Arg125, Tyr456, and Tyr585. In contrast, the backbone of IFGGLPPP was more flexible, and its position on DPP4 was consistent with that of most mainstream DPP4 inhibitors (from Ser209 and Arg125 to Ser630). The predicted conformation of the linear peptide IFGGLPPP in DPP4 differed from that of the cyclic HB: HB has cyclic constraints that direct its PPP residues to interact with the key amino acids of DPP4. How will linear peptides interact with DPP4 without the constraint of the loop? By designing a series of IFGGLPPP derivatives (under the principle of avoiding DPP4 cleavage), it is possible to observe and compare their interactions with DPP4 and assess which have potential as DPP4 inhibitors. There is an example of the "PPPP" sequence in a proteolytic hypoglycemic peptide.
After docking IFGGLPPPP (a nonapeptide) to DPP4, it was found to extend from the S2 Ext and S2 through S1 to the S1′ area (from the propeller entry to the side entry). The length of nine residues is enough to occupy the interval where GLP-1 appears in DPP4, as shown in Figure 7. It can also be observed that the IFGGLPPPP sequence does not penetrate the vicinity of catalytic-center residues such as His740, Asn710, and Tyr662. Introducing F or W residues into the "GGLP" sequence may be a way to add additional π-π interactions, thereby making the linkage of the peptide to the catalytic center tighter.
Design and Analysis of "IFGGLPPP" Derivatives as Potential DPP4 Inhibitors

Among DPP4 inhibitors derived from protein hydrolysates, there are short peptides such as IP and IPA as well as examples with sequences longer than IFGGLPPPP. When designing IFGGLPPP derivatives, what strategies should be adopted so that the peptides block the entrance of the catalytic zone and interact with the important amino acids? The results of docking the IFGGLPPP derivatives with DPP4 are shown in Table 4.
After docking IP to DPP4, a hydrogen bond was found between its Ile (I) and Tyr662, but the binding affinity was only −6.6 kcal/mol. The docking affinities of IPA and IPI to DPP4 were −7.1 and −7.4 kcal/mol, respectively, better than the dipeptide. IFP, IFPP, and IFPPP are obtained by removing the middle part of IFGGLPPP; as the length of the sequence increased, the binding affinity also increased. If GL is replaced with W or F (greater length and possible π-π interactions), the docking results become more diverse. A series of linear peptides starting with IF and ending with PPP seem to share some common relationships with DPP4 (Figures 8 and 9). The sequences IFGWPPP, IFGGWPPP, IFFPPP, and IFWPPPP are created by replacing the "GGL" of IFGGLPPP with GW, GGW, F, and WP, respectively. The Ile (I) at the beginning of the sequence establishes a key hydrogen bond with Glu205, followed by the π attraction between Phe (F) and Tyr662 and Tyr666, which together form the basis for these peptides' association with DPP4. Additionally, other possible π-π interactions between F/W and Trp629 and Tyr547, or hydrogen bonds between PPP/PPPP and Ser630, Lys554, Asn562, Tyr752, etc., make these linear peptides (which lack secondary structure) seal the catalytic triad like a piece of tape, producing a stronger effect than IP. This tape-like binding is the major interaction format between most IFGGLPPP derivatives and DPP4. IFFPPP (−10.8 kcal/mol), IFWPPPP (−11.2 kcal/mol), IFGGWPPP (−11.4 kcal/mol), and IFGWPPP (−12.0 kcal/mol) showed good binding affinities. Some of their interactions with DPP4 extended beyond Lys554 (even reaching Arg560 and Gln527). Although this part has no direct effect on the catalytic region, it should help substantially in stabilizing the variable linear peptide structure on DPP4. The docking prediction showed that this functional "auxiliary PPP tail" has quite a different impact on DPP4 from the PPP in HB.
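The derivative series above can be generated programmatically: keep the "IF" head and a poly-Pro tail, swap the middle fragment, and discard anything DPP4 could cleave. A minimal sketch (the middle-fragment list mixes the replacements named in the text with the pattern behind them; the function names are illustrative):

```python
HEAD, TAIL = "IF", "PPP"
MIDDLES = ["GGL", "GW", "GGW", "F", "WP", "W", "WW"]  # "GGL" is the original

def dpp4_resistant(seq: str) -> bool:
    # DPP4 prefers Pro/Ala at position 2 and cannot cut when position 3 is Pro
    return not (len(seq) >= 3 and seq[1] in "PA" and seq[2] != "P")

derivatives = [HEAD + m + TAIL
               for m in MIDDLES if dpp4_resistant(HEAD + m + TAIL)]
print(derivatives)
# ['IFGGLPPP', 'IFGWPPP', 'IFGGWPPP', 'IFFPPP', 'IFWPPPP', 'IFWPPP', 'IFWWPPP']
```

Because every candidate keeps Phe at position 2, the whole series passes the cleavage filter, matching the design principle of avoiding DPP4 cleavage.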
Short peptides, from tripeptides to tetrapeptides, may be able to squeeze into the catalytic center region and thereby affect the activity of DPP4, but such short peptides may also be easily pushed away by competition from GLP-1. Peptides of 4-9 residues may partly interact with amino acids near the catalytic center and partly serve to stabilize the structure; longer peptides, however, may be at risk of degradation or low absorption. The docking scores of several linear peptides were better than those of the cyclopeptides. However, these are currently only reference values, and it is not clear whether cyclic or linear peptides are more favorable for DPP4 inhibition under physiological conditions.
The movement of cyclopeptides within the DPP4 cavity may interfere more with DPP4 activity than that of linear peptides. Furthermore, cyclopeptides generally have better bioavailability than linear peptides [18,19,69]. Notes to Table 4: (a) The S1 and S2 pockets include W629, S630, N710, H740, R125, E205, E206, Y662, Y666, and R669 (the S1 pocket generally refers to S630, N710, H740, W629, Y662, and Y666, and the S2 pocket to E205, E206, and R125; these amino acids are difficult to distinguish in some of the perspectives selected for 3D rendering, so the ranges of S1 and S2 are listed in the same column of the table). The S2 extensive sub-site (S2 Ext) and its surroundings include V207, S209, F357, R358, and E361; S1′ and its surroundings include D545, V546, Y547, Q553, K554, N562, Y585, etc.; the periphery includes Y752, and Y48 near S1′. (b) A label such as R125, E205, or S630 in the table indicates a hydrogen bond between the designed peptide molecule and that amino acid of DPP4. (c) An additional π tag, such as F357π, W629π, or Y662π, indicates a π-π interaction with that amino acid of DPP4. The interaction analysis comes from PoseView. Unlike the linear display of IFWPPPP, IFWPPP acts on DPP4 in a configuration similar to that of a cyclic peptide. In the IFGGLPPP-derived peptide series, peptides suffixed with PPPP may perform better than those with PPP, but the added P may also affect the configuration. The "IF" of IFWPPP goes deep between Asn710 and Ser630, its "W" has a π-π interaction with Tyr547, and the terminal "PPP" reaches up to Ser209, forming a C-shaped appearance. IFWPPP is the only example among all the cyclopeptides and linear peptides in this study that interacts with Asn710. Compared with IFGWPPP (−12.0 kcal/mol), which had the highest linear-peptide docking score, IFWPPP (−10.5 kcal/mol) had a weaker binding affinity; however, IFGWPPP partially acted on a region relatively far from the catalytic triad.
In contrast, all the effects of IFWPPP are focused on the key amino acids that classic DPP4 inhibitors target. More substantial research is needed to determine which one is better. Similar to IFWPPP, IFWWPPP has a C-shaped conformation in the active region of DPP4. The C-shaped opening of IFWPPP faces the catalytic center, while the C-shaped opening of IFWWPPP faces outward. For IFWWPPP, the two "W" in the sequence make it more difficult to synthesize than IFWPPP, which has one. From the research on the docking of IFGGLPPP derivatives to DPP4, it was found that fragments obtained by cutting the cyclopeptides may be another way to create functional protein hydrolysates. These linear peptides fit DPP4 in a curved manner, possibly because they are derived from cyclopeptides or because the active area of DPP4 has a curvature.
In addition, perhaps because the substrate of DPP4 is a peptide (GLP-1), this series of linear and cyclic peptides seems to have many interactions with DPP4 in docking. The docking of IFGGLPPP derivatives with DPP4 suggests that extracting cyclopeptide sequences may offer a new option for the design of functional linear peptides. Peptides recommended by molecular docking can be further evaluated in subsequent in vitro/in vivo experiments. Later, peptides with developmental potential can be modified at the N- or C-terminus, or with specific molecules, to make them more stable and effective in physiological environments [48]. Caryophyllaceae-type cyclopeptides provide hundreds of peptide sequences. If a cyclopeptide is freely cleaved, many peptide fragments can be obtained (e.g., IFGGLPPP, FGGLPPPI, LPPPIFGG, GGLPPPIF, IFGGLP, and LPPPIF from Heterophyllin B). IFGGLPPP and GGLPPPIF behave differently in the DPP4 space, and they may also behave differently in different receptors. Docking these cyclopeptide-derived peptides to various drug targets may yield unexpected results and lead to new possibilities for drug design.

Molecular Dynamics Simulation of Potential Cyclic and Linear Peptides

Further molecular dynamics simulations were performed on the lowest-energy configuration (RMSD = 0) of the potential compounds. The calculated temperature was about 300 K. Under dynamic observation over 1000 frames, the average RMSD of DdC, CLC, HB, and PB ranged from 1.6 Å to 2.5 Å, roughly a bond-length distance (Figure 10). The MD simulations revealed stable binding throughout the 1000 frames, with small variation in configuration. At frame 1000, the cyclopeptides still maintained most of the interactions established with DPP4 at RMSD = 0, supporting the AutoDock Vina predictions. Among them, the small molecule DdC had a relatively small offset and potential energy (Table 5).
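The RMSD values quoted here (1.6-2.5 Å) are the standard root-mean-square deviation over atomic positions. A minimal sketch of the formula with NumPy, using illustrative coordinates rather than the actual trajectories:

```python
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square deviation between two N x 3 coordinate sets
    (assumed already superimposed, e.g. a trajectory aligned to frame 0)."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Illustrative example: a frame in which every atom moved 2 Angstrom along x.
ref = np.zeros((5, 3))
frame = ref.copy()
frame[:, 0] += 2.0
print(rmsd(ref, frame))  # 2.0
```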
Compared with the cyclopeptides, linear IFGWPPP and IFWWPPP show relatively large configuration changes and position shifts, because linear peptides have a higher degree of structural freedom. However, because IFGWPPP and IFWWPPP have a certain structural length, and the PPP tail has multiple interactions with DPP4, they still maintain a tape-like barrier at the exit of the catalytic zone, even with a relatively large average RMSD. The molecular dynamics simulation results were basically consistent with the prediction of the lowest-energy configuration.
Materials and Methods

Molecular docking simulation places ligands in the binding site of receptors to find the configuration with the lowest binding energy. AutoDock Vina (Vina) is a program for molecular docking and virtual screening [70]. Vina uses an iterated local search global optimizer to provide high speed and accuracy in docking. The default setting of Vina is semi-flexible docking: during the docking process, the receptor is set to be rigid, while the ligand has a certain degree of freedom. When a large number of macromolecular ligands need to be evaluated, semi-flexible calculations provide a time-saving option. The study by Mishra A et al. on a cyclic octapeptide, cyclosaplin from sandalwood (cyclo-RLGDGCTR), used Vina to screen a series of tumor-related receptors and predicted strong binding affinity with EGFR, VEGFR2, PKB, p38, etc. [71]. Y Hou et al. used Vina to demonstrate the possible conformation of cyclo(PGFIPFTV), extracted from Tunicyclin L, acting on acetylcholinesterase (AChE) [72]. Z Wang et al. used Vina to analyze the interaction between a natural Rubiaceae-type cyclopeptide (RA) and the TAK1 protein to explain its possible involvement in the NF-κB pathway [73]. AutoDock Vina is thus suitable for evaluating peptides as inhibitors of a variety of enzymes, and may also be useful as an analysis tool for finding potential DPP4 inhibitors. The cyclopeptide structures (2D or 3D format) of Pseudostellaria heterophylla, Linum usitatissimum, Drymaria diandra, and related DPP4 inhibitors were downloaded from PubChem. The biologic descriptions of the plant cyclopeptides, including molecular weight, IUPAC Condensed, PLN, etc., also came from PubChem and were organized into tables.
MarvinSketch (ChemAxon), ACD/ChemSketch (ACD Labs), ChemDraw, and Avogadro were used as 2D and 3D molecular editing tools to produce structure formats that meet the requirements of the docking software, or to draw diagrams for explanations in the text. The initial structure of each linear peptide was built in Avogadro and went through an optimization process. The DPP4 crystal structure (PDB: 3G0B) used as the receptor in the docking was retrieved from the RCSB Protein Data Bank [60]. Before docking, the receptor and the molecule to be tested were run through the "dock prep" procedure in UCSF Chimera 1.13.1, including "delete solvent", "add hydrogens", and "add charges". Charges were computed using ANTECHAMBER [74,75]. The degrees of freedom of the ligands were set automatically during the preparation process. The prepared receptors and ligands were loaded into AutoDock Vina 1.1.2. The grid center was set to X = 42.049, Y = 34.288, Z = 14.618 (the centroid of the original ligand), and the grid size was set to 40 × 40 × 40; Vina computes the grid maps automatically. The molecular docking program was then run under the default settings. After the calculation, 10 sets of data within a maximum energy difference of 3 kcal/mol were obtained, and the lowest-energy configuration with RMSD = 0 was selected as the analysis result. There may be very few exceptions when considering the most reasonable hydrogen-bond interactions; for example, DmA's result is displayed at the second energy level. The 3D structure images were rendered with UCSF Chimera, showing the predicted configuration/conformation, the receptor (DPP4), and hydrogen-bond labels. The search criterion for hydrogen bonds was "Relax constraints by 0.4 angstroms and 20.0 degrees" in the "FindHbond" program, and H-bonds are displayed as red lines. PoseView was used to further analyze the interactions of the IFGGLPPP derivatives with DPP4.
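The grid settings above can be collected in the plain-text configuration file that command-line Vina reads; a minimal sketch with the parameters stated in this section (the receptor/ligand file names are placeholders, not from the study):

```python
# Sketch of an AutoDock Vina config file using the grid parameters stated
# above (center at the original ligand centroid, 40 x 40 x 40 box).
# "receptor.pdbqt" and "ligand.pdbqt" are placeholder file names.
config = """\
receptor = receptor.pdbqt
ligand = ligand.pdbqt
center_x = 42.049
center_y = 34.288
center_z = 14.618
size_x = 40
size_y = 40
size_z = 40
"""

with open("vina_conf.txt", "w") as f:
    f.write(config)
# Run with: vina --config vina_conf.txt --out docked.pdbqt
```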
It clearly depicts the relationship between the linear peptide and its surroundings, including hydrogen bonds and π-π interactions. However, cyclopeptide images produced by PoseView are not easy to read due to the overlap of front and rear perspectives. BIOVIA Discovery Studio (DS) Visualizer was used to obtain ligand-receptor 2D diagrams in a circular format (DS does not show hydrogen atoms that have no special effect). The hydrogen bonds shown by UCSF Chimera may differ slightly from those displayed by PoseView or DS, because the default conditions for analyzing the interactions may not be exactly the same across software. The Peptide Analyzing Tool (Thermo Fisher Scientific) was used to calculate the molecular weights of the linear peptides and confirm the feasibility of synthesis/purification (the theoretical pI of most linear peptides derived from IFGGLPPP is 6.0, with moderate hydrophobicity). Molecular dynamics were calculated through the functionality built into Chimera, using the interface designed by V. Munoz-Roles and J.-D. Marechal. The molecular dynamics simulation settings were: steepest descent steps = 100, step size = 0.02 Å; conjugate gradient steps = 10, step size = 0.02 Å; settings = Minimization; start frame = 1, step size = 4, ending frame = 1001; lower RMSD threshold = 1.3; upper RMSD threshold = 1.8.

Conclusions

A series of cyclopeptides from Pseudostellaria heterophylla (P. heterophylla), Linum usitatissimum (flaxseed), and Drymaria diandra have been reported to be beneficial in the treatment of diabetes. These cyclopeptides exhibited binding affinities to DPP4 ranging from −8.4 to −10.7 kcal/mol in docking, and 18 of the 25 cyclopeptides scored better than −9.0 kcal/mol. This suggests that DPP4 inhibition may be a workable pharmacological target of these cyclopeptides.
It also helps explain why these plants can serve as dietary supplements or herbal medicines for lowering blood glucose. Docking showed that DdC, HB, DmA, CLA, CLB, CLC, and PB, each with two or more prolines in the sequence, obtained better binding affinity to DPP4 and have higher potential for further research. These cyclopeptides also happen to be abundant in the original plants. DdC (−10.7 kcal/mol) and HB (−10.4 kcal/mol) were the top two scorers; both have anti-inflammatory properties and the potential to be mass-produced, and they are the most recommended molecules for individual study. In addition, various derivatives of the HB linear precursor IFGGLPPP, such as IFGGWPPP, IFGWPPP, and IFWPPP, show potential for DPP4 inhibition. Introducing W into the IFGGLPPP sequence increased the interaction of the derivatized peptide with the vicinity of the catalytic center, and the PPP tail could help stabilize the linear peptide in the active site. This brings new ideas for the design of functional linear peptides, especially for receptors with peptide ligands, curved structures, and large active sites. Further research is needed.
Remarks on the group-theoretical foundations of particle physics

I propose the group SL(4, R) as a generalisation of the Dirac group SL(2, C) used in quantum mechanics, as a possible basis on which to build a more general theory from which the standard model of particle physics might be derived as an approximation in an appropriate limit.

Introduction

The standard model of particle physics is based on four Lie groups, namely the gauge groups U(1), SU(2) and SU(3) of electromagnetism and the weak and strong nuclear forces, respectively, together with the Dirac group SL(2, C) that acts on the Dirac spinor and the Dirac equation. All four of these groups are supposed to commute with each other, but this is really only true in the high-energy limit. At practical energies many of the symmetries implied by these groups are significantly 'broken'. This phenomenon is usually explained as having occurred immediately after the Big Bang, as the energy density fell rapidly, in a process of 'spontaneous' symmetry-breaking. In the case of the strong force, symmetry is restored by hypothesising abstract properties of 'colour', rather than seeking to model the three generations of fermions. The standard model thus does not contain any symmetry group relating to the three generations. Symmetry-breaking cannot be introduced into a commuting product of groups by group-theoretical methods, and therefore geometrical methods are used in the standard model. Group theory could only provide such a mechanism if the groups do not commute with each other. It might then be possible to describe the effect of a change of energy scale on the parameters of the standard model, provided only that the Dirac group fails to commute with the relevant gauge groups. In this paper I aim to provide a 'proof of concept' that a generalisation from commuting to non-commuting groups has the potential to explain symmetry-breaking at a deeper conceptual level than is possible in the standard model.
This includes a possible explanation for the three generations, at least as far as electrons are concerned. The aim is to do this as far as possible by changing the mathematical axioms, with as little change to theorems or to physical applications as possible.

The ambient group

The total real dimension of the three gauge groups is 12. This extends to 18 if we include the Dirac group as well. An alternative is to work over complex numbers, so that the Dirac group is 3-dimensional and the total dimension is 15. The eight smallest dimensions of complex simple Lie groups are listed in Table 1. It is worth noting at this point that the group SU(5) of type A4 was used for the Georgi-Glashow Grand Unified Theory [1] already in the 1970s. In retrospect, this group appears to have been too big, in that it predicted new particles and new forces that have not been detected experimentally. The table therefore suggests that some group of type A3 may be suitable, although there are certainly other possibilities. The obvious group of type A3 to try is SL(4, C), although it is possible that some real form such as SU(4) or SL(4, R) might be more appropriate. Extensions to U(4) or GL(4, R) or GL(4, C) are also potential candidates. (Compare the Pati-Salam model [2].) For simplicity I shall work initially with SL(4, R). If any extensions to larger groups seem to be required, these can be incorporated later on. Note that SL(4, R) contains a subgroup of scalars of order 2, and the quotient group is PSL(4, R) ≅ SO(3, 3)°, that is, the connected component of the identity in the isometry group of a metric on a 6-dimensional real space. This 6-dimensional representation of SO(3, 3)° is the fundamental bosonic representation of SL(4, R), and can be constructed as the anti-symmetric square of the defining representation on a 4-dimensional space V. The latter, and its dual V′, are the fundamental fermionic representations.
Since this bosonic representation is self-dual, it can also be described as the anti-symmetric square of the representation on V′.

The Dirac gamma matrices

The Dirac gamma matrices are a particular choice of basis for the algebra of 4×4 complex matrices, specifically chosen to exhibit the structure of a complexified Clifford algebra Cℓ(1, 3). They can also be used to generate the Lie algebra gl(4, C), and specific choices among them generate the real Lie algebra sl(4, R). It is possible to produce a simple quaternionic notation for the Dirac matrices by identifying V with a copy of the quaternion algebra H. First we construct a Lie algebra su(2)R from right-multiplications by i, j, k. Then there is a corresponding algebra su(2)L generated by left-multiplications by −i, −j, −k. Let us write i, j, k for the right-multiplications, and i′, j′, k′ for the left-multiplications by the quaternion conjugates −i, −j, −k (abusing notation slightly). Then i, j, k commute with i′, j′, k′, so that the corresponding Lie brackets are 0. But in the ambient associative algebra we can also multiply these elements together in pairs. Thus we obtain a total of 15 linear maps that are easily seen to be linearly independent. Since they have trace 0, they span the Lie algebra sl(4, R). They are listed in Table 2 together with the corresponding Dirac gamma matrices. It is an easy exercise to check that the multiplication rules are the same in both notations. For reference, a choice of matrix representation is also given in Table 3. From this correspondence it is easy to see that the top two rows of the table generate a Lie subalgebra u(1) ⊕ sl(2, C). The bottom two rows form two copies of the representation of sl(2, C) on real 4-dimensional Minkowski space.
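The claim that the left- and right-multiplications, together with their nine pairwise products, give 15 traceless, linearly independent maps can be checked numerically. A sketch with NumPy (the explicit matrix conventions here are mine, not taken from the paper's tables):

```python
import numpy as np

# Quaternion product on coefficient vectors in the basis (1, i, j, k).
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

E = np.eye(4)  # basis 1, i, j, k of V = H

def L(q):  # left-multiplication x -> q x as a 4x4 real matrix
    return np.column_stack([qmul(q, E[:, n]) for n in range(4)])

def R(q):  # right-multiplication x -> x q as a 4x4 real matrix
    return np.column_stack([qmul(E[:, n], q) for n in range(4)])

i, j, k = E[:, 1], E[:, 2], E[:, 3]
right = [R(i), R(j), R(k)]        # generators of su(2)_R
left = [L(-i), L(-j), L(-k)]      # i', j', k': generators of su(2)_L
products = [l @ r for l in left for r in right]  # the nine pairwise products

gens = right + left + products    # 15 maps in total
print(all(abs(np.trace(g)) < 1e-12 for g in gens))   # all traceless
M = np.stack([g.ravel() for g in gens])
print(np.linalg.matrix_rank(M))   # 15: they span sl(4, R)
```

Note that the left- and right-multiplications commute as linear maps, matching the vanishing Lie brackets stated above.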
Note however the 'twisting' in the timelike coordinates, so that the two 4-spaces are twisted copies of one another. Note also that the subalgebra u(1) is generated not by a scalar i (since the representation is real), but by iγ5. This fact has important consequences for the implementation of the Dirac spinor in the proposed new notation.

Automorphisms and subgroups

I have exhibited a copy of the Dirac group SL(2, C) inside SL(4, R), commuting with the subgroup U(1) of all elements of the form exp(i′x). In particular, SL(2, C) commutes with i′, and therefore with the inner automorphism of SL(4, R) defined by conjugation by i′. It turns out that the other groups we need, namely the gauge groups of the three forces, can be obtained in a similar way from outer automorphisms. The outer automorphism group of SL(4, R) is Z2 × Z2, generated by two particular automorphisms that I shall call the chirality and duality automorphisms. The chirality automorphism is defined by quaternion conjugation q → q̄ on V. This map has determinant −1, and swaps right-multiplication by q with left-multiplication by q̄. It therefore transposes the multiplication table of the Clifford algebra, and centralizes the group GL(3, R) and the associated Lie algebra gl(3, R). The 3 × 3 scalar matrices here are represented by multiples of i′i + j′j + k′k, and the compact subalgebra is so(3). The duality automorphism is defined by the transpose-inverse map on the group, or equivalently the transpose-negative map on the Lie algebra; its fixed subalgebra is so(4), which also contains the subalgebra so(3) just mentioned. Remark 1. The product of the given chirality and duality automorphisms is a chiral duality automorphism whose fixed subalgebra is again a copy of sl(2, C); note, however, that the two copies of this Lie algebra exhibited here are not equal. Indeed, they are disjoint subalgebras of sl(4, R). Moreover, the space V supports a spinor representation of sl(2, C), but a vector representation of so(3, 1).
This fact raises some interesting questions about the interpretation of these two subalgebras, and more particularly about the physical interpretation of any isomorphism between them. This discussion is beyond the scope of this paper, however, and will be presented elsewhere. To summarise, the chirality and duality automorphisms have fixed subalgebras gl(3, R) and so(4), which intersect in so(3). If we restrict from so(4) to su(2)L, then we lose this intersection, and obtain a direct sum of vector spaces gl(1, R) ⊕ sl(3, R) ⊕ su(2)L. This is not a direct sum of algebras, however, since su(2)L does not commute with either gl(1, R) or sl(3, R). Correspondingly, the group SU(2)L does not commute with GL(1, R) or SL(3, R). If we were to work in the complex group SL(4, C) we could convert the split real forms GL(1, R) and SL(3, R) into the compact real forms U(1) and SU(3). By doing so we would obtain disjoint groups isomorphic to the gauge groups of the standard model. It is, of course, not obvious that the groups so obtained bear any relationship to the actual gauge groups. In order to try to throw more light on this question, I shall continue to work with the split real forms, with the understanding that we can convert between different real forms later on when/if necessary. The important point to note at this stage is that the groups inside SL(4, R), unlike the gauge groups of the standard model, do not commute with each other. They therefore contain within themselves the seeds of symmetry-breaking, which might therefore arise naturally from the group theory without having to be imposed from outside.

Electroweak mixing

The first test of this proposal is to see whether the mixing of electromagnetism with the weak force, as described by the Glashow-Weinberg-Salam model [3], can be sensibly re-constructed within SL(4, R).
The four subalgebras mentioned above may all be involved in this process. The basic question to be resolved is how to mix the real scalar i′i + j′j + k′k with the imaginary scalar i′. In fact the algebras gl(1, R) and su(2)L above generate the full Lie algebra sl(4, R), so it is worth first looking at the subalgebra generated by the two elements i′ and i′i + j′j + k′k. A straightforward calculation shows that this algebra is 5-dimensional, and breaks up as a direct sum of three subalgebras. It is possible to combine the algebra gl(1, R) with either of the other two, and express the 5-dimensional algebra in one of two forms. This subalgebra suggests that the mixing may come in two parts, one of which is a mixing of the chiral u(1) generated by i′ − i with the non-chiral u(1) generated by i′ + i. The other is a mixing of the two real scalars i′i and j′j + k′k. In particular, the algebra su(2)L generated by i′, j′, k′ is modified by
• mixing i′ with the element i from su(2)R, and
• mixing j′ and k′ as j′ + k′i and j′i + k′, with a further factor of j on the right and left respectively.
This mathematical formalism is almost identical to that used in the standard model to describe electroweak mixing. But instead of being imposed from the outside, it arises naturally from the embeddings of the various algebras in sl(4, R). More specifically, the action of sl(2, R)W on H acts only on the 'left-handed' part of the spinor, in the 1, i coordinates, not on the 'right-handed' part in the j, k coordinates. There is a corresponding 'right-handed' copy of sl(2, R), given by changing the signs, that lies inside sl(3, R) and is therefore (conjecturally) related to the strong force. The two copies of sl(2, R) commute with each other, and with i′i; the corresponding group is Spin(2, 2). Remark 2. This group Spin(2, 2) is not to be confused with the subgroup SO(2, 2), although their Lie algebras are isomorphic.
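The 'straightforward calculation' that i′ and i′i + j′j + k′k generate a 5-dimensional subalgebra can be verified by closing the span under the Lie bracket. A sketch with NumPy (the quaternion-to-matrix conventions are mine):

```python
import numpy as np

# Quaternion product on coefficient vectors in the basis (1, i, j, k).
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

E = np.eye(4)
L = lambda q: np.column_stack([qmul(q, E[:, n]) for n in range(4)])  # x -> q x
R = lambda q: np.column_stack([qmul(E[:, n], q) for n in range(4)])  # x -> x q
i, j, k = E[:, 1], E[:, 2], E[:, 3]

ip = L(-i)                                        # i'
S = L(-i) @ R(i) + L(-j) @ R(j) + L(-k) @ R(k)    # i'i + j'j + k'k

# Close {i', i'i + j'j + k'k} under the Lie bracket, tracking the span.
basis = [ip, S]
changed = True
while changed:
    changed = False
    for a in list(basis):
        for b in list(basis):
            c = a @ b - b @ a
            cand = np.stack([m.ravel() for m in basis + [c]])
            if np.linalg.matrix_rank(cand, tol=1e-9) > len(basis):
                basis.append(c)
                changed = True
print(len(basis))  # dimension of the generated subalgebra: 5
```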
For example, we may take a metric in which 1 and i have norm 1 and j and k have norm −1. In this case the corresponding groups SL(2, R) both contain the scalar matrix −1, so their product is not a direct product but a central product SL(2, R) ◦ SL(2, R). This group has no obvious application in particle physics, as far as I know. Remark 3. The five groups SO(4), SO(3, 1), SO(2, 2), Spin(3, 1) and Spin(2, 2) are, up to conjugacy, the only subgroups of SL(4, R) that are real forms of the Lie group of type A1A1. Of these groups, only Spin(2, 2) splits V into two halves that could be identified with the left- and right-handed spinors in the standard model. Remark 4. If we identify the complex structure defined by i in the Clifford algebra notation with that defined by i in the quaternionic notation, then the effect on the Clifford algebra is to identify i with γ2γ3. Of course, γ2γ3 does not commute with all of the Clifford algebra, so one has to take care that the multiplication is always done on the same side. We obtain an identification of γ5 with γ0γ1, which has eigenvalues 1, 1, i, −i, so that 1 − γ5 becomes a projection onto half the space, as in the standard model. The subalgebra sl(2, R)W likewise behaves as an algebra that acts only on half of the spinor. In the proposed notation this half is a real 2-space, but since the required eigenvalues are complex, one needs to extend it to a complex 2-space in order to match the standard model.

The strong force

The above remark involves a choice of a particular direction in the Clifford algebra (defined by i′i) in which to define or measure spin. This choice splits the space V into two 2-dimensional eigenspaces. By symmetry it is necessary to replicate the construction for j′j and k′k. This gives three copies of a 5-dimensional algebra, which would be enough to cover the whole of sl(4, R), were it not for the fact that the terms i′i, j′j, k′k are double-counted.
We are therefore missing three dimensions, spanned by three further elements chosen to be perpendicular to all three copies of gl(1, C) ⊕ sl(2, R) with respect to the Killing form. A straightforward calculation shows that the Lie algebra generated by these three extra elements is sl(3, R). These elements appear in the three copies of gl(1, R) ⊕ u(1) ≅ gl(1, C) already discussed in connection with the electroweak forces. If all these conjectural interpretations are valid, then these three complex numbers can be used to describe some mixing of the strong force with the electroweak forces. More specifically, there is a map from one copy of C³ with a basis suited to the strong force, to another copy with a basis suited to the weak force. Such a map can be written as a 3 × 3 complex matrix. It is therefore possible to insert the Cabibbo-Kobayashi-Maskawa (CKM) matrix [4,5] in this place in the model. In this way the model is able (at least in principle) to describe the mixing of the strong force with the electroweak forces. Since the CKM matrix describes the mixing of quark generations relative to lepton generations, the generations are described in the proposed model by three copies of the complex numbers, which in the lepton case may be taken to be as in (22).

The Georgi-Glashow model

Another subgroup of SL(4, R) that may be of interest is that obtained from SL(2, C) by adjoining one of the two vector representations. This subgroup is isomorphic to Sp(4, R). The corresponding Lie algebra is generated by the Dirac matrices iγµ for µ = 0, 1, 2, 3, or in quaternionic notation, by k′, j′i, j′j, j′k. Ten of the fifteen dimensions form the algebra so(2, 3) ≅ sp(4, R), and the remaining five form its vector representation. The structure revealed by this group is reminiscent of that used in the Georgi-Glashow model based on SU(5).
One difference is that the new model is real rather than complex, so that the 24-dimensional group SU(5) is replaced by the 10-dimensional group SO(2, 3). This reduction in dimension could perhaps allow for a version of the Georgi-Glashow model that does not predict new forces or new particles. Another difference is that the 2 + 3 splitting appears naturally rather than having to be imposed. The 15 fundamental fermions of a single generation could then be allocated to the Lie algebra by using i, j, k for the three colours, and i′, j′, k′ for the leptons. The other two generations could be labelled in a similar manner, using copies of Sp(4, R) defined by singling out i′ or j′ rather than k′ from su(2)L. Using the Lie algebra rather than the Clifford algebra to describe these particles has the possible advantage that right-handed neutrinos, which have not been observed experimentally, do not appear. On the other hand, the fact that the Georgi-Glashow model does not address the question of the three generations implies that there may not be a close connection with the proposed use of SL(4, R). Further exploration of the fermionic representations of SL(4, R) will be undertaken in a forthcoming paper.

A heuristic mass equation

The proposed model allows (or requires) basic properties of fermions to be modelled in small fermionic representations of SL(4, R). The first generation of leptons is related to the operations i′i and i′ + i, which describe commutation by i in the group-theoretical sense and the Lie algebra sense respectively. The second and third generations of leptons are described by the corresponding maps for j and k. Abstracting these properties to a 4-dimensional representation on H allows us to represent the three generations by the vectors i, j and k respectively. The real part 1 then appears to describe the charge of the leptons.
On the other hand, i, j, k as right-multiplications appear in the model as the colours of quarks, from which one can coordinatise protons and neutrons using all three coordinates i, j, k. If one adds together the six leptons and three protons, taking into account that the proton contains three colours of quarks, one sees a total of 15 dimensions of fundamental properties, encoded in the vector (0, 5, 5, 5). These 15 dimensions can be re-distributed by changing basis on the adjoint representation, and can apparently be allocated instead to five neutrons. Such a redistribution preserves the overall charge, which I have (conjecturally) put in the real part of H, and therefore can be achieved by an element of sl(3, R). This algebra corresponds to an action of the strong force, which in the standard model does not change particle masses. Therefore the model suggests that the total mass of the six leptons and three protons should be equal to the total mass of five neutrons. Since the neutrinos have negligible mass compared to all the other particles in this equation, the effective prediction is that m_e + m_µ + m_τ + 3m_p = 5m_n. Of these masses, the τ mass is the least accurately known, by about three orders of magnitude. Hence we can re-cast this equation as a prediction of the τ mass.

Another mass equation

One can apply a similar analysis to the fundamental bosonic representation of dimension 6. In this case the six coordinates appear in two sets of three, each set associated in some way to the triple i, j, k. Again it looks as though there is a distinction between charge and spin, though in this case we appear to require three charges (say +, − and 0) and three spins (no longer directly associated to specific generations of leptons). The three fundamental massive bosons are the intermediate vector bosons, that is, the bosons Z0, W+ and W− that mediate the weak force.
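The tau-mass prediction above can be checked numerically; a sketch using PDG central values in MeV (the numerical inputs are mine, not taken from the text):

```python
# PDG central values in MeV (inputs assumed here, not stated in the text).
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86
m_p, m_n = 938.272, 939.565

# Six leptons (neutrino masses negligible) plus three protons vs five neutrons.
lhs = m_e + m_mu + m_tau + 3 * m_p
rhs = 5 * m_n
print(lhs, rhs)  # both ~4697.8 MeV

# Recast as a prediction of the tau mass.
tau_pred = 5 * m_n - 3 * m_p - m_e - m_mu
print(tau_pred)  # ~1776.84 MeV
```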
In this case we have that Λ^2(V) is self-dual, and therefore

Now Λ^2(Λ^2(V)) is again the adjoint representation, while S^2(Λ^2(V)) breaks up as a scalar plus a 20-dimensional irreducible representation that is used in the different context of general relativity for the Riemann Curvature Tensor. The remainder consists of two copies of the fermionic particle (0, 0, 0; 1, 1, 1), which was earlier identified as a neutron. Hence the model suggests the following equation:

The prediction is accurate to 1σ, and predicts one more significant figure. Incidentally, without the small contribution of the neutron, required by the model, the calculation of the Higgs mass would differ from experiment by some 5σ.

Conclusion

In this paper I have shown how all the groups that appear in the foundation of the standard model of particle physics arise naturally from the group SL(4, R) and the associated Lie algebra sl(4, R). In particular, the coupling of mass to charge in a model based on this algebra seems to require the existence of three generations of leptons, and the existence of a chiral force that acts on the three generations and has a gauge group that is either SU(2) or SL(2, R). Furthermore, this chiral force does not exhaust the Lie algebra, which contains three more dimensions that generate a Lie algebra sl(3, R), representing a non-chiral force that can plausibly be identified with the strong force. Some mixing of the forces arises naturally from the group theory, and is an inevitable consequence of the fact that the groups that take the place of the gauge groups do not commute with each other. This mixing therefore occurs without the need to hypothesise any 'spontaneous' symmetry-breaking in the Big Bang. While I have not explained the values of the mixing parameters in this paper, I have explained the number of them, and their mathematical structure.
In other words, many of the puzzling but essential ingredients of the standard model arise as an inevitable consequence of the algebraic structure of 4-dimensional space, and in particular of a group SL(4, R) of local symmetries of space, interpreted in the standard model as internal symmetries of elementary particles themselves. Of course, I have not constructed a complete model here. More work will be required to verify that the details of the standard model can be consistently included, and not just the broad structures. For example, it is not even clear that the proposals put forward in this paper are consistent with the basic axioms of quantum field theory. It remains to be seen whether this problem can be overcome. As an illustration of the power of the proposed new foundation for the standard model, I have provided a heuristic justification for two conjectured mass equations which are predictive, and which can be tested by measuring the masses of the tau lepton and the Higgs boson to greater accuracy than is currently known. One of these gives a possible hint as to how the differing masses of the three generations might arise.
Measuring the Financial Literacy of Farmers of Food Crops in the Poor Area of Madura, Indonesia
DOI: http://dx.doi.org/10.24018/ejfood.2020.2.6.138 Vol 2 | Issue 6 | December 2020. Published on December 14, 2020.

Abstract — Efforts to encourage increased food productivity are constrained by farmers' limited access to capital financing resources. The purpose of this research is to describe the financial literacy rate of farmers in Madura. The data analysis method uses descriptive (qualitative) analysis and a difference test (t-test) with IBM SPSS 23 software support. The results show that food farmers in Sampang and Bangkalan regencies have relatively moderate financial knowledge, with all its limitations. Their understanding is sufficient regarding the principles of bank interest calculation, the time value of money, the general rules of the bank, the definition of inflation, and the risk and profit received. However, their financial behavior and attitudes are categorized as low. The principal difference between Madura food farmers in the two research areas, namely Sampang Regency and Bangkalan Regency, lies in the Financial Knowledge Index component.

I. INTRODUCTION

Over the last decade, financial literacy has been the focus of government policy, the banking industry, communities as consumers, interested community groups, and other organizations. [3] provide the view that financial literacy has become increasingly complex over the last few years with a number of new financial products, while on the other hand a minimum level of financial literacy has become a must for the community to use financial products and services effectively. Financial literacy has been clearly defined, among others by [9], [7], who refer to financial literacy as the ability to understand financial conditions and financial concepts and to properly change knowledge into behavior.
[17] defines financial literacy as the ability to use knowledge and expertise to manage financial resources to achieve welfare. Financial literacy and access to financial institutions are inseparable. Extant findings show that many people around the world are still financially illiterate [13], [14]. The results of the National Survey of Indonesian Financial Literacy conducted by the Financial Services Authority in 2016 show that only 29.66% of Indonesian people have good financial literacy. Based on the report released by Bank Indonesia in July 2014, only 32% of Indonesian residents have good access to informal financial institutions, a number quite low compared to the total population of Indonesia. The Central Bureau of Statistics of the Republic of Indonesia shows the agricultural sector to be the largest contributor to Indonesia's gross domestic product after the processing industry, yet farmers' households, who are the main actors, are grouped among the low-income population. The assumption that has evolved so far is that many farmers do not get access to banking, whereas [23] reveals that poor and low-income people also need access to financial services to live their lives and manage their businesses. People who are financially literate find it easier to understand all things related to the financial services industry and to choose the financial products and services they need; financial literacy becomes a life skill for each individual over the long term [16]. In fact, the financial future really lies in the hands of individuals, meaning the ability to make healthy financial choices based on basic knowledge of financial concepts [12], [14]. Likewise, the food farming sector is the backbone of the Madura people's life.
Based on the results of the Inter-Census Agricultural Survey (SUTAS) 2018 for East Java province, the number of food-crop sub-sector households is 142,402 in Bangkalan Regency, 147,838 in Sampang Regency, 138,547 in Pamekasan Regency and 241,599 in Sumenep Regency [4]. The majority of food farmers in Madura are in poor rural areas. The limitation of farming capital pushes food crop farmers into the poverty trap, caused by low access to finance. [18], [6] argue that access to finance is crucial because it can be an opportunity for poor farmers to change their production systems and get out of poverty. The novelty of this research lies in exploring the importance of financial institutions for food crop farmers from the perspective of financial literacy knowledge, which is still under-studied. Generally, people working in the informal sector have a low knowledge background, so their understanding of finance-related information is minimal. In line with the research of [20], [15], [5] provide recommendations for expanding research that links financial knowledge and literacy. The purpose of the research is to describe the financial literacy rate of farmers in Madura, so that the research can be a benchmark for understanding the financial behaviour of food crop farmers in comprehending financial literacy comprehensively.

A. Sampling

The research area was determined as Bangkalan and Sampang Regency, Madura, regions ranked among the five most disadvantaged in Indonesia. Based on data from [4], the two areas also have the most agricultural households compared to other areas on Madura Island. Research samples were determined by the purposive sampling method, considering several aspects, among others: a) farmers have their primary income from agriculture, b) they have productive agricultural land farmed for a minimum of one year, and c) they participate and are active in a farmer group.
The sample comprised 60 respondents, 30 farmers from each area. [21] declare that a sample size of 30 to 500 is generally judged to be quite representative of the population.

B. Data Analysis

The data analysis methods use descriptive (qualitative) analysis to describe the level of financial literacy of food farmers in Madura. In addition, a difference test (t-test) is used to determine whether there is a significant difference in the components of financial literacy between food farmers in Sampang Regency and Bangkalan Regency, Madura. The research data are processed using the IBM SPSS 23 program. Measurement of the Financial Literacy Index used the theoretical approach of [2], with a combination of three main indicator components: 1) the Financial Knowledge Index, measured by parameters related to knowledge of the principles of bank interest calculation, the time value of money, the definition of inflation, the general rules of the bank, diversification, and risk and profit; 2) the Financial Behavior Index, measured by parameters related to deciding on the purchase of goods, paying bills on time, care in personal financial affairs, long-term financial and business objectives and how to achieve them, ownership of a household budget, saving or investment activity in the last year, and the choice of financial products; 3) the Financial Attitude Index, measured by parameters related to tendencies in spending or saving money in long-term and short-term financial planning.

A. Profile of Food Crop Farmers

The condition of food crop farming in Madura today still needs serious attention, marked by a high level of poverty, so farmers' welfare needs to be improved by means of agricultural optimization, revitalizing and diversifying agriculture, and facilitating capital for farmers.
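The difference test described in the Data Analysis subsection can be sketched in a few lines. The scores below are synthetic placeholders (n = 30 per regency), not the study's data, which were processed with IBM SPSS 23; a pooled-variance t statistic is computed directly, with a normal approximation for the two-sided p-value.

```python
# Sketch of the described analysis: a pooled-variance independent-samples
# t-test comparing a financial literacy component between two regencies.
# The scores are synthetic placeholders (n = 30 each), NOT the study's data.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
sampang = rng.normal(23.4, 3.0, 30)    # e.g. Financial Knowledge Index scores
bangkalan = rng.normal(20.4, 3.0, 30)

n1, n2 = len(sampang), len(bangkalan)
# Pooled variance over df = n1 + n2 - 2 = 58
sp2 = ((n1 - 1) * sampang.var(ddof=1)
       + (n2 - 1) * bangkalan.var(ddof=1)) / (n1 + n2 - 2)
t = (sampang.mean() - bangkalan.mean()) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Two-sided p-value via a normal approximation (adequate for df = 58)
p = erfc(abs(t) / sqrt(2))
print(f"t = {t:.3f}, p ~ {p:.4f}")
```

With the real data, the paper reports t = 5.076 with significance 0.000 for the Financial Knowledge Index, i.e. a significant difference between the regencies.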
Considering this, it is necessary to display a farming profile that can be a cornerstone of the next policy and strategy. The farmer profile referred to in this study comprises characteristics of farmers based on several variables relevant to the level of financial literacy. The farmer's age is the farmer's age at the time of the research, expressed in years; age relates to physical strength, spirit, experience, and the level of adoption. Based on the data obtained from the 60 sampled farmers, their ages ranged from 17 to 64 years. The number of farmers aged 20 to 30 years is 15 people, aged 31 to 40 years is 15 people, while 30 farmers are over 40 years old (Table 1). [20] conducted research on 73 smallholders in South Africa, finding that farmers' age is closely related to their access to financial institutions. The climate in Madura is characterized by two seasons: the western or rainy season from October to April, and the eastern or dry season. Soil composition and precipitation are not uniform; rainfall is highest on the high slopes, while the low slopes are not lacking, and Madura has fertile soil. Madurese farmers in general are proficient at reading the seasons and weather according to the characteristics of their region. Only on the alluvial land, and the clay mixed with lime on the highlands, is rainfall alone sufficient, and most of the land is cultivated from moorland. The types of crops cultivated are adapted to the natural environment of Madura, which tends to have a fairly long dry season; potential farming is confined to rice, corn, and some types of cassava. Table 1 shows that of the 60 sampled farmers, 22 planted paddy, 28 planted corn, and 10 planted cassava. A farmer in the countryside earns profit or income monthly or every three months at harvest time.
Although the harvest is abundant and profitable, farmers find it quite difficult to set aside money for saving. Rice farmers in Madura can harvest twice a year. Profits from planting rice rise and fall with unfriendly seasons, rampant pests, and the unavailability of pesticide. Not a few farmers keep their money in plastic and then put it under the pillow. This is ineffective, because the money is unconsciously depleted through use for other purposes. This is unlike modern farmers, who save their harvest money in the bank or use the harvest profit to buy more rice fields so as to plant more rice over a long period of time. There are several reasons for food farmers in Madura to save: agricultural capital (12 respondents), anticipating unexpected problems (15 respondents), buying valuables (15 respondents), and the future of their children or grandchildren (7 respondents) (Table 1). [5] found that extension contact and saving habits had a positive influence on farmers' access to the formal credit market. Distance in this study is operationally defined as the distance from the farmer's house to financial institutions, whether government banks, private banks, village unit cooperatives or agricultural cooperatives. A respondent's house being close to financial institutions makes it easy for farmers to reach financial access, making it easier to use financing for agricultural capital. Based on the research data from the 60 sampled farmers (Table 1), there are several categories of distance from the farmer's house to the bank: less than 1 km, 1-3 km, 3-5 km, and more than 5 km. The largest group, 30 people, live 3-5 km from the bank; others live 1-3 km away; and the remaining 2 people live less than 1 km away.
The findings of [19] state that the barriers to farmers' financial access are the high cost of rural loan transactions due to small credit sizes, high transaction frequency, the large geographical spread and heterogeneity of borrowers, as well as the lack of banking networks in rural areas. [1] said that credit access can improve technical efficiency and allocative efficiency in the rice farming sector. Credit affects the technical efficiency of farmers by allowing them to implement more capital-intensive production methods, namely buying more machine and market inputs. In addition, credit can also improve allocative efficiency by letting farmers replace non-market inputs with market inputs, and it increases the ability of farmers to bear risk.

B. Measuring the Financial Literacy Index of Food Crop Farmers in Madura

Financial literacy is believed to improve access to financial institutions, such as saving, buying insurance premiums, investing, accessing credit, and more. [9] stated that financial literacy is the ability to understand financial conditions as well as financial concepts and to change that understanding into proper behavior. The need for financial literacy among farmers arises because efforts to encourage increased food productivity are constrained by farmers' limited access to capital financing resources. Capital limitations also mean that the quantity and quality of the results obtained by farmers are not maximal. Since cultivation depends heavily on nature, the crop failure experienced by farmers is certainly a serious problem. Food farmers find it hard to get out of the poverty trap, let alone hold savings to cover their business losses; to restart their efforts, from the purchase of seedlings, fertilizers, pesticides and other agricultural production means, farmers are forced to seek capital from other parties. Based on Table 2,
it can be seen that in Madura Island, in both Sampang and Bangkalan regencies, the financial knowledge index has the highest average value: 23.37 for Sampang Regency and 20.37 for Bangkalan Regency, which means that many food farmers in the two regencies are able to meet the measured index related to knowledge of the principles of bank interest calculation, the time value of money, the general rules of the bank, the definition of inflation, and the risk and profit received. Meanwhile, the financial literacy component with the lowest average index value is financial attitude for both regencies: 19.20 for Sampang Regency and 18.33 for Bangkalan Regency. Financial literacy is an effort to increase public sensitivity to the financial services sector, which begins with knowing, then believing, and finally being skilled enough to engage actively; in other words, it aims to reach a society that has a good literacy level (well literate) in the financial services sector. The definition of financial literacy has been clearly expressed by, among others, [9], [7], who explain financial literacy as the ability to understand financial conditions and financial concepts and to properly change knowledge into behavior. Financial literacy also means how one governs one's finances in insurance, investing, savings and budgeting [8]. Financial literacy can be improved through financial education. [11] stated that efforts to improve financial literacy are an important way to raise savings and credit levels for poor and vulnerable consumers, especially those working in the informal sector. Constraints related to access to financing in the agricultural sector can be seen from two sides, namely the financial institution side and the customer side. The constraint from financial institutions is the absence of special treatment for the agricultural sector.
To date, policies related to financing the agricultural sector have always been integrated with other sectors, so that the agricultural sector is not competitive. The factors believed to cause difficult access to capital, in addition to those mentioned earlier, are that financial institutions as well as their products and services are not well known by the public. The community does not have adequate financial literacy, especially those who have low education and live in poor areas. Based on Table 3, it can be seen that the t-test results show a significant difference between food farmers in Sampang Regency and Bangkalan Regency for the financial knowledge index. This can be seen from the t value of 5.076 with a probability (significance) of 0.000; in other words, the average financial knowledge of food farmers in Sampang and Bangkalan regencies really differs, in the sense that food farmers in Sampang Regency have higher financial knowledge than food farmers in Bangkalan Regency. As for the financial behavior and financial attitudes of food farmers in the two regencies, there is no difference, as shown by the probabilities (significance) of the two variables, namely 0.98 and 0.180. Bangkalan and Sampang Regencies are areas that reflect the social characteristics of the Madurese community, which are unique and cannot be equated with those of other ethnic communities in Indonesia. The stereotypical description of Madurese people is that they are easily offended, easily suspicious of others, and temperamental or irritable. Madurese expressiveness, spontaneity, and openness are always manifested when they must respond to everything they face, especially the treatment of others. For example, if the treatment makes the heart happy, then frankly and without further ado, they express their gratitude immediately.
But on the contrary, they spontaneously react strongly if the treatment of them is considered unfair and hurts their feelings.

IV. CONCLUSION

The classic agricultural problems in developing countries, especially poor areas, are the low level of education of the farmers and their relatively old age, such as the 30s and above. This is the reality for farmers in poor areas of Madura, Indonesia. Another research finding is that the large distance from the farmer's house to the bank, 3-5 km, is a strong reason not to put money into financial institutions. Nevertheless, some farmers keeping up with the modern era do want to save their harvest money in the bank, citing as reasons saving for agricultural capital, anticipating urgent and unexpected events, and buying valuables. The majority of food farmers in Sampang and Bangkalan regencies have relatively moderate financial knowledge, with all its limitations. Their understanding is sufficient regarding the principles of bank interest calculation, the time value of money, the general rules of the bank, the definition of inflation, and the risk and profit received. However, their financial behaviour and attitudes are lacking. They lack the understanding that managing finances is very important, and awareness of depositing funds in banks is also very low, in line with feelings of anxiety about their wealth being managed by other people or institutions. The principal difference between food farmers in Sampang Regency and Bangkalan Regency lies in one of the financial literacy components, the Financial Knowledge Index. As for financial behaviour and financial attitude, food farmers in both regencies have an equal understanding.
Auto-MVCNN: Neural Architecture Search for Multi-view 3D Shape Recognition

In 3D shape recognition, multi-view based methods leverage the human perspective to analyze 3D shapes and have achieved significant outcomes. Most existing research works in deep learning adopt handcrafted networks as backbones due to their high capacity for feature extraction, and also benefit from ImageNet pretraining. However, whether these network architectures are suitable for 3D analysis or not remains unclear. In this paper, we propose a neural architecture search method named Auto-MVCNN which is particularly designed for optimizing the architecture in multi-view 3D shape recognition. Auto-MVCNN extends gradient-based frameworks to process multi-view images, by automatically searching the fusion cell to explore intrinsic correlations among view features. Moreover, we develop an end-to-end scheme to enhance retrieval performance through trade-off parameter search. Extensive experimental results show that the searched architectures significantly outperform manually designed counterparts in various aspects, and our method achieves state-of-the-art performance at the same time.

Introduction

Along with the emergence of large 3D repositories [Chang et al., 2015; Wu et al., 2015] and the development of Convolutional Neural Networks (CNNs), deep learning based 3D shape recognition has attracted strong interest in research [Su et al., 2015; Xie et al., 2017; Qi et al., 2017; Han et al., 2019]. Among different kinds of research works, multi-view based methods have achieved the best performance so far, in which images are generally first rendered from a set of views and then passed into CNNs to obtain a shape descriptor. Handcrafted networks are usually adopted as the backbone in current methods, where a variety of classic architectures (e.g. VGG [Simonyan and Zisserman, 2015], ResNet [He et al., 2016]) have been employed for feature extraction.
Over the past years, the majority of studies emphasized leveraging relationships among view images at the single-view feature level [Ma et al., 2019; He et al., 2019] or the multi-view feature level [Feng et al., 2018; Han et al., 2019], and therefore devoted efforts to designing sub-networks on top of the backbone. Despite the remarkable progress achieved in previous studies, the effect of the CNN extractor is not fully investigated, which restricts performance to some extent. Meanwhile, in order to avoid excessive memory usage, a lot of research works [Ma et al., 2019; He et al., 2019] develop a multi-stage training scheme which only uses the backbone to extract view features, while the relation between feature extraction and feature fusion is neglected. These drawbacks may not only degrade performance but also be time-consuming and increase computation cost. As the neural network plays a crucial role in 3D shape recognition, it is desirable to design an efficient and powerful architecture that can process multi-view images with an end-to-end scheme. In recent years, due to the effectiveness of Neural Architecture Search (NAS) compared with human-designed structures [Fang et al., 2020; Guo et al., 2020], its application field has also expanded to various benchmarks [Real et al., 2019; Liu et al., 2019a], whereas NAS was previously dedicated to image classification. Besides, it is worth mentioning that the combination of multiple loss functions is essential in multi-task learning, which has a large impact on neural architecture design. Unfortunately, many AutoML methods using reinforcement learning and evolutionary algorithms have extreme computational demands. And there is a relatively small amount of work that studies the balance of training as well as associated techniques in the searching process. Darts [Liu et al., 2019b] is a well-known gradient-based framework that largely reduces computation complexity.
However, directly transferring the Darts algorithm to 3D shape recognition is not an advisable choice. Firstly, Darts processes a single image instead of multi-view images, in which case the correlation across multiple views will be neglected. In addition, we aim to develop a unified model for both classification and retrieval tasks, which differs from Darts, which focuses on single-task optimization. To automatically search a suitable neural network for 3D shape analysis, in this paper we propose Auto-MVCNN, which is particularly adapted for multi-view shape recognition. Our network architecture contains three parts: a shared backbone for view feature extraction, a fusion module for multi-view feature fusion, and a linear combination of loss functions for multi-task learning. The pipeline of our method is shown in Fig. 1. In the network, we propose a novel fusion cell which is specially designed for processing sequences of view features. By continuous relaxation of discrete operations, it inherits the efficiency and effectiveness of Darts and can be integrated into the existing framework seamlessly. The equipped search space enables us to find appropriate fusion patterns that explore the correlation among views. For supervision signals, in addition to the shape classification loss on the top, we also add a view classification loss function for view feature enhancement and another retrieval loss function for the retrieval task. The trade-off parameters that linearly combine these loss functions are searched by an end-to-end scheme. To evaluate the performance of Auto-MVCNN, we carry out experiments on two large-scale datasets and conduct comparisons from various aspects. Compared to handcrafted networks, with or without ImageNet pre-training, our searched networks show superiority in regard to both performance and computation-resource saving.
Besides, more experiments are implemented to compare our method with the state of the art, indicating the effectiveness of our proposed framework. Finally, we also analyze the impact of the number of initial channels and the stability of the searching process. To summarize, the contribution of our paper is four-fold: 1. To the best of our knowledge, this is the first work on neural architecture search in the field of multi-view 3D shape recognition, replacing manual design with an automatic search mechanism. 2. We propose a novel fusion cell to process multi-view features that can be integrated into the existing framework seamlessly. 3. We develop a simple scheme that dynamically searches the loss weights of multiple loss functions, achieving an appropriate training balance for multi-task learning. 4. Extensive experiments show that the searched CNNs achieve state-of-the-art performance, using much fewer parameters than other baselines.

2 Related Work

Multi-view 3D Shape Recognition

On the basis of different formats of the processed 3D data, methods in 3D shape recognition can be roughly divided into two categories: model based methods [Osada et al., 2002; Xie et al., 2017; Li et al., 2020] and view based methods [Wang et al., 2017a]. In this section, we mainly introduce multi-view based methods, which leverage 2D views' information to construct 3D descriptors. MVCNN [Su et al., 2015] is a typical framework in which the whole pipeline is divided into two parts. The first part is the backbone for extracting view features and the other part is responsible for processing further shape features. Between the two parts, the view features are aggregated into a single shape representation through the element-wise maximum operation. In , a postprocessing algorithm adopting the inverted file is proposed for fast retrieval. Recently, leveraging correlation among views has become more and more popular in research works.
GVCNN [Feng et al., 2018] introduces a multi-level descriptor by exploring the view-group-shape hierarchical correlation, which largely improves performance on 3D shape classification and retrieval. [Huang et al., ] develops a local 3D shape descriptor, which makes full use of relations over points on the shape and can be directly utilized for a wide range of shape analysis tasks. Many research works [Feng et al., 2018; He et al., 2018] show that metric learning is essential in the 3D shape retrieval task. [Li et al., 2019c] designs two loss functions to separately deal with these two distances. The flexible combination property of the proposed loss functions provides effective tools to enhance retrieval performance.

Neural Architecture Search

The basic ideology of NAS is to find candidate network structures through a search strategy in a defined search space, based on the feedback obtained from evaluation. The search space has developed from the entire structure at the beginning to stacked cells. Cell-based search can greatly narrow the search space and improve search efficiency, and has been applied in numerous subsequent works. However, search strategies based on reinforcement learning, such as the Q-learning algorithm in MetaQNN [Baker et al., 2017], require high computational complexity. AmoebaNet [Real et al., 2019] develops evolutionary algorithms instead of reinforcement learning to optimize performance. Although it achieves better results, it still takes 450 GPUs and 7 days in a row to complete the experiment. NAS research has thus turned to the very obvious problem of heavy computation, which has enabled gradient-based methods and other efficient methods to emerge. ENAS [Pham et al., 2018] employs weight sharing to accelerate validation, where the cell-based search mode greatly improves experimental results. Similar to ENAS, DARTS [Liu et al., 2019b] also searches subgraphs in designed cells and conducts weight sharing as well.
EfficientNet and MobileNetV3 [Howard et al., 2019] use network search to obtain a fixed set of scaling factors to scale the width, depth, and resolution of the network respectively, achieving better efficiency and accuracy. To search for an appropriate loss function for face recognition, AM-LFS [Li et al., 2019a] employs the REINFORCE [Williams, 1992] idea to automatically search for appropriate hyperparameters of the loss function, with good transferability at the same time. Auto-MVCNN Network In our method, a view image sequence of length N_v is input to a shared backbone to obtain view feature vectors F = [f_1, f_2, ..., f_{N_v}] ∈ R^{m×N_v}. Then a shape descriptor d ∈ R^n is generated by fusing F. The pipeline is illustrated in Fig. 1. The whole neural network, called the supernet, is formed of multiple cells and multi-task loss functions. It is divided into two parts according to function: the backbone is used to extract view features, and the fusion module is designed for view feature fusion. The backbone consists of a number of normal cells interleaved with several reduction cells, presented in a stacking manner. Different from the backbone, the fusion module is built from fusion cells. The details of these components are described in Sec. 3.2 and Sec. 3.3. In our method, both the classification and retrieval tasks are fulfilled with multiple supervision signals, and we treat all the loss functions as playing important roles in the network architecture. Besides the classification loss L_1 on the top of the supernet, we add an auxiliary classification loss L_2 and an auxiliary retrieval loss L_3 to enhance view features and boost retrieval performance, respectively. The formulation of the loss functions is described in Sec. 3.4. Backbone Search In the searching stage, the cell is the basic searching component that contains the combination of all candidate operations. 
Formally, a cell C_k is defined as a directed acyclic graph (DAG) that contains an ordered sequence of 7 nodes x^(1), x^(2), ..., x^(7). Each node represents a latent tensor (i.e., a feature map) and each edge e_{i,j} consists of multiple parallel network layers. The two input tensors x^(1) and x^(2) are the outputs of the previous cells C_{k-1} and C_{k-2}, and the output tensor is the concatenation of the intermediate nodes, x^(7) = concat(x^(3), x^(4), x^(5), x^(6)). For an intermediate node, the computation can be formulated as x^(j) = Σ_{i<j} ō^{(i,j)}(x^(i)), where ō^{(i,j)} is a continuous relaxation of the search space O: ō^{(i,j)}(x) = Σ_{o∈O} [exp(α_o^{(i,j)}) / Σ_{o'∈O} exp(α_{o'}^{(i,j)})] · o(x). Here, α is the architecture parameter that represents the weight of operation o on the edge e_{i,j}. The continuous relaxation allows us to optimize α by gradient descent. O is the set of candidate operations, and we choose the same set as previous works [Liu et al., 2019b; Liu et al., 2019a] to keep consistency: 3 × 3 and 5 × 5 separable convolutions, 3 × 3 and 5 × 5 atrous convolutions, 3 × 3 average pooling, 3 × 3 max pooling, skip connection, and zero. A cell that keeps the same spatial resolution as the previous cell is a normal cell, and one that divides the spatial dimension by 2 is a reduction cell. In both the training and inference stages, the view features F are extracted simultaneously from the backbone. Fusion Module Search The NAS framework of [Liu et al., 2019b] focuses on image classification and generates a single view feature. Though feasible, a simple combination (e.g., max-pooling, view-wise addition) of view-level features leads to information loss that largely degrades performance. Drawing on the experience of previous works [Feng et al., 2018; He et al., 2019], we observe that leveraging the spatial information and the feature correlation among views is essential for obtaining competitive performance. Following this principle, we design the fusion module in Auto-MVCNN to aggregate view features into a compact and discriminative shape feature. 
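The softmax-weighted mixing of candidate operations on one edge can be sketched in framework-free Python. This is only an illustrative sketch: the toy `ops` list (identity, zero, a doubling map) stands in for the paper's convolution/pooling set, and the 1-D list stands in for a feature map.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def mixed_op(x, ops, alphas):
    """Continuous relaxation on one edge: a softmax(alpha)-weighted
    sum of every candidate operation applied to the same input."""
    w = softmax(alphas)
    outs = [op(x) for op in ops]
    # element-wise weighted sum of the operation outputs
    return [sum(wk * out[t] for wk, out in zip(w, outs)) for t in range(len(x))]

# toy candidate operations on a 1-D "feature map"
ops = [
    lambda x: x,                      # skip connection
    lambda x: [0.0] * len(x),         # zero
    lambda x: [2.0 * v for v in x],   # stand-in for a conv layer
]
alphas = [0.0, 0.0, 0.0]  # equal architecture weights -> softmax gives 1/3 each
y = mixed_op([3.0, 6.0], ops, alphas)
```

During search, gradients flow into `alphas` through the weighted sum; discretization later keeps only the strongest operations.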
This module consists of two sequential fusion cells that are developed to process the sequence of view features F ∈ R^{m×N_v}. In order to integrate the fusion cell into the existing optimization framework, the fusion cell has a search space O_f, obtained from O by applying a size adaption to the kernel size of every operation. Concretely, we regard F as a three-dimensional tensor of shape m × N_v × 1 with m channels and spatial dimension N_v × 1, and the size adaption changes the kernel size from k × k to k × 1 for all operations in O. In this way, we can apply all the operations in O_f to F. The size adaption of operations is illustrated in Fig. 2. The fusion cell is then formed by linking these operations using Eq. 2. It is worth pointing out that the size adaption is equivalent to padding F with zeros such that its shape is m × N_v × k, while the operations remain the same as in O. The compatibility of the fusion cell brings several useful properties. Each operation o ∈ O_f preserves the spatial relationship of its input tensors and can thus model the spatial information among views. The correlation among view features can also be revealed by the variety of operations. In addition, the fusion cell inherits the diversity of combinations among different layers, which enables the fusion module to search for novel fusion patterns. Trade-off of Loss Functions Auto-MVCNN aims to develop a network for both the shape classification task and the shape retrieval task, where training balance is extremely important. There are three loss functions in total in the supernet. L_1 and L_2 are softmax losses located at the top and the middle of Auto-MVCNN, in charge of shape classification and view feature enhancement, respectively. L_3 is the loss function proposed in [Li et al., 2019c], which is used for enlarging the inter-class distance (Eq. 3), where M and y are the batch size and the ground-truth label, respectively. 
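To make the k × k → k × 1 size adaption concrete, here is a minimal pure-Python sketch of a length-k kernel sliding along the view axis of F (shape m × N_v) with "same" zero padding; the averaging kernel is only an illustrative stand-in for a learned convolution, not an operation from the paper:

```python
def view_conv(F, kernel):
    """Apply a k x 1 kernel along the view axis of F (m x Nv lists),
    with 'same' zero padding. The output keeps the same number of views
    while mixing the features of neighboring views."""
    k = len(kernel)
    pad = k // 2
    m, nv = len(F), len(F[0])
    out = [[0.0] * nv for _ in range(m)]
    for c in range(m):
        row = [0.0] * pad + F[c] + [0.0] * pad  # zero padding along views
        for v in range(nv):
            out[c][v] = sum(kernel[t] * row[v + t] for t in range(k))
    return out

F = [[1.0, 2.0, 3.0, 4.0]]            # m = 1 channel, Nv = 4 views
y = view_conv(F, [1 / 3, 1 / 3, 1 / 3])  # a 3 x 1 kernel
```

The key property illustrated here is the one the text relies on: a k × 1 operation preserves the view axis (`len(y[0]) == 4`), so fusion cells can be stacked like ordinary backbone cells.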
In general, classification places emphasis on correct label prediction, while for retrieval the feature distribution is more important. This phenomenon is also observed in [Feng et al., 2018], where offline metric learning algorithms are adopted to improve retrieval performance. In this paper, by contrast, we tackle the issue by adding the metric learning loss function L_3 with an end-to-end training scheme. In practice, multi-task loss functions are linearly combined, L = Σ_i ω_i L_i. Since our target is searching for a proper training balance, we normalize the loss weights using trade-off parameters λ = (λ_1, λ_2, λ_3), and the total objective loss function is formulated as L_total = Σ_i λ_i L_i. An appropriate value distribution of λ can enhance performance without impeding the classification task, while a relatively quick drop of loss L_i means its gradient is large in backpropagation, which hampers the training of the other tasks. However, direct optimization via minimizing L_total is infeasible: if L_i is the minimum of the three losses, λ_i will simply grow large, which has nothing to do with balancing the training. To tackle this issue, we develop a scheme that searches for the training balance by leveraging performance on the validation set. Specifically, let L_i(t) denote the loss value at the t-th iteration on the validation set, and define r_i(t) = L_i(t-1)/L_i(t) as the training rate of L_i. We propose a regularization term L_grad that directly involves λ_i and the training rate: L_grad penalizes λ_i when its corresponding loss drops quickly and, in turn, augments the weight of a task when its training is relatively slow. Optimization In our method, searching the neural network is a bilevel optimization problem with two different sets of parameters: the network parameters W and the architecture parameters (α, λ). 
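The exact formula of L_grad does not survive in this text, so the sketch below assumes one plausible form, L_grad = Σ_i λ_i · r_i(t), chosen only because it matches the stated behavior (fast-dropping losses, i.e. large r_i, contribute the largest penalty on their weight); treat it as an illustration, not the paper's definition:

```python
def training_rates(prev_losses, cur_losses):
    """r_i(t) = L_i(t-1) / L_i(t): values > 1 mean the loss is dropping."""
    return [p / c for p, c in zip(prev_losses, cur_losses)]

def grad_penalty(lambdas, rates):
    # Assumed form of L_grad: descending on lambda pushes down the weight
    # of whichever task is currently training fastest (largest r_i).
    return sum(l * r for l, r in zip(lambdas, rates))

prev = [2.0, 1.0, 4.0]
cur = [1.0, 0.9, 3.9]              # task 0 drops fastest
rates = training_rates(prev, cur)  # [2.0, ~1.11, ~1.03]
penalty = grad_penalty([1.0, 1.0, 1.0], rates)
```

Minimizing this penalty with respect to λ shrinks λ_0 first here, which is exactly the rebalancing effect the text describes.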
We follow the first-order approximation proposed in DARTS [Liu et al., 2019b] and split the training data manually into two disjoint sets, D_train and D_val. The optimization of W and (α, λ) is carried out in an alternating fashion on D_train and D_val until convergence, as shown in Alg. 1. The stability of the search is discussed in the Appendix. Algorithm 1: The Auto-MVCNN search algorithm Input: architecture parameters (α, λ), network parameters W, D_train, D_val while not converged do Sample a mini-batch from D_val, calculate L_total(W, α, λ) and L_grad(W, α, λ); Update α by descending ∇_α L_total(W, α, λ); Update λ by descending ∇_λ L_grad(W, α, λ); Sample a mini-batch from D_train, calculate L_total(W, α, λ); Update W by descending ∇_W L_total(W, α, λ); end Evaluation After the search converges, the final cell for evaluation is pruned by selecting the non-zero layers in each connection. The selection is achieved by retaining the top-k strongest operations from node i to node j; we use k = 2 in our method. The extracted cells are then stacked to form the supernet and retrained for evaluation. The final searched cells for evaluation are shown in Fig. 3. For the sake of stability, the loss weights need to be rescaled so that the loss weight of L_1 is 1, i.e., (1, 0.95, 2.7), in the retraining. When we retrain the architecture, its initial number of channels is changed to 24 and 36 (to match the size of popular NAS architectures), generating our two representative models AM_24 and AM_36. Dataset and Metrics To evaluate the performance of our method, we conduct experiments on the Princeton ModelNet dataset [Wu et al., 2015] and the ShapeNetCore55 dataset [Savva et al., 2016b]. ModelNet is a large-scale 3D shape dataset which contains 127,915 3D CAD models divided into 662 categories. We use the extracted subset ModelNet40, which includes 12,311 models cleaned manually from 40 categories, in our evaluation. 
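The DARTS-style discretization step (keep the top-k = 2 strongest non-zero operations feeding each node) can be sketched as follows; the operation names and the toy α values are illustrative, not searched values from the paper:

```python
def prune_edges(edge_alphas, k=2):
    """For each target node j, keep the k incoming edges whose strongest
    non-'zero' operation has the largest architecture weight.
    edge_alphas maps (i, j) -> {op_name: alpha_weight}."""
    candidates = {}
    for (i, j), alphas in edge_alphas.items():
        # strongest non-zero operation on this edge
        best_op, best_w = max(
            ((op, w) for op, w in alphas.items() if op != "zero"),
            key=lambda t: t[1],
        )
        candidates.setdefault(j, []).append((best_w, i, best_op))
    final = {}
    for j, cands in candidates.items():
        cands.sort(reverse=True)                      # strongest first
        final[j] = [(i, op) for _, i, op in cands[:k]]
    return final

edge_alphas = {
    (1, 3): {"zero": 0.9, "skip": 0.05, "sep_conv_3x3": 0.05},
    (2, 3): {"zero": 0.1, "skip": 0.3, "sep_conv_3x3": 0.6},
    (0, 3): {"zero": 0.2, "skip": 0.7, "sep_conv_3x3": 0.1},
}
cell = prune_edges(edge_alphas, k=2)
```

Here node 3 keeps `skip` from node 0 and `sep_conv_3x3` from node 2; the edge dominated by `zero` is dropped, matching the "selecting non-zero layers" rule.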
We follow the same training/testing split as described in [Su et al., 2015], randomly selecting 100 unique models per category from the subset, where 80 models are used for training and the rest for testing. The evaluation metrics adopted on this dataset include the (per-class) classification accuracy, the mean average precision (mAP), and the area under curve (AUC). Their detailed definitions can be found in [Wu et al., 2015]. The ShapeNetCore55 dataset, introduced in the Shape Retrieval Contest (SHREC) 2016 competition track, contains 51,190 3D shapes from 55 common categories; it is a subset of the full ShapeNet dataset with clean 3D models. Each model in this dataset carries a label from the 55 categories plus a fine-grained subcategory drawn from 204 subcategories. The dataset comes in two versions, named the "normal" version and the "perturbed" version: the 3D shapes in the former are aligned, while the latter is more challenging, with all shapes rotated randomly. In terms of the training and testing split, 70% of the shapes in the dataset are provided for training and another 10% for validation, with the remaining 20% forming the testing set. Refer to [Savva et al., 2016b] for the definitions of the F-Measure (F-1) and NDCG metrics used in this paper. Implementation Details The experiments are carried out on a server with four Nvidia GTX2080Ti GPUs, an Intel Xeon CPU E5-2678 v3, and 128 GB RAM. Before training and testing, each shape is rendered to generate 12 images of size 224 × 224, following the same protocol as . Architecture search on ModelNet40. In the experimental settings, we employ 3 normal cells, 2 reduction cells and 2 fusion cells to build the architecture space. The supernet contains a stem at the bottom with 7 cells stacked sequentially; Fig. 1 displays the architecture. During the searching process, half of the ModelNet40 training data is set aside as the validation set D_val. 
The batch size is 36 and the initial number of channels is 16. Please refer to the Appendix for the other hyperparameter settings. Architecture evaluation on ModelNet40 and ShapeNetCore55. To evaluate the performance of the searched architectures, we retrain the derived supernet on the target dataset. For retraining, the batch size is set to 36 and SGD with an initial learning rate of 0.01 is adopted. The shape descriptor d is extracted for the retrieval task using the cosine distance. For a fair comparison with other methods, we also pretrain the supernet on the ImageNet classification benchmark. We propose two models, AM_c24 and AM_c36, with initial channels 24 and 36 respectively, for evaluation. Please refer to the Appendix for the detailed hyperparameter settings. Main Results Comparison with hand-crafted networks. For the comparative experiments, we train several popular hand-crafted networks in this domain using the same training protocol. The comparative experiments involve several aspects, the results of which are given in Tab. 1. We employ the number of network parameters (Params) and the multiply-accumulate operations (MACs) to measure the network size and the computation cost. Cputime is the network inference time averaged over 10 runs. Their values are obtained by inputting an image sequence (12 × 3 × 224 × 224) into the network. For a comprehensive comparison, we take the following factors into consideration: (1) the position of the fusion layer. AM_c36 achieves better performance compared to AM_c24. Our proposed Auto-MVCNN is also evaluated on the ShapeNetCore55 perturbed dataset. This perturbed dataset is more challenging as all shapes are rotated randomly. Note that the architecture is still the one searched on ModelNet40. We choose the participants of the competition [Savva et al., 2016a; Klokov and Lempitsky, 2017] and other popular methods for comparison. As shown in Tab. 3, our method (AM_c36) outperforms the others in both the mAP and NDCG metrics. 
We attribute the slightly lower performance on F-Measure to the transfer across datasets. Ablation Study Effect of the fusion module. To demonstrate the effectiveness of our fusion cells, we manually replace the fusion cells in our searched network with normal cells and conduct experiments on ModelNet40. Note that we maintain the same number of layers and the same supervision signals. We also choose three popular cell-based NAS networks of similar network size to ours for comparison (their supervision is a single softmax loss). As these architectures are searched for single-image classification, we adopt a view max-pooling operation to fuse the view features after the penultimate layer, as in [Su et al., 2015]. As we can see from Tab. 4, owing to the ability of the fusion module to explore the intrinsic correlation among view features, our learned network outperforms the other NAS architectures. When compared with [Howard et al., 2019] and , the following two other factors also contribute to the performance improvement: (1) our network is derived directly from the ModelNet40 dataset while the others are searched on classification benchmarks; (2) multiple supervision signals with appropriate loss weights enhance the performance on both shape classification and shape retrieval. Effect of dynamic loss balance. To reveal the superiority of our loss weight balancing method, we choose several commonly adopted loss combinations and compare their performance. The results are shown in Tab. 5. Note that the values in the loss combinations are rescaled before retraining (see Sec. 5.2). We can see from the first three experiments that both L_2 and L_3 are essential for competitive retrieval performance, and our result is better than the others. When a loss combination is close to ours, its corresponding performance is also similar to ours. 
Conclusions In this paper, aiming at the problem of multi-view 3D shape recognition, we propose a novel neural architecture search framework, named Auto-MVCNN, to optimize architectures. It abandons hand-crafted networks as the backbone for the first time, which greatly reduces the number of parameters and the computational complexity. The proposed fusion cell enables the whole network to explore the intrinsic connections among view features automatically, fully utilizing the 3D information. In addition, we apply a searching scheme for the training balance in an end-to-end fashion, improving both classification and retrieval performance. Extensive experiments show that our Auto-MVCNN achieves the best performance in various respects and confirm its effectiveness. A.1 Hyperparameter settings for the search. We adopt stochastic gradient descent (SGD) with an initial learning rate of 0.01, momentum 0.9, and weight decay 3e-4 to optimize the network weights W. The architecture parameters (α, λ) are initialized from a Gaussian distribution with mean 0 and standard deviation 1e-3. λ is optimized using the same optimizer as W, with a learning rate of 0.05. α is optimized by Adam [Kingma and Ba, 2015] with an initial learning rate of 3e-4, momentum (0.5, 0.999), and weight decay 1e-3. For the sake of stability, gradient clipping is adopted and a warmup scheme is conducted during the searching process. A.2 Pretrain on ImageNet In 3D shape recognition, most approaches take advantage of an ImageNet-pretrained network to boost their performance. For a fair comparison, we also train the searched network on the ImageNet classification benchmark. Note that we train on separate view images for the classification task, and therefore only the parameters of the first 5 cells are updated. The training process uses a batch size of 800 for 120 epochs in total. SGD with an initial learning rate of 0.1 and weight decay 3e-4 is used for the optimization. 
A.3 Architecture evaluation on ModelNet40 and ShapeNetCore55. To evaluate the performance of the searched architectures, we retrain the derived supernet on the target dataset. For retraining, the batch size is set to 36 and SGD with an initial learning rate of 0.01 is adopted. The weight decay is 1e-3 without pretraining and 3e-4 with pretraining. The shape descriptor d, with dimension (initial channels × 16), is used to conduct retrieval. A.4 Stability of the search. Since the searching process is sensitive to initialization, the final searched architectures are generally distinct from one another due to different random seeds. To investigate the performance stability of the searching process, we run the search experiment 5 times with the same hyperparameters but different random seeds. The results of the experiments are shown in Tab. 6.
Whole lifecycle observation of single‐spore germinated Streptomyces using a nanogap‐stabilized microfluidic chip Abstract Streptomyces is a model bacterium to study multicellular differentiation and the major reservoir for antibiotics discovery. However, the cellular‐level lifecycle of Streptomyces has not been well studied due to its complexity and lack of research tools that can mimic their natural conditions. In this study, we developed a simple microfluidic chip for the cultivation and observation of the entire lifecycle of Streptomyces development from the single‐cell perspective. The chip consists of channels for loading samples and supplying nutrients, microwell arrays for the seeding and growth of single spores, and air chambers beside the microwells that facilitate the development of aerial hyphae and spores. A unique feature of this chip is that each microwell is surrounded by a 1.5 µm nanogap connected to an air chamber, which provides a stabilized water–air interface. We used this chip to observe the lifecycle development of Streptomyces coelicolor and Streptomyces griseus germinated from single spores, which revealed differentiation of aerial hyphae with progeny spores at micron‐scale water–air interfaces and air chambers. Finally, we demonstrated the applicability of this chip in phenotypic assays by showing that the microbial hormone A‐Factor is involved in the regulatory pathways of aerial hyphae and spore formation. The microfluidic chip could become a robust tool for studying multicellular differentiation, single‐spore heterogeneity, and secondary metabolism of single‐spore germinated Streptomyces. 
INTRODUCTION Streptomyces is a genus of filamentous bacteria that play crucial roles in various habitats with their broad range of metabolic and biochemical processes, including degradation of chitin and cellulose [1][2][3]. They are the most important natural source of bioactive compounds, such as antibiotics and antitumor agents, producing two-thirds of the antibiotics of medical and agricultural interest [4][5][6]. In their natural conditions, Streptomyces grows at air-liquid-solid interfaces in soil within porous structures that retain water in micron-sized cavities and channels. Nutrient, oxygen, and water transport, and other environmental factors profoundly impact their physiology, morphological development, and secondary metabolism 7. Recent research with advanced genetic tools has made significant progress in uncovering the physiological and metabolic potential of Streptomyces for natural products. However, many cryptic secondary metabolite pathways of Streptomyces remain either silent or poorly expressed for cells grown on agar plates or in liquid media under standard laboratory conditions, presumably due to the inability to recreate the nutritional and environmental conditions of their natural soil habitat. 
Microfluidics has emerged as a new tool to study microbes, offering many advantages, such as micrometer-scale spatial resolution and flexible temporal control of nutrient exchange and chemical gradients 8. Microfluidic techniques have been used to study microbiology in many ways 9,10, such as single-cell isolation and cultivation 11, bacterial chemotaxis 12, quorum sensing 13, and population dynamics 14. Although high-throughput enrichment and sorting of soil-derived Actinobacteria in microfluidic droplets have been described 15, microfluidic devices that allow the development and differentiation of Streptomyces are rarely reported. The challenge in Streptomyces cultivation is that their growth and differentiation rely on a stabilized water-air interface. When cultivated on solid agar, Streptomyces has a differentiated lifecycle with precisely controlled stages, including germination of vegetative hyphae in the substrate, formation of hydrophobic aerial hyphae, and development of airborne spores that allow dispersion 16. However, in a standard liquid medium, Streptomyces mainly exists as vegetative hyphae that tangle together to form many small pellets and clumps with very few aerial hyphae 17. Therefore, direct miniaturization of a standard liquid culture is not an ideal approach for studying the development of Streptomyces. 
To overcome these challenges, we describe a microfluidic chip integrating liquid-containing microwells and air chambers to establish a stabilized water-air interface for cultivation, whole-lifecycle observation of Streptomyces differentiation, and phenotypic assay. The chip can achieve micron-scale spatial resolution, maintain long-term culture conditions, initialize time-dependent chemical exchange, and enable single-cell cultivation and observation. Thus, the chip is a versatile tool for exploring the development and behavior of Streptomyces under well-controlled circumstances. We evaluated the chip's performance by single-cell cultivation of two representative model Streptomyces strains. Moreover, we performed a precisely controlled phenotypic assay with the A-Factor analog β-keto SCB2, which is involved in the autoregulation of secondary metabolism and morphological differentiation in Actinomycetes 18. Design of the microfluidic chip We designed a microfluidic chip with an array of microwells for whole-lifecycle observation of Streptomyces (Figures 1 and 2A). The chip incorporates two essential design features: (i) a stable water-air interface enabled by nanogaps between the microwells and air chambers; (ii) well-controlled nutrient and chemical exchange through the main channel. We mimicked the natural habitat of Streptomyces by creating liquid-containing microwells bridged with air chambers via nanogaps for vegetative and aerial growth, respectively (Figure 1A). The glass plates exhibited a hydrophobic surface after fluorinated silanization (Figure 2B-E). The nanogap between the two assembled glass plates is 1.5 μm, generated by etched nanopatterns on the bottom plate. Stable water-air interfaces form at the nanogap edge upon pipette-filling of the microwells. As a result, the high surface tension of the liquid-air interface at the nanogap ensures long-term observation without deleterious drift or shift, and the aerial hyphae (1 μm) can readily pass through the gap 
(Figure 1C). The chip contains 120 microwells symmetrically distributed along three parallel channels for observation of multiple single-spore events, which facilitates the study of cellular heterogeneity (Supporting Information: AutoCAD design). We regard the liquid surface as a spherical surface, so that the capillary pressure ΔP can be derived from the Young-Laplace equation, ΔP = 2σ·cos θ / r, where σ is the liquid surface tension (7.28 × 10^-2 N/m); θ is the contact angle between the liquid surface and the solid plate (the maximum value is 105°) (Figure 2B); and the radius r equals one-half of the gap height (0.75 µm) (Figure 2C). Thus, the magnitude of the capillary pressure ΔP is calculated to be 5.03 × 10^4 Pa, which is large enough to form a stable gas-liquid interface (Figure 2D,E). Streptomyces spores were appropriately diluted and loaded into microwells to achieve single-spore isolation in the microwells following a Poisson distribution. Spores can germinate, form vegetative hyphae in the microwells, pass through the nanogap, differentiate into aerial hyphae in the air chambers, and eventually develop into mature spores. The lifecycle of Streptomyces can last for several days, and thus we infused culture media continuously from the channel to guarantee an adequate nutrient supply. The mycelia are not disturbed, because of the narrow joint between the channel and the microwells. The entire developmental process could be monitored using an inverted microscope. On-chip lifecycle observation of Streptomyces coelicolor S. coelicolor is a model organism of Streptomyces; the complete genome of the type strain S. coelicolor M145 has been sequenced, and it is used in many studies of Streptomyces growth and development 2. We cultivated S. 
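The quoted capillary pressure can be checked numerically. This is an illustrative calculation, not part of the paper, assuming the Young-Laplace form ΔP = 2σ cos θ / r implied by the stated values (the magnitude is what matters for pinning the meniscus at the nanogap):

```python
import math

# Values quoted in the text
sigma = 7.28e-2            # liquid surface tension, N/m
theta = math.radians(105)  # maximum contact angle on the silanized glass
r = 0.75e-6                # half of the 1.5 um gap height, m

# Young-Laplace capillary pressure across a spherical meniscus
dP = abs(2 * sigma * math.cos(theta) / r)  # ~5.0e4 Pa
```

The result reproduces the stated 5.03 × 10^4 Pa to within rounding of the inputs, supporting the claim that the interface is strongly pinned.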
coelicolor in the minimal medium on the chip and observed its entire lifecycle (Figure S1 and Movie S1). After 9 h of dormancy, one germ tube emerged from the spore, then elongated and formed branches. Each branch showed apical growth, indicating that the group of cells grew in an exponential phase in the microwell. The hyphae could spread randomly in the liquid medium because there was no solid substrate confinement. The hyphae gradually approached the water-air interface, broke the surface tension, and grew into the air chamber at 28 h (growth almost perpendicular to the edge of the microwell). The aerial hyphae progressively elongated and formed branches in all directions. There were curls and spirals at the ends of the hyphae. Meanwhile, the vegetative hyphae developed many layers and eventually almost filled the entire microwell. The vegetative and aerial hyphae stopped growing after 60 h (Movie S1). When cultivated in a flask-scale liquid medium, aerial hyphae formation and sporulation are blocked in most Streptomyces strains 19, but when cultured in bioreactors, some strains may be able to sporulate due to stress conditions such as strong agitation 20. It has been suggested that nutrient depletion and the reuse of materials lead to hyphal differentiation in liquid medium 21, and that programmed cell death also triggers the differentiation process in liquid and solid media 17. Although the specific signals are unclear, N-acetylglucosamine produced by the decomposition of peptidoglycan may be one of them 22. However, single-cell whole-lifecycle development has not been observed before. In this study, we cultivated S. 
coelicolor in the chip and found that the vegetative hyphae did not lyse; instead, they continued to grow even after the emergence of aerial hyphae. Furthermore, the culture medium was supplied continuously into the chip such that nutrient depletion did not occur, indicating that the differentiation phenomenon may not necessarily be correlated with nutrient depletion. On-chip differentiation of S. coelicolor in yeast extract-malt extract (YEME) medium S. coelicolor can form aerial hyphae and spores in standing liquid cultures with minimal media but not rich media 23. Here, we inoculated single spores in microwells with a nutrient-rich YEME medium and cultivated the samples for several days to test whether they could differentiate (Figure 3). The results showed that S. coelicolor still had a complete lifecycle in liquid YEME medium, including vegetative hyphae in the microwells (Figure 3A) and aerial hyphae in the air chambers (Figure 3B), with spiral spore chains on the aerial hyphae (Figure 3C). Scanning electron microscopy (SEM) revealed that the hyphae in the microwells had a relatively smooth surface (Figure 3A). Aerial hyphae in the air chamber had a layer of well-organized hydrophobic proteins 24 (Figure 3B). The mature spores formed spiral chains with compartments between each spore (Figure 3C). These results are consistent with the development of S. coelicolor grown on solid plates and with previous reports on the microscopic features of hydrophobic proteins 25. Accordingly, S. coelicolor completed entire lifecycles in the liquid environment regardless of the nutrient status. An earlier study showed that the expression of most genes is comparable between liquid and solid cultures, including genes involved in hydrophobic cover formation and even a few genes regulating the early stages of sporulation 26. Genes involved in the final stages of hydrophobic cover/spore maturation are upregulated in solid cultures compared with liquid cultures. These findings suggest that S. 
coelicolor can differentiate in both solid and liquid cultures. Transcripts and proteins are ready before aerial hyphae formation. Once S. coelicolor senses the existence of air, it grows aerial hyphae and develops into mature spores. In standing liquid cultures, a physical constraint may hinder aerial hyphae formation. A nutrient-rich medium contains more complex ingredients, which are likely to attach to the hyphal surface and reduce its hydrophobicity, making it difficult for the aerial hyphae to erect through the liquid-air interface. Interestingly, we observed the merging of aerial hyphae when we cultivated S. coelicolor in YEME (Figure 3D and Movie S2). Two hyphal tips grew toward each other until they contacted and fused. We also recorded hypha-to-peg or hypha-to-side fusion, as a hyphal tip approached the side of another existing hypha and merged with it. This universal phenomenon in Streptomycetes is called hyphal anastomosis or hyphal fusion; it was first confirmed in S. scabies 27 and is considered to be very important for intrahyphal communication, nutrient/water translocation, and general homeostasis within a colony 28. On-chip observation of wild-type Streptomyces griseus Next, we applied the chip to the cultivation of another model organism, S. griseus, to investigate its differentiation in liquid cultures. We cultured S. griseus in liquid minimal medium (MM) and YEME medium, respectively, and observed its three lifecycle stages through optical microscopy and electron microscopy (Figure S2). The results confirmed that S. griseus could accomplish its whole lifecycle in both liquid cultures, with the same differentiation mechanism as on solid plates. Furthermore, we found that the growth of aerial hyphae and sporulation did not rely on the lysis of vegetative hyphae, suggesting that genes encoding extracellular proteases and protease inhibitors may not be necessary for the morphological differentiation of S. griseus. Phenotypic recovery of S. 
griseus ΔafsA mutant with A-Factor analog
A-Factor is the master switch for morphological differentiation and secondary metabolism in Streptomyces 29. For S. griseus growing on a solid plate, the differentiation process begins with the expression of afsA, which controls the synthesis of A-Factor 29. The binding of A-Factor with its receptor protein, ArpA, relieves the suppression of adpA by ArpA. Afterward, AdpA stimulates aerial hyphae growth, spore development, and secondary metabolism by regulating various genes, including ssgA 30, adsA 29, amfR 31, extracellular proteases 32,33, and protease inhibitor encoding genes 34. Hitherto, the effect of A-Factor on the differentiation of S. griseus has not been compared between solid agar and liquid medium, due to the inability to maintain a stabilized liquid-air interface to support aerial hyphae development. We constructed an S. griseus ΔafsA mutant via genetic engineering that could not form aerial hyphae on YEME agar (Figure 4A, B). We inoculated the mutant onto YEME agar and cultivated it for several days. Compared with the wild type, the mutant could not develop either aerial hyphae or pigmented spores on the agar (Figure 4C). The parallel on-chip cultivation revealed that the vegetative hyphae formed mainly in medium-filled microwells, with very few hyphae outside the microwells; these hyphae remained very short even after being cultivated for several days and could not form spores. SEM images showed that the hyphae surface of the S. griseus ΔafsA mutant was relatively smooth, indicating that these short hyphae were still vegetative hyphae. Therefore, the phenotype of the S. griseus ΔafsA mutant grown in chip-based culture was consistent with that grown on the solid plate.
Next, we synthesized an A-Factor analog, β-keto SCB2, and fed it at different time points to the ΔafsA mutant to examine whether it could recover its differentiation phenotype. Previous studies showed that the production of A-Factor is growth-dependent 35. A-Factor accumulates during vegetative growth, reaches a peak concentration of 25-30 ng/ml, and rapidly decreases thereafter 35. As shown in Figure 4D, the mutant formed aerial hyphae and spores when we added β-keto SCB2 at 20 and 30 h after inoculation. SEM imaging confirmed the existence of hydrophobic proteins on the surfaces of aerial hyphae and spores of the ΔafsA mutant when we fed β-keto SCB2 at 30 h (Figure S3). However, the mutant could no longer form aerial hyphae when β-keto SCB2 was fed at 40 h after inoculation (Figure 4D). These results are consistent with previous findings that timing is critical for A-Factor's switching function 35. There is a specific A-Factor-sensitive period in the middle of exponential growth, after which the exogenous addition of A-Factor can no longer induce morphological differentiation under either solid or liquid conditions.
DISCUSSION
We developed a microfluidic chip that achieved a nanogap-stabilized liquid-air interface for single-spore cultivation and lifecycle observation of Streptomyces. Two model strains (S. coelicolor and S.
griseus) were cultivated in the chip at single-cell/spore resolution under different nutrient conditions. Compared with other methods, our chip can achieve single-cell long-term cultivation and dynamic observation using sub-nanoliter microwells and air chambers. Although other devices, such as the μ-dish, have been used to capture Streptomycetes growth on solid media 36, the chip used in this study allows air permeability while maintaining the hyphae within a narrow microscopic focal range to facilitate whole-lifecycle observation at high spatial resolution. Moreover, the chip can be easily disassembled for further in situ SEM imaging to reveal subcellular structures such as hydrophobic protein patterns on aerial hyphae. The chip's main channel can supply nutrients and stimulants in a controlled manner. We can readily improve the throughput of the chip by increasing the number of microwells with extended channels. Besides, we may use it to investigate cell-cell interactions between Streptomyces and pathogenic bacteria by serial loading and cocultivation of Streptomyces and pathogens. Whole-lifecycle differentiation is essential for studying the morphogenesis and secondary metabolism of Streptomyces 35. Currently, morphological differentiation is mainly studied on solid plates because the surface of a liquid culture is unstable and cannot support the growth of aerial hyphae and spores. Using our chip, we found that the early development of aerial hyphae is not necessarily correlated with nutrient depletion, contrary to what traditional solid-based cultivation studies have suggested. The S. griseus ΔafsA mutant showed similar differentiation phenomena under solid culture and chip-based liquid environments. Our chip provides higher spatial resolution and long-term stability. Furthermore, we successfully restored the wild-type phenotype of the S.
griseus ΔafsA mutant by adding β-keto SCB2. Using the chip, we can also study the effects of various molecules on morphological differentiation and secondary metabolism with significantly reduced reagent consumption by virtue of miniaturization. Overall, we anticipate that the nanogap-stabilized microfluidic chip will provide a new platform for studying Streptomyces development under precisely controlled microenvironments at the single-cell level. Previous studies on the differentiation of Streptomyces in liquid media mainly focused on the analysis of pellet and clump formation 37, which affects the production of secondary metabolites of Streptomyces species such as S. coelicolor 17 and S. noursei 38. We envision that our chip can help establish developmental models of other Streptomyces strains in liquid culture, which will be beneficial for optimizing industrial fermentation. Having Streptomyces' complete lifecycle on the microfluidic chip may also awaken cryptic gene clusters for the secretion of secondary metabolites and lead to the discovery of novel antibiotics for combating the global antimicrobial resistance crisis.
Bacterial strains and materials
The microbial strains used in this study include S. coelicolor M145, S. griseus IFO 13350, and the S. griseus ΔafsA mutant. These strains were cultured on Mannitol-Soy agar plates at 28°C for about a week to allow spore germination. The spores were harvested with sterile cotton swabs and suspended in the sterilized culture medium. The suspension was filtered through a filter tube filled with cotton wool to remove aerial hyphae. The OD 600 of the spore suspension was adjusted to 0.15 to ensure that most microwells contained either one or zero spores. MM and YEME media were used for on-chip cultivation.
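The single-spore loading described above relies on Poisson statistics: for a dilute, well-mixed suspension, the probability of k spores landing in a given microwell depends only on the mean occupancy per well. A minimal sketch of that calculation (the mean occupancy of 0.3 spores per well is an assumed illustrative value, not a number from this study):

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k spores in a well when the mean occupancy is lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Hypothetical mean occupancy: suppose the OD600-adjusted suspension yields
# on average 0.3 spores per microwell (an assumed value for illustration).
lam = 0.3
p_empty = poisson_pmf(0, lam)        # wells with no spore
p_single = poisson_pmf(1, lam)       # wells with exactly one spore
p_multi = 1.0 - p_empty - p_single   # wells with two or more spores

print(f"P(0) = {p_empty:.3f}, P(1) = {p_single:.3f}, P(>=2) = {p_multi:.3f}")
```

At this dilution most occupied wells hold a single spore, which is why the suspension is deliberately diluted rather than concentrated.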
Fabrication of the chip
The microfluidic chip was made of two glass plates and fabricated by standard photolithography and wet chemical etching techniques 39. The photomask was designed using AutoCAD and ordered from MicroCAD photomask Co. Ltd. The top plate has a 55-μm-deep channel, with 40 microwells, each with a volume of 280 pl, symmetrically distributed along the channel. The bottom plate consists of an array of nanopatterns 1.5 μm in height (Figures 1C and 2A). The top plate has two access holes drilled with a diamond drill bit 0.8 mm in diameter. The glass plates were cleaned with ethanol, oxidized in a plasma cleaner, and silanized with 1H,1H,2H,2H-perfluorooctyl trichlorosilane.
Device operation and cell cultivation
The glass chip was thoroughly cleaned with ethanol and tightly clamped with clips. The spore suspension was aspirated into a pipette and loaded into the channel leading to the microwells (Figure 1B). The suspension in the channel was aspirated from the outlet to remove excess spores and prevent channel blockage caused by hyphae growth, while the microwells retained liquid medium and spores. Two syringes were connected to the chip by Teflon tubing to infuse the culture medium continuously for long-term cultivation. The chip was placed under an inverted microscope to capture pictures every hour. A CO 2 microscope cage incubator was placed around the microscope to maintain the temperature at 28°C for Streptomyces cultivation.
Figure 1. Illustration of the microfluidic chip for lifecycle observation of Streptomyces. (A) Schematic diagram of the assembly and setup of the microfluidic chip. The dimensions of the microfluidic chip are shown in Supporting Information: Materials. (B) Spore suspension is loaded into the microwells by a pipette. The concentration of spores is controlled to allow single-spore trapping in the microwells based on a Poisson distribution. The channel was drained by a pipette to remove extra spores. The culture medium was continuously infused into the chip to allow the whole lifecycle development process. (C) Schematic of the sectional view of the chip with Streptomyces lifecycle development shown in a zoom-in view.
Figure 2. Characterization of the microfluidic chip with nanogap-stabilized liquid-air interfaces. (A) A picture of the assembled chip. There are 40 microwells symmetrically distributed along the channel with three parallel replicates. The 1.5-µm-height nanopatterns on the bottom plate are observed via scanning electron microscopy. The chip was assembled and filled with red dye, as shown in the zoom-in view. (B) The silanized glass plates of the chip have a contact angle of 105° with deionized water. (C) A side view of the water-air interface between the microwell and the gas chamber shows the direction of liquid surface tension at the microwell edge. (D) Relationship between surface tension and gap size at the water-air interface. (E) The surface tension distribution along the microwell.
Figure 3. Development of Streptomyces coelicolor cultivated in a microfluidic chip. (A-C) The vegetative hyphae (A), aerial hyphae (B), and spores (C) of S. coelicolor were observed by optical microscopy (top panel) and scanning electron microscopy (bottom panel), respectively. (D) Time series imaging of hyphal anastomosis (fusion) in S. coelicolor. Some hyphal tips (arrows) were growing toward a hyphal peg for subsequent fusion.
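The interface pinning characterized in Figure 2 can be estimated with the Young-Laplace relation for a liquid front in a narrow parallel-plate gap, ΔP = 2γcosθ/d. A back-of-the-envelope sketch using the 105° contact angle and the 1.5 µm nanogap quoted above (the water surface tension value is a standard textbook assumption, and the parallel-plate geometry is a simplification of the actual nanopattern):

```python
import math

gamma = 0.072          # N/m, surface tension of water near room temperature (assumed)
theta_deg = 105.0      # contact angle of the silanized glass with water
gap = 1.5e-6           # m, height of the nanopatterns, i.e., the gap size

# Young-Laplace capillary pressure for a liquid front in a parallel-plate gap:
# dP = 2 * gamma * cos(theta) / d. A negative value means the hydrophobic gap
# resists wetting, so the liquid-air interface stays pinned at the microwell edge.
dP = 2.0 * gamma * math.cos(math.radians(theta_deg)) / gap
print(f"capillary pressure ~ {dP / 1000:.1f} kPa")
```

The pressure comes out negative (roughly tens of kPa opposing entry), consistent with the nanogap stabilizing the liquid-air interface rather than letting the medium flood the air chamber.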
Figure 4. The role of A-Factor in the development of Streptomyces griseus. (A) Illustration of S. griseus ΔafsA mutant construction. (B) Electrophoresis of PCR products of wild-type (WT) S. griseus and its ΔafsA mutants. (C) Phenotypes of S. griseus wild-type and the ΔafsA mutant on a solid plate. After cultivation on the chip, we observed aerial hyphae of S. griseus wild-type and the ΔafsA mutant via optical microscopy and SEM. (D) Feeding of the A-Factor analog recovered aerial growth of the ΔafsA mutant at 20 and 30 h after cultivation, but no effect was observed at 40 h.
Ion beam bunching via phase rotation in cascading laser-driven ion acceleration
The ion beam bunching in a cascaded target normal sheath acceleration is investigated by theoretical analysis and particle-in-cell simulations. It is found that a proton beam can be accelerated and bunched simultaneously by injecting it into the rising sheath field at the rear side of a laser-irradiated foil target. In the rising sheath field, ion phase rotation may take place since the back-end protons of the beam feel a stronger field than the front-end protons. Consequently, the injected proton beam can be compressed in the longitudinal direction. Finally, the vital role of the ion beam bunching is illustrated by integrated simulations of two successive stages in a cascaded acceleration.
I. INTRODUCTION
With the advent of compact intense lasers, the generation of energetic particles in laser-plasma interactions has been drawing an enormous amount of attention [1][2][3][4]. Most importantly, laser-driven plasma-based accelerators can sustain an accelerating field as strong as a few hundred GV/m, which is many orders of magnitude stronger than that in conventional radio-frequency (RF) accelerators.
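To make the contrast concrete, the sketch below compares the distance a singly charged particle must travel to gain a given energy at a laser-plasma-scale gradient versus a conventional RF gradient. The 100 GV/m figure follows the "few hundreds of GV/m" scale quoted above; the 50 MV/m RF gradient is an assumed round number for illustration, not a value from the text.

```python
# Acceleration length for a singly charged particle: d = E_gain / (q * gradient).
# Working in eV makes q drop out, so d = (energy in eV) / (gradient in V/m).
target_energy_eV = 100e6      # 100 MeV, an illustrative target energy

grad_laser = 100e9            # V/m, representative laser-plasma sheath gradient
grad_rf = 50e6                # V/m, assumed typical RF cavity gradient

d_laser = target_energy_eV / grad_laser   # metres
d_rf = target_energy_eV / grad_rf         # metres

print(f"laser-driven: {d_laser * 1000:.1f} mm, RF: {d_rf:.1f} m")
```

The three-orders-of-magnitude difference in length is exactly why laser-driven accelerators can be so much more compact.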
By such a strong accelerating field, charged particles can be accelerated to high energies over an extremely short distance. This allows laser-driven accelerators to be more compact and more economical than conventional RF accelerators. Nevertheless, the particle energies achieved by laser-driven accelerators are still much lower than those achieved by conventional RF accelerators. In particular, it is still a big challenge to increase the ion energies in laser-driven ion acceleration, so there remains a substantial gap between the achievable ion energies of laser-driven ion sources and the energies necessary for some important applications, such as proton therapy 5,6. In order to enhance the ion energies as well as the beam quality and energy conversion efficiency in laser-driven ion acceleration, a number of mechanisms have been proposed with various laser and target parameters. So far, target normal sheath acceleration (TNSA) is still the most studied mechanism in experiments because of its relatively simple requirements on the laser and target conditions. In this mechanism, a strong charge-separation electrostatic field is formed at the rear sheath of a thin target when a portion of the hot electrons produced in the laser-plasma interaction go through the target 7. By this sheath field, ions can be accelerated up to tens of MeV per nucleon 8. Alternatively, the generation of energetic ions by radiation pressure acceleration (RPA) has been intensively studied recently 9. According to the theoretical model, the ions of an ultrathin foil can be steadily accelerated by the radiation pressure of a circularly-polarized intense laser pulse. Consequently, it is predicted that ions could be accelerated by the RPA to much higher energies with a high conversion efficiency. However, the RPA has some extremely stringent requirements on the laser and target conditions in experiments 10,11, including an ultrathin foil and an ultrahigh-contrast laser pulse.
Furthermore, the laser pulse should be circularly polarized to suppress plasma heating, which is also a big challenge for high-power laser pulses 12. In addition to individual acceleration schemes, some hybrid schemes [13][14][15] or cascaded schemes have been proposed. More interestingly, some cascaded acceleration schemes allow the spectral shaping of the resulting ion beams 18. As a result, the ion energies and the energy spread could be simultaneously improved in a cascaded acceleration scheme 16,19. However, the improvement in the energy spread will disappear if the ion beam duration is comparable to the lifetime of the accelerating field 19. Therefore, bunching would be critically required to control the longitudinal size of the ion beam in a cascaded laser-driven acceleration scheme 23. In this paper, we propose a cascaded TNSA scheme, in which the ion beam bunching is realized via ion phase rotation in the sheath field at the target rear side. As a typical configuration for the well-known TNSA mechanism, a thin foil target is irradiated by a short intense laser pulse in Fig. 1(a). A large number of hot electrons will be generated via collisionless heating mechanisms 25,26. A strong charge-separation electrostatic field can be stimulated at the rear sheath when some hot electrons propagate through the target and expand into the vacuum 27. In particular, it takes a considerable period of time before this sheath field attains its maximum strength 19. If an ion beam is injected into this laser-irradiated foil while its sheath field is rising, one can imagine that the back-end ions of the beam will feel a stronger sheath field than the front-end ions. Therefore, the back-end ions may be accelerated to higher velocities even if their initial velocities are lower than those of the front-end ions, as shown in Figs. 1(b) and (c). Consequently, the back-end ions will overtake the front-end ions, as shown in Figs. 1(d) and (e).
Over this whole process, a half-cycle rotation of the ions in phase space is accomplished. Using particle-in-cell (PIC) simulations, we verify that the longitudinal size of the injected ion beam can be well controlled via such a phase rotation, i.e., ion beam bunching is achieved. Furthermore, the importance of the ion beam bunching in a cascaded acceleration is clarified by integrated simulations of two successive acceleration stages.
II. SIMULATION RESULTS AND ANALYSIS
To visualize the ion beam bunching via the phase rotation, we have performed a series of 2D3V PIC simulations of cascaded TNSA using the code EPOCH 28. In the first set of simulations, a single stage of cascaded TNSA is investigated. In the simulations, a linearly-polarized laser pulse irradiates a carbon target obliquely at an angle of 45° from the left side. It is assumed that the laser pulse has a wavelength λ = 1 µm and a peak intensity I0 ≃ 3.08 × 10^20 W/cm² (the normalized vector potential a0 ≡ |eE/ωm_e c| = 15). The pulse has Gaussian intensity profiles in both the transverse and longitudinal directions, with a spot radius σ = 5 µm and a duration τ. In a typical simulation, we set the duration τ = 12T0, where T0 = 2π/ω is the laser wave period. The time evolution of the sheath field is compared among the cases with different durations τ = 6, 12, and 18T0. The simulation box has a size of 80λ × 20λ, the spatial resolutions are ∆x = ∆y = λ/100, and 50 macro-particles per species per cell are allocated in the target region. The target is assumed to be a uniform fully-ionized carbon foil with ρ ≈ 0.057 g/cm³ (electron number density n_e = 15n_c) located in 0 ≤ x/λ ≤ 6 and −10 ≤ y/λ ≤ 10. Here, a near-critical-density target is used instead of a solid target in order to enhance the coupling of laser energy into hot electrons and hence the ion acceleration 10,15,29.
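The quoted laser and target parameters can be cross-checked with two standard formulas (not derived in the text itself): for linear polarization, a0 ≈ 0.85 λ[µm] √(I / 10¹⁸ W/cm²), and the critical density is n_c ≈ 1.1 × 10²¹ / λ[µm]² cm⁻³.

```python
import math

lam_um = 1.0
intensity = 3.08e20                 # W/cm^2, quoted peak intensity

# Normalized vector potential for linear polarization (standard scaling law).
a0 = 0.85 * lam_um * math.sqrt(intensity / 1e18)

# Mass density of a fully ionized carbon plasma with n_e = 15 n_c:
# 6 electrons per carbon ion, so n_C = n_e / 6.
n_c = 1.1e21 / lam_um**2            # cm^-3, critical density
n_e = 15.0 * n_c                    # cm^-3, target electron density
m_carbon_g = 12.0 * 1.6605e-24     # g, mass of one carbon atom
rho = (n_e / 6.0) * m_carbon_g

print(f"a0 ~ {a0:.1f}, rho ~ {rho:.3f} g/cm^3")  # ~15 and ~0.055
```

Both values come out close to the quoted a0 = 15 and ρ ≈ 0.057 g/cm³ (the small density discrepancy reflects rounding in the critical-density constant).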
In addition, a quasi-monoenergetic proton beam is injected along the x-axis into the laser-irradiated foil. The proton beam initially has a size of 0.2 × 0.6 µm² and a density n_p = 0.1n_c. The mean proton energy is assumed to be a function of x, E0(x) = [10 + 10(…)] MeV, where (x0, y0) denotes the coordinates of the beam center. Correspondingly, the mean velocity of the protons at the beam center is about v_x0 ≃ 0.145c. Although the mean proton energy is a function of x, an initially uniform temperature T_i = 1 keV is assumed. By changing the initial center x-coordinate x0, the time when the proton beam arrives at the target rear (x = 6λ) and enters the sheath field can be well controlled in the simulations. For reference, the peak of the laser pulse is assumed to arrive at the target front (x = 0) at t = 0, and the simulations begin at t ≃ −16T0.
A. Mechanism of ion beam bunching
To understand the mechanism of the ion beam bunching in a cascaded TNSA, we first revisit the generation of hot electrons and the electrostatic sheath field in a laser-foil interaction. Thanks to the efficient coupling of laser energy into the applied near-critical-density target, a large number of hot electrons are generated. As shown in Fig. 2(a), a considerable portion of these hot electrons can penetrate through the target and expand into the vacuum at the rear side. As a result, an electrostatic field as strong as a few TV/m quickly rises in the charge-separation sheath at the rear side, as shown in Fig. 2(b). Figure 2(c) displays the time evolutions of the sheath field peaks E_x,p with different laser pulse durations. Correspondingly, the time evolutions of the electrostatic potential peaks φ_p are displayed in Fig. 2(d). It is illustrated that the rising of the sheath field needs a response time on the order of the laser pulse duration τ if the latter is around ten laser wave periods.
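The phase-rotation argument can be sketched with a minimal impulse model: each proton's energy gain is approximated by the sheath potential φ_p at the moment it enters the sheath, which is still rising when the beam arrives. The numbers below are assumptions chosen for illustration (a 2 MeV initial energy difference and a 4 MeV potential difference, roughly in the spirit of the cases discussed later), not the PIC results.

```python
import math

M_P = 938.272                       # MeV, proton rest energy

def beta(T):
    """Speed (in units of c) of a proton with kinetic energy T in MeV."""
    gamma = 1.0 + T / M_P
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Sanity check of the quoted beam parameters: a 10 MeV proton moves at ~0.145c.
print(f"beta(10 MeV) = {beta(10.0):.3f}")

# Front-end proton: higher initial energy, but it enters the rising sheath
# earlier, when the potential is lower. Back-end proton: lower initial energy,
# later entry, higher potential. All four numbers are illustrative assumptions.
T_front = 11.0 + 4.0      # MeV: initial energy + potential seen at early entry
T_back = 9.0 + 8.0        # MeV: initial energy + potential seen at late entry

# The potential difference (4 MeV) exceeds the initial energy difference
# (2 MeV), so the back-end proton ends up faster and eventually overtakes the
# front-end one: this is the phase rotation.
print(beta(T_back) > beta(T_front))
```

If instead the potential difference merely matched the initial energy difference, the two final velocities would be nearly equal, which is the quasi-monoenergetic (but unbunched) regime discussed below.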
In a typical simulation case with τ = 12T0, the sheath field peak rises from zero to its maximum. The quick rising of the sheath field, usually accompanied by the broadening of the charge-separation sheath, will greatly raise the electrostatic potential peak φ_p, as shown in Fig. 2(d). Because of the broadening of the charge-separation sheath, the electrostatic potential peak φ_p can continuously increase for some time after the sheath field peak achieves its maximum. Therefore, the back-end protons will obtain energy from a stronger sheath field than the front-end protons if the proton beam arrives during the rising stage of the sheath field. The energy obtained by a proton from the TNSA is roughly approximated by the electrostatic potential peak φ_p at the moment this proton enters the sheath field. For a quasi-monoenergetic proton beam of length L0 located in the region |x − x0| ≤ L0/2, we assume that the mean proton energies (velocities) at the front end, the center, and the back end are E_f (v_f), E_c (v_c), and E_b (v_b), respectively, with E_f ≥ E_c ≥ E_b, since the protons with higher energies will naturally propagate at the front after a considerably long propagation. Consequently, the ion rotation in phase space can take place, i.e., the back-end protons overtake the front-end protons, only under the condition that the potential difference φ_p(t_b) − φ_p(t_f) exceeds the initial energy difference E_f − E_b, while under the condition φ_p(t_b) − φ_p(t_f) ≈ E_f − E_b, a highly monoenergetic proton beam can be expected 16,19. In order to achieve the ion beam bunching during a single acceleration stage, however, the difference in the electrostatic potential |φ_p(t_b) − φ_p(t_f)| must be substantially larger than the initial difference in the mean proton energy |E_f − E_b|. On the other hand, the final energy spread of the obtained proton beam will be obviously increased if the potential difference becomes far larger than the initial energy difference. Therefore, a potential difference moderately larger than the initial energy difference should be a relatively appropriate condition for achieving efficient ion beam bunching while keeping the final energy spread at the initial level.
B.
Results of a single acceleration stage
To some extent, the electrostatic potential difference |φ_p(t_b) − φ_p(t_f)| can be controlled by modifying the arrival time of the proton beam at the sheath field, which is realized by changing the initial x-coordinate x0 of the proton beam center in the simulations. Figure 3 compares the time evolutions of the proton distributions in the x−p_x phase space for two different injected proton beams, initially centered at x0 = 3.2λ and 2.6λ, respectively. In the case x0 = 3.2λ, the front-end and back-end protons arrive at the target rear side at t_f = 1.8T0 and t_b = 5.1T0, respectively. As shown in Fig. 2(d), the corresponding electrostatic potential difference is about double the initial proton energy difference between the front-end and back-end protons. Consequently, the back-end protons obtain more energy and quickly become faster than the front-end protons, as indicated by the phase-space distribution at t = 24T0 in Fig. 3(a). Subsequently, the back-end protons catch up with the front-end protons, and the whole proton beam is highly compressed at t = 112T0. After this, the length of the proton beam gradually increases due to free expansion, and the proton distribution in the x−p_x phase space becomes similar to the initial one. Over this whole stage, a half-cycle rotation of these protons in the x−p_x phase space is evidenced. In contrast, the proton beam enters the sheath field relatively later, with t_f = 5.7T0 and t_b = 9.5T0, in the case x0 = 2.6λ. Correspondingly, the electrostatic potential difference is reduced to φ_p(t_b) − φ_p(t_f) ≃ 1.5 MeV, which is slightly smaller than the initial proton energy difference. As a result, the back-end protons are accelerated to be nearly as fast as the front-end protons, as shown in Fig. 3(b); however, the length of the proton beam always increases in the whole stage.
The time evolutions of the beam qualities in a single acceleration stage are quantitatively compared between the cases with x0 = 3.2λ and 2.6λ in Fig. 4. Figure 4(a) indicates that the proton beam initially centered at x0 = 3.2λ can be accelerated up to a mean energy of ∼34 MeV, which is slightly lower than that of the one initially at x0 = 2.6λ, since the latter enters the rising sheath field later and feels a stronger field. Further, Fig. 4(b) demonstrates that the FWHM absolute energy spread can be efficiently suppressed to ∼1.2 MeV in the x0 = 2.6λ case, while it remains at around the initial value of ∼2 MeV in the x0 = 3.2λ case. As a result, Fig. 4(c) shows that the energy spectrum in the x0 = 2.6λ case seems superior to the one in the x0 = 3.2λ case. However, it must be pointed out that in the x0 = 2.6λ case the FWHM longitudinal size of the proton beam monotonously increases up to ∼1.5λ, which is about one order of magnitude larger than the initial length. In contrast, the proton beam experiences an obvious shrink in the x0 = 3.2λ case, and the minimum FWHM longitudinal size can even be a bit smaller than the initial size. This implies that the ion beam bunching is achieved simultaneously with the acceleration in the latter case.
In the following paragraphs, we will show the importance of such ion beam bunching in a cascaded acceleration with two successive stages. In the integrated simulations, the distance between the two foil targets is 40 µm. The peak of the first laser pulse is assumed to arrive at the first target front (x = 0) at t = 0, while the second laser pulse peak arrives at the front of the second target (x = 46λ) with a time delay of t = 194T0 or 175T0, respectively. In addition, a quasi-monoenergetic proton beam, centered at x0 = 3.2λ or 2.6λ, is incident along the x-axis into the laser-irradiated foil targets. For the beam initially centered at x0 = 3.2λ, the phase rotation takes place once more in the second stage, as shown in Fig. 6(a). The proton beam initially centered at x0 = 2.6λ, after the first acceleration, arrives at the rear side of the second target in the time interval 196T0 ≤ t ≤ 204T0.
Then the peak of the second pulse is set to arrive at the front of the second target at t ≃ 175T0, so this proton beam will arrive at around the time of the second sheath field peak, as in the first stage. Consequently, Fig. 6(b) shows no phase rotation in this case. The time evolutions of the beam qualities during these two successive acceleration stages in the cases with and without the bunching are compared in Fig. 7. Figure 7(a) shows that the energy of an initially 10 MeV proton beam can be boosted up to more than 60 MeV after two successive acceleration stages, regardless of the ion beam bunching. As explained above, the scheme with the bunching is not the optimal choice for suppressing the energy spread in a single acceleration stage. However, it is worth noting that the energy spread can be well controlled after two acceleration stages in the case with the bunching, as shown in Fig. 7(b), while the energy spread in the case without the bunching increases obviously after the second acceleration stage. This is because, without the bunching, the proton beam is greatly prolonged in the first acceleration stage, and it is much harder to control the energy spread of this prolonged proton beam in the second stage. As a result, the final energy spectrum in the case with the bunching surpasses the one without the bunching, as displayed in Fig. 7(c). More importantly, Fig. 7(d) shows that without the bunching the FWHM longitudinal size of the proton beam dramatically increases up to ∼5.5λ, which is severely adverse to the next acceleration stages. In contrast, the longitudinal size of the proton beam can be kept at the level of the initial size with the bunching. Therefore, with the bunching, the acceleration of such a high-quality proton beam can be repeated continuously in the next acceleration stages.
III.
DISCUSSION AND CONCLUSION
It is worth pointing out that the ion rotation in phase space could also take place after two or more acceleration stages in the cases where the ratio between the electrostatic potential difference and the initial mean proton energy difference is relatively small. However, the longitudinal size of the ion beam in these cases may have already increased to an intolerable level after two or more acceleration stages. Therefore, it would be better to achieve efficient ion beam bunching in each single stage, under the condition that the electrostatic potential difference is substantially large enough to compensate for the mean proton energy difference. In order to save the computational cost, the laser-target parameters in the simulations are chosen accordingly. This allows the ion rotation in phase space to take place as soon as possible, while the beam energy spread does not obviously increase. In the cases where the electrostatic potential difference is only slightly larger than the initial mean proton energy difference, the ion rotation in phase space and the energy spread reduction would be achieved simultaneously in a single acceleration stage. This implies that not only the longitudinal size but also the energy spread of an injected ion beam could be effectively suppressed in a cascaded laser-driven ion acceleration scheme. More importantly, the ion rotation in phase space will take place at a much slower pace in these cases. Accordingly, the distance between two successive targets should be enlarged dramatically. Although a much larger distance between two targets may cause inconvenience for the PIC simulations, it could be more favorable for experimental realization. Besides the laser pulse and the foil target employed in a normal TNSA scheme, an initially quasi-monoenergetic proton beam is crucially required in this cascaded laser-driven ion acceleration scheme. Such a proton beam might be obtained from the TNSA with energy selection 30 or from a laser-driven nanotube accelerator 31.
In summary, we find that the ion beam bunching in a cascaded TNSA scheme can be achieved via ion rotation in phase space. By modifying the time delay between the injected proton beam and the laser pulse, one can let the proton beam enter the sheath field at the target rear side while it is quickly rising. Then the back-end protons of the beam will feel a stronger sheath field and be accelerated to higher energies than the front-end protons. Consequently, the back-end protons will overtake the front-end protons, i.e., the ion phase rotation takes place. More importantly, the integrated simulations of two successive acceleration stages verify that the energy spread in a cascaded acceleration can be well controlled only when the ion beam is bunched via such phase rotations.
The Influence of the Parameters of a Gold Nanoparticle Deposition Method on Titanium Dioxide Nanotubes, Their Electrochemical Response, and Protein Adsorption
The goal of this research was to find the best conditions to prepare titanium dioxide nanotubes (TNTs) modified with gold nanoparticles (AuNPs). This paper, for the first time, reports on the influence of the parameters of cyclic voltammetry (CV)-based AuNP deposition, i.e., the number of cycles and the concentration of the gold salt solution, on the corrosion resistance and capacitance of TNTs. Another innovation was to fabricate AuNPs with well-formed spherical geometry and uniform distribution on TNTs. The AuNPs/TNTs were characterized using scanning electron microscopy, X-ray photoelectron spectroscopy, electrochemical impedance spectroscopy, and open-circuit potential measurement. From the obtained results, a correlation was found between the deposition process parameters, the AuNP diameters, which ranged from 14.3 ± 1.8 to 182.3 ± 51.7 nm, and the electrical conductivity of the TNTs. The size and amount of the AuNPs could be controlled by the number of deposition cycles and the concentration of the gold salt solution. The modification of TNTs with AuNPs facilitated electron transfer, increased the corrosion resistance, and led to better adsorption properties for bovine serum albumin.
Introduction
In recent decades, electrochemical biosensors have been an active research field, attracting considerable attention as potential successors to a wide range of analytical techniques thanks to their rapid response and high selectivity [1,2]. Titanium dioxide nanotube arrays have demonstrated a number of important applications, including biosensors for the detection of interleukin-6 [3] or glucose [4]. Titanium dioxide nanotube (TNT) structures can be produced through the anodization of titanium foil [5].
In addition, a recent study by the authors has shown that titanium dioxide nanotube arrays, with their large surface area, easy and inexpensive preparation, and chemical and thermal stability, are promising for the immobilization of biomolecules, such as horseradish peroxidase, in electrochemical biosensors [2]. The electrical conductivity and adsorption properties of TNT arrays depend on many factors, e.g., the morphology of the nanotubes (diameter, height) and the modification process. Protein adsorption has been demonstrated for nanotubes with diameters ranging from 20 nm to 70 nm [6]. The electrical conductivity of TNT arrays can be significantly improved by introducing metal nanoparticles onto the surface to facilitate electron transfer [1,7-9]. Recently, much attention has been paid to using gold nanoparticles (AuNPs) in biosensor construction. Among the advantages of AuNPs are fast and simple synthesis, low production costs, ease of binding to proteins via the thiol group (-SH), and enhanced electron transfer between the electrode surface and the analyte [8,9]. Several strategies, such as sputtering, photoreduction, soaking at high temperature, and electrodeposition, have been applied to the deposition of gold nanoparticles on TNTs. Sputtering can yield a homogeneous dispersion of AuNPs, but it produces a thin layer of nonspherical nanoparticles [10,11]. Photoreduction is a multistage and time-consuming technique [12]. Soaking at high temperature and direct adsorption are long-lasting processes whose duration depends on the size and number of the AuNPs [13-15]. These methods depend on many factors that affect the reproducibility of the nanoparticle deposition process. Additionally, AuNPs deposited on TNT surfaces by these methods can easily aggregate and reach large diameters. Electrodeposition is one of the easiest techniques for gold nanoparticle synthesis.
Electrodeposition methods offer convenient control mechanisms for obtaining a homogeneous surface. In this process, Au3+ ions from a tetrachloroauric acid (HAuCl4) precursor are reduced to metallic Au and subsequently deposited onto the nanotube surface. The galvanostatic method results in the formation of nanoparticles with a wide range of diameters [16,17]. When TNT arrays are modified with gold nanoparticles during the anodizing process, there is no linear relationship between the concentration of the solution and the amount of gold impregnated on the TNTs. It is important to point out that the anodization must be performed under strictly defined conditions to avoid precipitation of the nanoparticles during TNT array growth [18]. In the chronoamperometric method, gold nanoparticles are not uniformly deposited on the TNT surfaces and can easily form agglomerates [19]. Due to its simplicity and its convenient control over the resulting surface, cyclic voltammetric deposition can be applied instead. However, for the voltammetric method there is no clear indication of the effect of the number of cycles on the size and number of the deposited nanoparticles: according to Lianghsen et al. [20], an increase in the number of cycles only increases the number of nanoparticles, whereas according to Babu et al. [21], it also increases their diameter. Previously developed biosensors based on AuNPs/TNTs have shown improved limits of detection (LODs), e.g., for bisphenol A [20] and aflatoxin B1 [22]. Further improvements in the LOD can be obtained by forming AuNPs with well-defined spherical geometry and uniform distribution on TNTs.
In the literature, neither the influence of the AuNP formation conditions in cyclic voltammetry (the number of CV cycles and the concentration of the gold salt solution) nor the effect of the concentration and size of gold nanoparticles on the capacitance of TNT layers has been reported. In this paper, AuNPs loaded onto TNT arrays prepared by the cyclic voltammetry method with different numbers of deposition cycles and different concentrations of gold salt solution are described. The aim of our research was to compare the impact of the deposition process parameters and the number of deposition cycles (8-80) on the AuNP diameter and agglomerate formation, as well as on the electrical conductivity of the developed platforms. The effects of AuNP loading on surface morphology, electrical properties, and corrosion resistance were explored. To confirm the possibility of using AuNP-modified TNTs as sensing platforms for the label-free evaluation of protein adsorption, a series of experiments was carried out.

Preparation and Thermal Modification of Ti/TNT Arrays

Titanium dioxide nanotubes were prepared by electrochemical anodization of titanium foil in an ethylene glycol solution with an ammonium fluoride additive [3]. Titanium sheets were cut into 5 mm (width) × 15 mm (height) × 0.25 mm (thickness) pieces, sonicated in acetone and distilled water, and dried under nitrogen. Anodization was performed in an ethylene glycol electrolyte (85 wt%) containing ammonium fluoride (0.65 wt%) under potentiostatic conditions at 17 V (Autolab PGSTAT302N, Metrohm, Herisau, Switzerland) for 3750 s at room temperature. The process was carried out in a two-electrode system with a platinum sheet as the counter electrode and titanium foil as the working electrode, with an anodized area of 5 mm × 5 mm × 0.25 mm. TNT layers were annealed in an AMP furnace (AMP, Zielona Gora, Poland) under argon atmosphere at 450 °C for 2 h with heating and cooling rates of 6 °C·min−1.
The Electrodeposition of Gold Nanoparticles on Titanium Dioxide Nanotube Arrays

Electrodeposition was performed in a standard three-electrode system with TNTs as the working electrode (5 mm × 5 mm × 0.25 mm), a standard silver/silver chloride electrode (E Ag/AgCl = 0.222 V) as the reference electrode, and a platinum sheet as the counter electrode, using a cyclic voltammetry scan from −1.25 V to −0.7 V (versus Ag/AgCl) at a scan rate of 0.05 V/s in 3 mL of 0.01 M PBS (pH 7.4) containing tetrachloroauric acid. The process was carried out for different numbers of cycles (8, 20, 40, 60, 80) in a 0.1 mM solution of HAuCl4, and then for 8, 20, and 40 cycles in different concentrations of HAuCl4: 1 mM, 5 mM, and 10 mM. After deposition, the samples were washed with distilled water and dried under nitrogen atmosphere.

Deposition of Bovine Serum Albumin onto AuNPs/TNTs

Bovine serum albumin (BSA) was dissolved in 0.01 M PBS (pH 7.4) at a concentration of 1 mg/mL. A 5 µL drop of BSA solution was deposited on the surfaces of the TNTs, with and without gold nanoparticles, for 30 min at 40 °C. The efficiency of BSA immobilization on the TNT arrays and the AuNPs/TNT arrays was analyzed using the relative change in the values of the electrochemical parameters, expressed as percentages.

Surface Characterization and Electrochemical Measurements

Scanning electron microscopy (FESEM, JEOL JSM-7600F, Tokyo, Japan) and energy-dispersive X-ray spectroscopy (EDS, INCA, Oxford Instruments, Oxford, UK) were used to investigate surface morphology and chemical composition. XPS analyses were carried out in a PHI VersaProbe II scanning XPS system using monochromatic Al Kα (1486.6 eV) X-rays focused to a 100 µm spot and scanned over an area of 400 µm × 400 µm. The photoelectron take-off angle was 45° and the pass energy in the analyzer was set to 46.95 eV. Deconvolution of the spectra was carried out using the PHI MultiPak software (v.9.9.0.8). The spectrum background was subtracted using the Shirley method.
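The Shirley method mentioned above constructs the background iteratively: the background at each energy is proportional to the background-subtracted peak area on one side of that point. A minimal pure-Python sketch on a synthetic spectrum (illustrative only, not the PHI MultiPak implementation):

```python
import math

def shirley_background(y, max_iter=100, tol=1e-8):
    """Iterative Shirley background for a 1-D spectrum y (list of intensities).

    The background at point i interpolates between the endpoint intensities
    in proportion to the background-subtracted area to the right of i.
    """
    n = len(y)
    yl, yr = y[0], y[-1]
    # Start from a straight line between the endpoints
    bg = [yl + (yr - yl) * i / (n - 1) for i in range(n)]
    for _ in range(max_iter):
        d = [yi - bi for yi, bi in zip(y, bg)]
        # rev_area[i] = sum of d[j] for j >= i
        rev_area = [0.0] * n
        acc = 0.0
        for i in range(n - 1, -1, -1):
            acc += d[i]
            rev_area[i] = acc
        total = rev_area[0]
        new_bg = [yr + (yl - yr) * a / total for a in rev_area]
        if max(abs(nb - b) for nb, b in zip(new_bg, bg)) < tol:
            bg = new_bg
            break
        bg = new_bg
    return bg

# Synthetic XPS-like peak (amplitude 3) on a step background rising from ~1 to ~2
x = [10.0 * i / 399 for i in range(400)]
y = [1.0 + 1.0 / (1.0 + math.exp(-(xi - 5.0)))
     + 3.0 * math.exp(-((xi - 5.0) ** 2) / 0.5) for xi in x]
bg = shirley_background(y)
peak = [yi - bi for yi, bi in zip(y, bg)]
```

By construction the background matches the spectrum at both endpoints, and the background-subtracted curve isolates the peak from the step-like energy-loss background.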
Open-circuit potential (OCP) measurements and electrochemical impedance spectroscopy (EIS) scans were recorded in a standard three-electrode configuration for Ti/TNT and AuNPs/TNT samples annealed at 450 °C in argon atmosphere, before and after modification with gold nanoparticles and after BSA adsorption. OCP measurements were carried out at room temperature (25 ± 2 °C) for 1800 s. EIS spectra were acquired over a frequency range from 0.1 to 10^5 Hz with a signal amplitude of 0.01 V. All measurements (OCP, EIS) were recorded in PBS solution (0.01 M, 20 mL, pH 7.4) and repeated three times (for three samples, n = 3). The electrochemical results in the Bode and Nyquist representations show the curves of the measurement closest to the average value of the three samples. The Nova 2.1.4 software was used to select the equivalent circuit. The standard deviation (SD) and relative standard deviation (RSD, presented in Appendix A, Tables A1 and A2) of the electrochemical parameters (OCP, EIS) were calculated using Equations (1) and (2) below, where Xi stands for each value in the data set for the three samples, X for the mean of Xi, and n for the number of data points:

SD = √( Σ(Xi − X)² / (n − 1) ) (1)

RSD = (SD / X) × 100% (2)

For the calculation of the SD values for the gold nanoparticle diameter, the number of analyzed nanoparticle measurements was 500. All tests were carried out with an Autolab (Metrohm) PGSTAT 302N potentiostat/galvanostat.

Results and Discussion

For the purpose of this study, the electrochemical parameters of titanium dioxide nanotubes with a diameter of 50 ± 5 nm and a height of 1000 ± 100 nm, annealed at 450 °C in argon atmosphere for 2 h, were examined before and after the deposition of gold nanoparticles by cyclic voltammetry (potential from −1.25 V to −0.7 V) [20] for different numbers of cycles (8, 20, 40, 60, 80) in 0.1 mM HAuCl4.
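Equations (1) and (2) can be sketched in a few lines. The triplicate values below are illustrative, not measured data, and the sample form of the standard deviation (n − 1 denominator) is assumed:

```python
import math

def sd_rsd(values):
    """Sample standard deviation (Eq. 1) and relative SD in % (Eq. 2)."""
    n = len(values)
    mean = sum(values) / n
    # Eq. (1): SD = sqrt( sum((Xi - X)^2) / (n - 1) )
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    # Eq. (2): RSD = SD / X * 100%
    rsd = sd / mean * 100.0
    return sd, rsd

# Illustrative triplicate readings (n = 3, as in the measurements)
sd, rsd = sd_rsd([24.0, 25.0, 26.0])
```

For the triplicate [24.0, 25.0, 26.0] this gives SD = 1.0 and RSD = 4.0%.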
Further analysis compared the electrochemical properties of the titanium dioxide nanotubes for selected numbers of cycles (8, 20, 40) and different concentrations of HAuCl4 (0.1 mM, 1 mM, 5 mM, 10 mM). The analysis included a comparison of gold nanoparticles with similar diameters but deposited with different numbers of cycles and different concentrations of tetrachloroauric acid solution. To confirm the possibility of using this platform as a biosensor, BSA protein detection was carried out. Figure 1a shows SEM micrographs of the surface and cross-section of TNT arrays prepared by anodic oxidation at 17 V in NH4F/ethylene glycol/H2O electrolyte solutions and annealed in argon according to the process described in Section 2.1. The TNT arrays, with a diameter of 50 ± 5 nm and a height of 1000 ± 100 nm, had smooth walls without any perforation and were uniformly arranged on the titanium foil. No damage to the TNT layers was observed after annealing at 450 °C for 2 h. Thermal modification of the TNTs at 450 °C in argon atmosphere for 2 h transforms TiO2 from the amorphous form (originally present in the nanotubes) into the crystalline rutile and/or anatase forms [23,24]. The most important advantage of annealing is the formation of oxygen vacancies, which improves the conductivity of the nanotubes and thus facilitates electron transfer, attributed to the conversion of Ti4+ to Ti3+ [25,26]. It has been suggested that thermal modification of Ti/TNTs carried out at 450 °C results in the predominance of anatase in their structure [2], which has a higher affinity for biomolecules.

Characterization of TNTs before and after AuNPs Deposition-Influence of the Number of Cycles

Figure 1b-f shows SEM images of the morphology of Ti/TNTs after deposition of gold nanoparticles. The samples are denoted as xAuNPs/TNTs, where x is the number of cycles of the deposition process, x = 8, 20, 40, 60, 80. The [AuCl4]− electrolyte solution can be ionized as seen in Equation (3):

[AuCl4]− ⇌ Au3+ + 4Cl− (3)
Au3+ ions near the titanium dioxide nanotube arrays can receive electrons and be reduced to Au, according to Equation (4) [19]:

Au3+ + 3e− → Au (4)

Mass transfer in solution occurs by diffusion, migration, and convection; diffusion and migration result from concentration gradients and electrochemical potential differences, respectively, while convection results from an imbalance of forces acting on the solution. The decrease in Au3+ concentration around the TNTs creates a concentration gradient between the bulk solution and the TNTs. Therefore, Au3+ ions move towards the polarized TNT surface, and as a consequence more reduced Au crystals are formed on the surface of the TNT arrays [19]. The AuNPs are homogeneously distributed on the surface of the TNT arrays. Due to higher local current densities, nucleation mostly takes place at the boundaries between titanium dioxide nanotubes (Figure 1b-f), i.e., the places where the nanotubes touch [19,27]. The countless boundaries (Figure 1a) provide many nucleation points for Au nanoparticles, and the homogeneity of the TNT arrays provides a homogeneous environment for their nucleation [27]. As can be seen in Figure 1e,f, the probability of nanoparticle aggregation increases with the number of cycles. To enhance the electrochemical response, the nanoparticles should be small and homogeneously dispersed on the TNT arrays.

Table 1 shows the Au content of the surface obtained by EDS analysis in three different places (mean value with SD). Due to the low content of gold nanoparticles on the Ti/TNT surface for depositions carried out for 8 and 20 cycles, the resulting EDS values were affected by a high measurement error and have therefore not been included in Table 1. The results reveal that the loading amount of Au gradually increases with the number of deposition cycles, from 1.42 ± 0.21 wt% for 40 cycles to 3.59 ± 0.18 wt% for 80 cycles. This is consistent with the results described by Lianghsen et al. [20]. Additionally, increasing the number of deposition cycles increases the diameter of the deposited gold nanoparticles from 14.3 ± 1.8 nm for 8 cycles to 28.7 ± 5.2 nm for 80 cycles, similarly to Babu et al. [21]. A linear relationship (R² = 0.998) between the number of CV cycles and the diameter of the gold nanoparticles deposited on the TNT arrays (Figure 2) may be observed. However, increasing the number of cycles also causes a higher spread of the gold nanoparticle diameter, i.e., a higher SD. According to Mahmud et al. [28], AuNP deposition on an annealed TNT surface, compared to a non-annealed one, promotes agglomeration around the pores of the titanium dioxide nanotubes with a rather poor size distribution.

Table 1 also shows the average open-circuit potential values for the annealed TNT samples before and after the Au nanoparticle deposition process. After AuNP deposition, the OCP of the samples is enhanced compared to the non-modified TNT arrays. Au nanoparticles deposited during eight cycles of cyclic voltammetry, corresponding to the lowest gold content (based on the EDS analysis), caused no change in the OCP value compared to the TNTs. AuNP deposition carried out for 20-80 cycles causes a general increasing trend in the OCP values. This can be explained by two factors, i.e., the homogeneous AuNP distribution on the surface of the TNT arrays and the inherent inertness of gold [19]. For AuNPs/TNTs deposited in a cyclic voltammetry process carried out for 20-80 cycles, a positive surface charge was observed. Negatively charged protein molecules are easily attracted to a positively charged matrix, which might be exploited in the construction of biosensing platforms.
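The linear trend between the number of CV cycles and the AuNP diameter (R² = 0.998, Figure 2) can be reproduced with an ordinary least-squares fit. The points below are the mean diameters quoted in the text; the 20-cycle diameter is not stated and is omitted:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b with coefficient of determination R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx
    b = my - a * mx
    r2 = sxy ** 2 / (sxx * syy)
    return a, b, r2

# Number of CV cycles vs. mean AuNP diameter (nm), as quoted in the text
cycles = [8, 40, 60, 80]
diam = [14.3, 20.3, 24.2, 28.7]
slope, intercept, r2 = linear_fit(cycles, diam)
```

With these four points the fit gives a slope of about 0.20 nm per cycle and R² ≈ 0.998, matching the value reported for Figure 2.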
The Nyquist diagrams (Figure 3a) determined for the titanium dioxide nanotube layers before and after the deposition of nanoparticles show fragments of wide, incomplete semicircles characteristic of thin oxide layers [29]. The values recorded at the lowest frequency (0.1 Hz), presented in Table A1 and Figure 3, show that the electrochemical parameters depend on the diameter of the Au nanoparticles. Due to the good electrical conductivity of the AuNPs, the impedance modulus of the TNTs decreases. For 60 and 80 cycles, the impedance modulus slightly increases again. This may result from the formation of agglomerates on the TNT surface (nanoparticles with diameters in the range from 24.2 ± 4.4 nm to 28.7 ± 5.2 nm).
As presented in Figure 1e,f, many of the nanoparticles are deposited on the inner surface of the titanium dioxide nanotubes, which causes their partial blockage. In addition, the ratio of the gold nanoparticle surface area to volume is reduced, so their electrochemical response deteriorates. It can be noticed that the real impedance (ReZ) of the xAuNPs/TNTs decreases compared to the non-modified TNT layers; however, this value increases with the gold nanoparticle diameter. The lowest real impedance, 4066 ± 94 Ω, and imaginary impedance, 4051 ± 97 Ω, were noted for nanoparticles with a diameter of 20.3 ± 2.9 nm (40AuNPs/TNTs). These samples also have the lowest SD and RSD for each of the determined electrochemical parameters. The phase angle values presented in the Bode plots (Figure 3b, Table A1), recorded at the lowest frequency (0.1 Hz), are related to the heterogeneity of the sample surface. The phase angle indicating the lowest heterogeneity (86.7 ± 0.2°) was observed for the 8AuNPs/TNT layers with the smallest gold nanoparticles (Ø: 14.3 ± 1.8 nm).
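For reference, the impedance modulus and phase angle discussed above follow directly from the real and imaginary parts of the measured impedance. A minimal sketch with illustrative numbers (not the tabulated values):

```python
import cmath
import math

def modulus_and_phase(re_z, im_z):
    """Impedance modulus |Z| = sqrt(ReZ^2 + ImZ^2) and phase angle in degrees."""
    z = complex(re_z, im_z)
    return abs(z), math.degrees(cmath.phase(z))

# Illustrative values of the same order as the 0.1 Hz data; the negative
# imaginary part reflects capacitive behavior.
mod, phase = modulus_and_phase(4066.0, -4051.0)
```

Here |Z| ≈ 5.74 kΩ and the phase angle is close to −45°, the signature of a strongly capacitive, CPE-like interface.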
The heterogeneity of the analyzed structures increases with the gold nanoparticle diameter, from 14.3 ± 1.8 nm for 8AuNPs/TNTs to 28.7 ± 5.2 nm for 80AuNPs/TNTs.

The equivalent circuit allows for good agreement between the experimental data and the simulated impedance plots for comparative estimation of specific components of the studied surfaces.
The equivalent circuit, which corresponds to both the TNT arrays/electrolyte interface and the AuNPs/TNTs/electrolyte interface, is shown in Figure 4, where Rs stands for the resistance between the sample and the solution, the parallel combination R1Q1 represents the resistance and constant phase element (with capacitance C1) of the porous TiO2, and the combination R2Q2 describes the titanium dioxide nanotube layer, bare or modified with AuNPs. Due to the surface heterogeneities of the TNTs and AuNPs/TNTs, a constant phase element Q is used in the model [6,30,31]. The capacitance values (C1, C2) were calculated from the fitted constant-phase-element parameters. The electrical parameters obtained by fitting the equivalent circuit to the measured data are shown in Table 2. For all electrodes modified by AuNP deposition, the charge transfer resistance decreases compared with bare TNTs, confirming easier electron transfer and the presence of deposited gold. The time constant (T) calculated for R2C2 increases with the number of deposition cycles and reaches its maximum for 60AuNPs/TNTs. The increase in T is accompanied by an increasing number of gold nanoparticles providing pathways for the electrons.
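The Figure 4 topology just described, Rs in series with two parallel R-CPE pairs, can be sketched numerically. The parameter values below are illustrative, not the fitted values of Table 2, and the CPE-to-capacitance conversion uses the common single-time-constant formula C = Q^(1/n) · R^((1−n)/n), an assumption since the text does not state which formula was applied:

```python
import math

def z_parallel_r_cpe(r, q, n, freq):
    """Impedance of a resistor R in parallel with a CPE, Z_CPE = 1 / (Q * (j*omega)^n)."""
    omega = 2.0 * math.pi * freq
    y_cpe = q * (1j * omega) ** n  # CPE admittance
    return 1.0 / (1.0 / r + y_cpe)

def z_circuit(rs, r1, q1, n1, r2, q2, n2, freq):
    """Rs in series with two parallel R-CPE pairs (the topology described for Figure 4)."""
    return rs + z_parallel_r_cpe(r1, q1, n1, freq) + z_parallel_r_cpe(r2, q2, n2, freq)

def cpe_to_capacitance(r, q, n):
    """Effective capacitance from CPE parameters; single-time-constant formula (assumed)."""
    return q ** (1.0 / n) * r ** ((1.0 - n) / n)

# Illustrative parameters only, not the fitted values of Table 2
rs, r1, q1, n1 = 50.0, 2e3, 1e-5, 0.90
r2, q2, n2 = 5e3, 2e-5, 0.85
z_01hz = z_circuit(rs, r1, q1, n1, r2, q2, n2, 0.1)  # impedance at 0.1 Hz
c2 = cpe_to_capacitance(r2, q2, n2)
```

In the low-frequency limit the CPEs block current and the impedance approaches Rs + R1 + R2, which is a useful sanity check on any fit of this topology.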
The significant change in time constant recorded for 60AuNPs/TNTs, and its subsequent decrease for 80AuNPs/TNTs, indicates a less stable deposition process, i.e., agglomerate formation. Moreover, deposition of AuNPs on TNTs results in a decrease of Rs, with the biggest change observed for the AuNPs/TNTs free of agglomerates, i.e., 8AuNPs/TNTs, 20AuNPs/TNTs, and 40AuNPs/TNTs. As shown in Tables 1, 2 and A1, the lowest value of |Z| was observed for gold nanoparticle deposition in 40 cycles, the lowest RSD for 8, 20, and 40 cycles, an OCP close to 0 for 20 and 40 cycles, and minimal values of T, providing the best electron transfer, for 8, 20, and 40 cycles. Therefore, for further analysis, including assessment of the impact of the tetrachloroauric acid concentration on the capacitance and adsorption properties of the TNT arrays, the samples 8AuNPs/TNTs, 20AuNPs/TNTs, and 40AuNPs/TNTs were selected.

Characterization of TNTs after AuNPs Deposition-Influence of Various Concentrations of Gold Salt Solutions

Figure 5a-i shows the results of the microscopic analysis of the TNT surface after the AuNP deposition process carried out at potentials ranging from −1.25 V to −0.7 V and for the numbers of cycles selected in Section 3.1 (8, 20, 40). The cyclic voltammetric process was carried out in different concentrations of HAuCl4 solution (1 mM, 5 mM, 10 mM).
The samples are denoted as xAuNPs/TNTs, where x = 8, 20, 40 is the number of deposition cycles. As can be seen in the microphotographs, the Au nanoparticles were spherical and highly dispersed both outside and inside the TNTs, especially on the tops of the nanotubes. Due to the high current densities, nucleation mostly takes place at the boundaries between the titanium dioxide nanotubes, which is consistent with the results obtained by Bai et al. [19] and Yang et al. [27]. The amount of AuNPs loaded onto the TNT array surface increased with the concentration of the gold salt solution. However, gold nanoparticles deposited at higher concentrations of HAuCl4 (5 mM, 10 mM) had diameters exceeding the diameter of the titanium dioxide nanotubes and formed many agglomerates (Figure 5d-i). According to Mahmud et al. [28], AuNP deposition on annealed TNT arrays, compared to non-annealed arrays, promotes the formation of agglomerates around the pores of the titanium dioxide nanotubes. This is due to the removal of residual ions by the thermal modification process, which is unfavorable for the dispersion of AuNPs and causes their agglomeration [28]. An additional advantage of using a solution of lower concentration (0.1 mM) is that there is no need for stabilizers such as polyvinylpyrrolidone [32], which prevent the formation of agglomerates. Thus, in order to obtain AuNPs with well-formed spherical geometry and small diameters, it is necessary to use solutions with a lower concentration of tetrachloroauric acid.

According to Figure 6, a linear relationship may be observed between the number of CV cycles and the diameter of the gold nanoparticles deposited on the TNT arrays at the lower gold salt concentrations: 0.1 mM (R² = 1) and 1 mM (R² = 0.987). For the higher concentrations of HAuCl4, i.e., 5 mM and 10 mM, the diameter of the AuNPs instead follows a logarithmic trend, indicating a less stable deposition process. Table 3 shows the Au content of the surface obtained by EDS analysis in three different places (average value with standard deviation). The results show that the loading amount of Au gradually increases as the concentration of the gold salt solution increases, which was confirmed by Babu et al. [21]. According to Bai et al. [19], TNT arrays modified with gold nanoparticles provide a biocompatible environment favorable for the attachment of cells, which show a typically elongated morphology, at a gold content of about 4 wt%. Deposition of gold nanoparticles in tetrachloroauric acid solutions of increasing concentration increases the nanoparticle diameter (Table 3). Thus, the obtained results confirm that it is possible to control the nanoparticle diameter and the amount of deposited gold by varying the electrolyte concentration and the number of deposition cycles. As the concentration of the gold salt solution increases, the probability of nanoparticle aggregation and the heterogeneity of the particles (a higher SD) increase as well. This reduces the ratio of the surface area to the volume of the obtained gold nanoparticles.

Table 3 also shows the average OCP values for the samples after the gold nanoparticle deposition process by cyclic voltammetry. Just as with an increasing number of cycles, an increase in the tetrachloroauric acid concentration also shifts the open-circuit potential to higher positive values. The obtained results are similar to those described in the literature: Bai et al. [19] analyzed TNT arrays after AuNP deposition by the chronoamperometric method for various process times and found that increasing the amount of deposited gold increases the corrosion resistance of the titanium dioxide nanotubes.
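The reduced surface-area-to-volume ratio mentioned earlier can be quantified: for a spherical particle, SA/V = 3/r, so doubling the diameter halves the ratio. A quick check with the smallest and largest mean diameters quoted in the text (14.3 nm and 28.7 nm):

```python
def surface_to_volume(diameter_nm):
    """Surface-area-to-volume ratio of a sphere, 3/r, in nm^-1."""
    r = diameter_nm / 2.0
    return 3.0 / r

small = surface_to_volume(14.3)   # ~0.42 nm^-1
large = surface_to_volume(28.7)   # ~0.21 nm^-1
```

The smallest particles thus expose roughly twice as much surface per unit volume as the largest ones, consistent with their better electrochemical response.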
High values of open circuit potential were observed for the samples modified with gold nanoparticles deposited in the solutions of higher concentration (10 mM for 20AuNPs/TNTs and 40AuNPs/TNTs). Electrostatic attraction between oppositely charged protein residues and the electrode surface facilitates the immobilization of the protein in an electroactive orientation, further facilitating direct electron transfer between a redox center and the electrode. The application of a potential difference to the electrode can affect the behavior of the proteins on the surface and even cause their denaturation. For that reason, the electrode potential should be close to zero. The electrical parameters obtained by fitting equivalent circuits are shown in Table 4.
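The fitted parameters Rs and T come from an equivalent-circuit model of the measured impedance spectra. As an illustration of how such a circuit produces a Nyquist-type spectrum, here is a generic Randles-type circuit with a constant-phase element (CPE): the circuit topology and all parameter values are assumptions for illustration, not the circuit of Figure 4 or the fitted values of Table 4.

```python
# Sketch of how impedance spectra are modelled with an equivalent circuit:
# Rs in series with a constant-phase element (parameter T, exponent n) in
# parallel with a charge-transfer resistance Rct. All parameter values are
# hypothetical, not the fitted values reported in Tables 4 and 5.
import math

def z_circuit(freq_hz, rs, rct, t_cpe, n_cpe):
    """Complex impedance of Rs + (CPE || Rct) at a given frequency."""
    w = 2.0 * math.pi * freq_hz
    y_cpe = t_cpe * (1j * w) ** n_cpe  # CPE admittance: T * (j*omega)^n
    return rs + 1.0 / (1.0 / rct + y_cpe)

# Hypothetical parameters: Rs = 30 ohm, Rct = 5 kohm, T = 1e-5, n = 0.9.
freqs = [10 ** k for k in range(-1, 6)]  # 0.1 Hz ... 100 kHz
spectrum = [z_circuit(f, 30.0, 5e3, 1e-5, 0.9) for f in freqs]

# At high frequency the CPE shorts out Rct, so |Z| approaches Rs;
# at low frequency the CPE blocks, so Re(Z) approaches Rs + Rct.
```

A lower fitted T corresponds to easier electron transfer in this representation, consistent with the interpretation of T in the discussion below.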
The T calculated for the lower concentrations of HAuCl4 (0.1 mM, 1 mM) is characterized by a minimum value, providing good electron transfer and repeatability of the deposition process. Increasing the concentration of HAuCl4 results in agglomerate formation and a less stable deposition process, confirmed by the higher value of T. For all electrodes modified by AuNPs deposition, Rs increased with increasing concentration of the gold salt solution. Among the analyzed samples, the 40AuNPs/TNTs sample deposited in 0.1 mM gold salt solution is characterized by the lowest Rs, confirming the best electrical conductivity of this sample.

Performing the cyclic voltammetry process with a 0.1 mM solution of tetrachloroauric acid as the precursor leads to the production of small-diameter gold nanoparticles and prevents the formation of agglomerates. The homogeneity of the AuNPs/TNTs results in higher repeatability, expressed by the lowest values of the relative standard deviations for deposition carried out in 0.1 mM HAuCl4 (Tables A1 and A2). Among these samples, 40AuNPs/TNTs is characterized by the lowest Rs and one of the easiest charge transfers (Table 4). This sample has a positive stationary potential (25 ± 8.6 mV) close to 0 V (Table 3), which does not deactivate proteins and promotes protein adsorption [29]. Therefore, for further analysis, including XPS and the deposition of biological elements, i.e., bovine serum albumin, TNT arrays with gold nanoparticles deposited by the cyclic voltammetry method for 40 cycles in 0.1 mM HAuCl4 were chosen.

The results of the XPS analysis of the TNTs before and after modification with gold nanoparticles (40 cycles, 0.1 mM HAuCl4) are shown in Figure 7. TiO2 and Ti2O3 were found on the surface of the nanotube layers.
The standard binding energy of Ti 2p3/2 in TiO2 is usually located at 457.7 eV for Ti3+ and at 459.5 eV for Ti4+ [33,34]. The O 1s binding energy for TiO2 is 529.3 eV [34]. The analysis of the XPS depth profiles of the TNTs and 40AuNPs/TNTs indicates a higher amount of oxygen absorbed inside the oxide film than on its surface. Thermal modification of the TNTs results in a lack of oxygen on the surface, which indicates its deficiency and the presence of oxygen vacancies, and results in improved electrical conductivity of the TNTs [25]. For 40AuNPs/TNTs, the main Au 4f7/2 line is shifted to lower binding energy (83.2 eV), which was found to occur for gold nanoparticles with well-spherical geometry [35]. This shift is caused by initial-state effects, whereby spherical NPs have a larger fraction of uncoordinated surface atoms, reducing their binding energies relative to nanoparticles of larger diameter [35].

Figure 8 shows the results of the electrochemical analysis in the Nyquist representation for the thermally modified TNTs and the AuNPs/TNTs before and after bovine serum albumin immobilization. The immobilization procedure was executed in accordance with Kopac et al. [36], in which BSA deposition was carried out on a double-walled carbon nanotube (DWCNT) system.
For this study, the optimum experimental conditions were observed to be at 40 °C due to the highest adsorption efficiency (72%) compared to the other temperatures: 25 °C (27%), 30 °C (32%) and 37 °C (53%). This showed that adsorption onto the DWCNTs increased with increasing temperature, which, according to [36], can be attributed to the availability of more adsorption sites and an increase in the sorptive surface area.

Evaluation of Experimental Conditions for BSA Protein Adsorption on the TNT and 40AuNPs Arrays

The values of the electrochemical parameters presented in Figure 8, obtained for the 40AuNPs/TNTs and the TNT layers before and after the deposition of BSA, show that the highest increase in impedance modulus recorded at the lowest frequency (0.1 Hz), by 1247 Ω (28%), was observed for the samples modified with gold nanoparticles, while for the non-modified titanium dioxide nanotubes the increase was only 843 Ω (14%). The electrical parameters obtained by fitting equivalent circuits are shown in Table 5; the equivalent circuit is shown in Figure 4. Deposition of BSA on the TNTs and 40AuNPs/TNTs arrays causes an increase in the Rs value. The highest increase in Rs, of 16.29 Ω, was recorded for the samples after modification with gold nanoparticles, which confirms the increased adsorption of bovine serum albumin on this platform. The 40AuNPs/TNTs arrays after the BSA deposition process are characterized by an almost two-fold increase of the T value, which is caused by the increase of the electron transfer resistance due to the formation of a protein layer. The better adsorption of biological elements on the 40AuNPs/TNT arrays results from the shift toward positive OCP values caused by the modification with gold nanoparticles. This is very important for the adsorption of BSA, which in PBS solution (pH 7.4) has a negative charge (the isoelectric point of BSA is about 5.4). According to several articles on protein immobilization, an appropriate pH for the immobilization of proteins would be around 7 to 8 [37,38].
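As a quick arithmetic check on the impedance changes quoted earlier in this section, the pre-deposition baselines implied by the absolute and relative increases at 0.1 Hz can be recovered; the calculation below simply inverts the reported percentages.

```python
# Consistency check of the reported |Z| changes at 0.1 Hz: from the
# absolute increase and the percentage, the implied pre-BSA baseline
# follows as baseline = increase / fraction.

def baseline_from_increase(delta_ohm, percent):
    """Implied pre-deposition |Z| given an absolute and relative increase."""
    return delta_ohm / (percent / 100.0)

z0_aunps_tnts = baseline_from_increase(1247, 28)  # ~4454 ohm before BSA
z0_tnts = baseline_from_increase(843, 14)         # ~6021 ohm before BSA
```

The two reported pairs (1247 Ω / 28% and 843 Ω / 14%) are thus mutually consistent with distinct baselines for the modified and non-modified layers.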
During the formation of a biological layer on the surface, each adsorbing molecule must go through the following steps: transport toward the surface, attachment, and spreading on the surface. According to the electrostatic binding hypothesis, the attraction between the negative surface residues, due to the isoelectric point of BSA, and the positive charge of the surface is responsible for the strong binding of BSA to gold nanoparticles. In this hypothesis, the protein attaches itself to the passivation layer on the gold surface, with little direct interaction between BSA and the gold surface [38]. According to Liu [39], modification of electrode surfaces with gold nanoparticles provides a microenvironment similar to that of proteins in native systems and gives the protein molecules orientation freedom. Peng et al. confirmed that the highest titanium dioxide nanotubes promote adsorption and stability of BSA protein binding [40].

Conclusions

The aim of the described study was to compare the electrochemical properties of TNT arrays before and after modification with gold nanoparticles. AuNPs/TNT platforms were produced using the cyclic voltammetry method, applying different numbers of cycles and concentrations of the gold salt solution. Due to higher current densities, nucleation primarily took place at the boundaries between the titanium dioxide nanotubes [19]. Increasing the number of cycles of the deposition process and the concentration of tetrachloroauric acid caused an increase in the diameter of the deposited gold nanoparticles, the amount of deposited gold, and the possibility of nanoparticle aggregation. The research showed that spherical, highly dispersed Au nanoparticles on the TNT arrays improved the capacitance of the developed platform. Another advantage of the AuNPs is the improvement of the corrosion resistance of the TNTs [19].
The authors determined that the greatest improvement in electrochemical parameters is obtained for nanoparticles deposited for 40 CV cycles in a 0.1 mM gold salt solution. Bovine serum albumin adsorption studies confirmed that modification of the TNT arrays with gold nanoparticles promotes the adsorption of biological elements. For the samples modified with gold nanoparticles, an almost two-fold increase of the T value was observed, confirming that the AuNPs improve the adsorption properties of the TNTs. Modification of the TNT surfaces with gold nanoparticles creates a microenvironment similar to that of proteins in native systems and gives the protein molecules freedom in orientation [38]. This study also suggests the possibility of exploring the use of gold nanoparticles to further improve the sensitivity of titanium dioxide nanotubes in label-free detection systems.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. Mean values of the phase angle, impedance modulus (|Z|), real impedance (ReZ) and imaginary impedance (−ImZ) recorded at the lowest frequency of 0.1 Hz, with SD and RSD, measured for annealed TNTs before and after the deposition of AuNPs using cyclic voltammetry carried out in 0.1 mM HAuCl4 for different numbers of cycles (8, 20, 40, 60, 80).
Cell–matrix and cell–cell interaction mechanics in guiding migration

Physical properties of tissue are increasingly recognised as major regulatory cues affecting cell behaviours, particularly cell migration. While these properties of the extracellular matrix have been extensively discussed, the contribution from the cellular components that make up the tissue is still poorly appreciated. In this mini-review, we will discuss two major physical components, stiffness and topology, with a stronger focus on cell–cell interactions and how these can impact cell migration.

Introduction

When cells migrate inside a multicellular body, they make extensive contact with their surrounding tissue. While biochemical signalling is important, physical forces and the mechanical properties of the tissue also contribute critical regulatory cues to the migratory behaviours of cells. Tissues are composed of two main components: the extracellular matrix (ECM) and the cells. The ECM contains proteins, such as collagens and fibronectin, that provide structural support to the tissue. The cellular component refers to the cells that make up that particular tissue. During migration, a cell that enters a tissue can encounter an array of physical cues and respond by adapting its migration strategy, which we will consider in more detail below. In this mini-review, we will summarise some of the recent developments in understanding the effects of the physical environment on cell migration by taking into consideration cell–ECM and cell–cell interactions. There has been considerable progress in the study of cell–matrix interactions, which has been reviewed elsewhere [1–4]; therefore, this review will focus mainly on the topic of cell–cell interactions during migration. We will draw on examples taken from a wide range of contexts in development, cancer biology and immunology to recapitulate the generality of these ideas.
The extracellular matrix and cell migration

The ECM stiffness

Much of our initial understanding of the effect of ECM stiffness on cell migration comes from using tuneable synthetic hydrogels such as polyacrylamide [5] or alginate gels [6]. By changing the concentrations of these substrates or the degree of cross-linking, the stiffness can be varied, which revealed the tendency of NIH3T3 fibroblasts [7], among other cell types [8,9], to preferentially migrate towards a stiffer substratum. This phenomenon is known as durotaxis [10] and was later explained using the molecular clutch model. In this model, there are five main components involved: the substratum, the integrins, the adaptor proteins, the filamentous actin, and the myosin motors [3,4,11–13]. The model begins with integrins binding to ECM ligands as well as connecting to the filamentous actin via adaptor proteins such as talin (Figure 1). As myosin contractility pulls on actin, this strengthens the integrins' affinity for their substrate (known as a catch-bond mechanism). The speed of this binding (also known as the force loading rate) is faster on stiffer substrates, which allows more integrins to cluster, resulting in an increase in traction force generation, thus biasing the cells towards stiffer regions.
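The dependence of the force loading rate on substrate stiffness in the molecular clutch picture can be illustrated with a minimal series-spring estimate: an engaged clutch (stiffness k_c) in series with the substrate (stiffness k_sub) is stretched by retrograde actin flow at speed v, so the load grows at rate dF/dt = k_eff · v with k_eff = k_c·k_sub/(k_c + k_sub). This is a schematic sketch, not the full stochastic motor-clutch model, and the parameter values are illustrative only.

```python
# Minimal illustration of why the force loading rate rises with substrate
# stiffness in the molecular clutch model: an engaged clutch (spring k_c)
# in series with an elastic substrate (spring k_sub) is stretched by
# retrograde actin flow at speed v, so dF/dt = k_eff * v, where
# k_eff = k_c * k_sub / (k_c + k_sub). Values are illustrative only.

def loading_rate(k_sub, k_clutch=1.0, v_actin=100.0):
    """Force loading rate (pN/s) for clutch + substrate springs in series.

    k_sub and k_clutch in pN/nm; v_actin in nm/s.
    """
    k_eff = k_clutch * k_sub / (k_clutch + k_sub)
    return k_eff * v_actin

# The loading rate increases monotonically with substrate stiffness and
# saturates at k_clutch * v_actin once the substrate is much stiffer
# than the clutch itself.
rates = [loading_rate(k) for k in (0.01, 0.1, 1.0, 10.0, 100.0)]
```

The saturation hints at why yet stiffer substrates do not keep increasing traction indefinitely; the biphasic "optimal stiffness" behaviour discussed next additionally requires the stochastic bond-breaking dynamics omitted here.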
Recent evidence refines this model by introducing the concept of optimal stiffness [12,14,15]. It is proposed that when substrate stiffness is too high, the force loading rate occurs too rapidly, which causes an uncoordinated engagement of the different components within the molecular clutch, a phenomenon known as frictional slippage. This provides three conclusions: (1) cells may not always prefer stiffer substrates over softer ones, (2) different cell types have a specific range of optimal stiffness, and (3) cells can move down a stiffness gradient to find their optimal stiffness, a phenomenon called negative durotaxis [16]. Further evidence supporting this hypothesis comes from calculating the ratio between the stiffness of the substratum and the independent stiffness of the cell. According to a new model that takes into account the 'soft substrate effect' [17] (a phenomenon that occurs when the substrate underneath the cell being measured deforms due to the pressure of the cantilever), cells do not change their cortical stiffness based on the underlying substrate. Therefore, cells can independently compute a window of stiffness where their actin cytoskeleton machinery is able to break symmetry and become polarised for migration, even if it means going against a stiffness gradient.
Matrix stiffness has a profound effect on the migration of cancer cells. Many solid tumours are found to be stiffer than the surrounding tissue, for example, ∼150 Pa in normal tissue versus 4000 Pa in breast cancer [18].

Figure 1 caption: Cells bias their migration towards the stiffer region due to a higher force loading rate of integrins binding to the substrate (k_on > k_off) at the front than at their rear (k_off > k_on). This allows more integrins to cluster at the leading front, hence higher actin polymerisation. Thicker arrows denote a higher rate than thinner arrows.

The idea of optimal stiffness could potentially explain why cancer cells leave their stiff tumour environment to invade the relatively softer normal surrounding tissue. In contrast, cancer cells can actively modify the stiffness of this matrix [19,20]. The network of fibres can be locally bundled up at the cell's anterior protrusions to provide traction to pull the cell forward [21]. Laser ablating just the front of this pre-strained region halts cell migration. In a stiff 3D matrix, cells have a more elongated morphology compared with the more clustered phenotype observed in a soft matrix [8,20,22], which perhaps reflects the migration-inducing property of stiff matrices. In breast cancer, a stiff matrix triggers the production of the oncogenic actin-regulatory protein MENA [23], which is known for participating in the formation of invadopodia capable of degrading the ECM and prompting haptotaxis towards blood vessels for intravasation [24]. In pancreatic cancer, the enzyme creatine kinase B (CKB) is gradually up-regulated with stiffening substrates in a YAP-dependent manner, which is thought to provide the ATP needed for faster actin turnover at the cell's leading front [25]. This means substrate stiffness controls how certain types of cancer can generate energy for proliferation and migration [26]. The link between mechanical cues and metabolism remains an exciting area for future exploration.
Substrate viscoelasticity is another important factor in modulating cell migration. If a soft substrate has a fast stress relaxation rate, meaning the deformation in the ECM remains even after the applied force has disappeared (a property known as viscoelasticity), then cells can use WASP-mediated actin-rich protrusions to wedge open a path and migrate through efficiently [27]. This behaviour has been observed in monocytes [27] and fibrosarcoma cells [28] and parallels observations in neutrophils [29] and dendritic cells [30], where WASP-mediated actin puncta were used to counteract matrix compression.

However, in vitro studies fail to recapitulate the complexity in vivo. Manipulating matrix stiffness in vivo, and having the capability to verify such manipulation, remain challenging tasks. Explant model systems, transparent embryos, as well as the use of second harmonic generation imaging, can give us a proxy for these properties in a more physiologically relevant context to bridge the gap between in vitro and in vivo.

The matrix topology

Inside an organism, cells are often embedded in a 3D space. Hence, its topology can directly affect the cell's behaviours. One such property is porosity. Porosity refers to how much free space is available within a matrix; it is defined by the ratio between the volume of the empty space and the volume of the total reservoir, and is often inversely correlated with density [31].
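The porosity definition above amounts to a simple volume ratio; a minimal sketch (the volumes used are hypothetical):

```python
# Porosity as defined above: the ratio of empty (pore) volume to the total
# reservoir volume, inversely related to matrix density. The volumes used
# here are hypothetical.

def porosity(v_empty, v_total):
    """Fraction of the reservoir volume that is free space (0..1)."""
    if not 0 <= v_empty <= v_total:
        raise ValueError("empty volume must lie between 0 and total volume")
    return v_empty / v_total

phi = porosity(v_empty=7.5, v_total=10.0)  # a highly porous matrix
```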
Generally speaking, smaller pore size impedes cell migration [32]. Human foreskin fibroblasts embedded in rat tail collagen polymerised at 4°C, which coalesces into longer and thicker bundles with an overall less dense network, migrate ∼2 times faster, using the pressure-based migration mode known as lobopodia, compared with gels of the same stiffness polymerised at 37°C, which are less porous [33]. A similar observation was made with macrophages: these cells exhibited a slower mesenchymal morphology when embedded in a dense collagen matrix, but switched to faster amoeboid migration in fibrillar collagen [34]. A more porous matrix also means more possible paths for a cell to move through. In the case of dendritic cells, this pathfinding process proves to be a struggle for cells that have multiple filopodia, while cells with a single filopodium move the fastest and most directionally [35–37].

However, it seems that in the case of collective migration, cells can migrate more efficiently in denser matrix conditions. Cancer cells form tubular structures that mimic the vascular network in high-density collagen or in low-density collagen mixed with the crowding agent polyethylene glycol (PEG) [38–40]. Tumour spheroids form more cell clusters that invade more readily into a higher-density collagen matrix. It is tempting to speculate that in a denser matrix, collective migration would be more advantageous than single-cell migration, since clusters generate a higher deformation force and can thus carve out a pathway for follower cells to move through.
For a single cell migrating through a matrix, the most rate-limiting step is fitting the nucleus, the stiffest organelle, through the pore [41]. Multiple studies suggest that the nucleus itself is utilised as a kind of piston to aid the migration process [42,43]. In fibroblasts, when cells are exposed to a low-porosity matrix, actomyosin contractility is triggered, which pulls the actin cytoskeleton connected to the nucleus via Nesprin-3 towards the front [43], effectively pressurising the cytoplasm and generating lobopodial protrusions (Figure 2A). This leads to an influx of ions through opening channels such as TRPV4 and NHE-1, which increases the osmotic pressure and draws in water at the cell front. This expands the protrusion to widen the viscoelastic matrix and allows the cell to pass through [44]. The increased contractility during constricted migration is due to the complete unfolding of the nuclear envelope, which triggers Ca2+ release from the endoplasmic reticulum or Piezo1-mediated Ca2+ influx. This triggers the binding of calcium-dependent phospholipase 2 (cPLA2) to the stretched outer nuclear envelope, catalysing the synthesis of arachidonic acid, which activates actomyosin contraction [45,46]. This stiffens the cell cortex and thus resists the compression from the dense matrix around it (Figure 2B). Calcium ions also suppress the activity of protein kinase A, a known activator of Rac1 [47]. This potentiates the elevation of Rho–ROCK signalling [48] and allows cells to enter contraction-based instead of lamellipodia-based migration. This mode of migration can be utilised for as long as the confinement site is not smaller than 10% of the nucleus's cross-sectional diameter, before matrix protease-dependent migration is activated [41]. What controls nuclear folding is an intriguing question that has not been explored but could potentially be used to control the threshold for how sensitive cells are to compression. It is also important to
note that the mechanisms described above have only been observed in vitro, while in vivo observations are still scarce [49], and thus more research is needed.

The cellular environment in guiding cell migration

Despite the intimate connection the cellular environment has to cell migration, this aspect of tissue mechanics remains poorly discussed. Studying the effects of cellular mechanics on neighbouring cell migration is somewhat less common because it often requires a native tissue or an in vivo system. Gaining access to the tissue of interest is not always possible, and even then, imaging these interactions as well as measuring the mechanical properties of the native tissue can often be technically challenging. Despite these difficulties, recent evidence from Xenopus, Drosophila and zebrafish embryos has suggested that such effects of cellular mechanics exist and can have major impacts on the migratory behaviour of neighbouring cells. In this section, we will discuss the role of cellular stiffness and tissue architecture in cell migration.
Cellular stiffness

One of the most recent pieces of evidence of in vivo tissue stiffness sensing comes from the study of neural crest migration in Xenopus embryos. The neural crest is a population of embryonic stem cells that delaminates from the neural fold and migrates along the dorsoventral axis. This migratory behaviour has been likened to cancer cell invasion during metastasis. During this process, the cell–cell adhesion molecule E-cadherin is down-regulated, while N-cadherin is up-regulated. Neural crest cells also follow the gradient of the chemotactic molecule SDF-1 (CXCL12), secreted by the neural placode, in a chase-and-run mechanism [50], while avoiding areas with inhibitory signals such as Versican [51] or Semaphorin 3A [52]. Inside an embryo, neural crest cells migrate between two thin layers of fibronectin present on the surface of the mesoderm and the placodal ectoderm (Figure 3). Intriguingly, atomic force microscopy (AFM) measurements of this mesoderm show a progressive increase in apparent stiffness from stage 13 to stage 20 embryos that correlates with the initiation of neural crest migration [53]. When the mesodermal layer is artificially softened by overexpressing constitutively active myosin phosphatase, or when the tissue tension is released through tissue ablation, the collective migration of the neural crest is prevented. In contrast, when the mesoderm is stiffened via overexpression of constitutively active myosin light chain, or when the tissue tension is enhanced by pressing with the AFM cantilever, the migration of the neural crest cells is promoted. Importantly, removing the fibronectin layer has no effect on the measured stiffness, which suggests that the ECM does not make a significant mechanical contribution in this case, apart from providing an adhesive substratum. Interestingly, in contrast with the global increase in mesodermal stiffness over time, the stiffness of the placodes is not homogeneously distributed. Careful measurements of the
placode reveal a dorsoventral gradient of stiffness in the same direction as the SDF-1 gradient [54]. It was proposed that the portion of the placode that is in contact with the neural crest is softened through N-cadherin signalling by reducing cortical actin. Although a detailed mechanism was not extensively discussed, it is not unreasonable to speculate that it follows previously described signalling pathways. For example, homotypic N-cadherin interaction recruits and activates Rho GTPase-activating proteins (RhoGAPs) like p190 [55] or Gap21/23 [56] that inhibit RhoA. This results in a reduction in cortical contractility and therefore reduces the apparent stiffness of the placodal cells at the interface with the neural crest. It is interesting to note that convergent extension, which leads to an increase in mesodermal cell packing, is the driving force behind mesodermal stiffening. While the neural crest makes contact with both the placode and the mesoderm, the mechanism by which the placode establishes the stiffness gradient remains an interesting question for future studies.
In another model, retinal ganglion cells, the neurons that form part of the optic system, were shown to preferentially bend towards softer tissue regions where fewer cells are packed together [57]. It was shown that the increase in tissue stiffness is due to an increase in cell density, showing again that neurons sense the mechanical properties of the surrounding tissue rather than of the ECM [58]. In vitro, axons were shown to be mechanosensitive in a Piezo1-dependent manner [57,58] and to be longer on stiff polyacrylamide gels than on softer gels, where they assume a more explorative morphology. This exploratory behaviour was argued to be important for the axon bundle to find the optic tectum in vivo. However, it is unclear whether this seemingly negative durotaxis is an active or a passive process. If it were passive, a possible explanation would be that the side of the bundle exposed to the stiffer substrate grows faster and is more migratory than the other side, so that the bundle naturally curves towards the softer region. If the bending were active, we would expect to observe more active growth cones forming on the softer side. Testing these hypotheses would, however, require a higher-resolution imaging modality, which could be technically difficult in vivo. Nevertheless, tissue mechanical properties set by cellular components play an important regulatory role in cell migration.
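The passive-bending hypothesis above can be illustrated with a toy calculation. This is a hypothetical sketch, not a model from the cited studies: if the two edges of an axon bundle elongate at different, stiffness-dependent rates, simple geometry predicts that the bundle curves towards the slower-growing (softer) side with no active steering required.

```python
# Hypothetical toy model (not from the review): passive bending of an axon
# bundle whose two edges grow at stiffness-dependent rates.

def bending_curvature(rate_stiff, rate_soft, width):
    """Approximate curvature per unit mean length of a bundle of given width
    whose edges elongate at different rates; positive values mean the bundle
    bends towards the softer, slower-growing side."""
    mean_rate = 0.5 * (rate_stiff + rate_soft)
    # Differential elongation across the bundle width produces curvature
    # kappa ~ (rate_stiff - rate_soft) / (width * mean_rate).
    return (rate_stiff - rate_soft) / (width * mean_rate)

# Example: the stiff-side edge grows 20% faster; bundle width of 10 (arbitrary units)
kappa = bending_curvature(1.2, 1.0, 10.0)
assert kappa > 0  # bends towards the softer side, as the passive hypothesis predicts
```

Under this assumption, any growth-rate asymmetry suffices to produce the observed bending, which is why distinguishing passive from active steering requires resolving growth cones on each side.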
Tissue architecture and topology

Similar to stiffness, the architecture of a tissue can have a profound impact on cell migration. In vitro, synthetic substrates with altered topology have been used extensively. Recent studies show that on a wavy substrate, cells naturally settle into the troughs, or negative curvatures, through a mechanism termed curvotaxis [59,60]. Positive curvatures, or convex structures, were shown to bend and compress the nucleus, likely in coordination with stress fibres that straddle its top. Through a mechanism likely similar to the one described in the previous section [45,46], this nuclear compression triggers myosin contractility, which pushes the nucleus down into the trough.

When these curvatures are small enough, cells can use them to migrate even in the absence of any focal adhesions. When talin-knockout T-cells are placed in a smooth microchannel, they fail to migrate. However, when the same cells are placed in a serrated channel containing repeated wavy patterns, they become mobile, albeit still slower than wild-type cells [61]. The larger the serration, the less effective the microchannel becomes at rescuing the migration of talin-knockout cells, suggesting that cells have a certain resolution limit for the substrate topology they can feel. This mode of migration along repetitive textures is sometimes referred to as ratchetaxis [62] and seems most relevant to cells that use bleb-based migration. The current proposal is that actin flowing from the front to the rear of the cell encounters resistance from the serrations, which generates a countering force propelling the cell forwards. One point to note is that the tested serrated patterns were symmetrical on both sides and high in stiffness, while native tissues are often heterogeneous and much softer. Hence, it remains to be seen whether ratchetaxis is relevant in vivo.
A related mode of migration that is potentially more physiologically probable is frictiotaxis. As the name suggests, this mode relies on non-specific frictional interactions between the migrating cell and its surroundings. Non-adherent Walker cells, which do not naturally form focal adhesions and cannot migrate effectively on a flat 2D substrate, migrate efficiently in microchannels coated with bovine serum albumin to increase friction [63]. The same cells fail to migrate when this coating is replaced with Pluronic F127, which is known to reduce friction. Interestingly, a recent preprint suggests that simply imposing a friction gradient can direct cells towards the higher-friction region [64]. The proposed mechanism is that the retrograde flow of actin is resisted by interactions between random transmembrane proteins and minuscule irregularities on the substrate wall which, combined with rear-end myosin contractility, create a propelling force driving the cell forwards. This is somewhat analogous to ratchetaxis, but instead of cell-scale topology, friction can act at the molecular scale. It remains to be seen whether frictiotaxis occurs in vivo.
In Drosophila embryos, ectodermal tissue architecture affects macrophage invasion into the germband. The ectodermal cell that blocks the entry gate to the germband needs to be physically moved away to let the first macrophage through (Figure 4A). Tumour necrosis factor (TNF) Eiger, secreted by the surrounding cells, triggers dephosphorylation of the myosin light chain in this ectodermal cell, resulting in loss of cortical tension and loosening of the blockage, which allows the macrophages to squeeze through [65]. The macrophages also respond to being squeezed by up-regulating the transcription factor Dfos, which leads to activation of Rho1 and the formin Dia [66]. This produces a global increase in cortical actin polymerisation within the macrophage body, possibly as a protective mechanism. In addition, upon rounding up during cell division, the entrance-blocking ectodermal cell temporarily loses its integrin adhesions with the laminin layer covering the mesoderm (Figure 4B), forming a physical opening for the macrophages to wedge into [67]. Interestingly, this division does not seem to be triggered by the macrophages, so the factors that set the timing of this crucial division remain to be explored. The studies discussed in this section highlight the extensive physical regulatory mechanisms that organisms employ to control cell migration.

In the same model organism, within the egg chamber, a cluster containing two polar cells surrounded by a few border cells migrates through a densely packed tissue of nurse cells [68,69]. When the cell-cell adhesion molecule E-cadherin is knocked down in border cells or nurse cells, the directionality of the cluster is significantly reduced. In contrast, when E-cadherin is overexpressed in nurse cells, migration of the cluster slows down but its directionality is unaffected [70]. Fluorescence resonance energy transfer (FRET) microscopy revealed that the front of the cluster is constantly under tension. Homotypic E-cadherin interactions activate Rac1 in a few front cells, promoting protrusion formation and thereby forming a positive feedback loop that drives directional migration of the cluster. Recent data also point to a role for the nucleus of the leader cell, which acts as a wedge to assist migration [71], analogous to the nuclear piston model described previously.

During the early development of zebrafish embryos, the prechordal plate migrates towards the animal pole, while the outer neurectoderm migrates in the opposite direction, towards the vegetal pole [72]. Cell tracking reveals a characteristic vortex-like movement of the neurectoderm in normal embryos, whereas in mutants that lack endogenous mesoderm or have defective mesoderm movement this vortex pattern is lost. It was then revealed that physical E-cadherin interactions between the prechordal plate and the neurectoderm give rise to this pattern: shearing E-cadherin-coated beads on top of a layer of ectoderm explant reproduces the vortex. This phenomenon was argued to arise from friction between the two tissue layers. However, it is interesting to note that friction-based migration is thought to occur through non-specific interactions of transmembrane proteins. The fact that only specific depletion of E-cadherin, or the use of E-cadherin-coated beads, had an effect argues against a purely friction-based hypothesis and suggests that a more specific cell-cell interaction is required. An interesting question is whether overexpression of a random transmembrane protein could also rescue the vortex pattern. Another example of cell-cell interactions comes from the zebrafish posterior lateral line primordium, where 3D imaging identified a cell subpopulation that lies on top of the cluster and makes extensive contact with the overlying skin tissue [73]. These so-called superficial cells extend lamellipodium-like protrusions and seemingly use the basal side of the skin as a substrate to assist the migration of the entire lateral line cluster. Importantly, removing the skin completely abrogated this migration, while an increase in the cluster's height was observed. This suggests that the skin tissue provides a form of compression in addition to serving as a substrate for the lateral line. In vitro, compression has been repeatedly shown to induce blebs in different cell types [74], so whether the skin compression seen in this case actively induces the observed protrusions remains to be elucidated (Table 1).

Outlooks

Studies of biological systems have been heavily focused on their biochemical aspects since the field's conception, but recent developments have argued for a significant role of physics in dictating many biological phenomena. By drawing on widely different contexts and model systems, we hope we have demonstrated the importance of mechanical properties and physical interactions between cells and their tissue environment. This knowledge not only helps us gain a better understanding of how cell migration is regulated, but also expands our toolbox for devising strategies for when things go wrong.

We also hope that we have brought more attention to the rather less-discussed cellular mechanics as one of the important influencing factors. Unlike the ECM, cells are alive and responsive to stimulation: they are active matter. Hence, any external mechanical impulse can be met with an adaptive response, which arguably can have a more diverse and complex outcome that we hope future research will be able to address.
Apart from stiffness and architecture, there are many other physical factors that we have not discussed in this review, such as hydrostatic pressure [75][76][77], tissue jamming and unjamming [78], and matrix alignment [79], to name a few. While it is useful to understand each factor independently, it is essential to recognise that the observed migration behaviour of a cell in tissue is likely the result of a combination of different properties, and that cells may use the same mechanism to adapt to different physical stimuli. Future studies should address how these factors influence each other and how much each of them influences a cell, particularly in an in vivo context.

Perspectives

• Understanding how physical factors regulate cell migration opens doors to a better understanding of many biological phenomena, with therapeutic implications for when these processes go wrong.
• Much of the current work in mechanobiology has focused on the physical properties of the ECM, while physical interactions between migrating cells and their surrounding tissue are less explored.
• To gain a complete understanding of how tissue regulates cell migration, future studies should integrate the properties of both the ECM and cellular components.

Figure 1. The molecular clutch model of durotaxis.

Figure 2. The nuclear piston and nuclear compression models allow cells to migrate through a low-porosity environment. Under non-constricted conditions, the nucleus has natural folds. Upon squeezing through a narrow constriction: (A) the nucleus is pulled forwards by actomyosin contractility through Nesprin-3, which pressurises the front of the cell. This opens ion channels such as TRPV4 or NHE, allowing ions to flux into the cell, increasing the osmotic pressure and drawing in water. The influx of water causes expansion of the front protrusion, which wedges open the matrix for the cell to pass through. (B) The nuclear folds are stretched, leading to the release or influx of Ca2+ ions into the cytoplasm through ion channels on the endoplasmic reticulum or on the plasma membrane. This triggers binding of the cPLA2 enzyme to the nuclear envelope to catalyse the synthesis of arachidonic acid, which triggers cortical actomyosin contraction and stiffens the cell, allowing it to pass through narrow pores.

Figure 3. The mechanism of durotaxis in vivo by the neural crest. Convergent extension causes mesodermal cells to pack together, increasing cell density and therefore tissue stiffness.

Figure 4. Guiding macrophage migration in vivo by tissue mechanics. Macrophages invade the germband through an opening between the layers of the ectoderm and the mesoderm. (A) TNF secreted by the surrounding cells binds to the TNF receptor (TNFR) on the ectodermal cell. This leads to dephosphorylation of myosin, lowering cortical actomyosin contraction and softening the cell. As the first macrophage squeezes between the ectoderm and the mesoderm, it up-regulates the transcription factor Dfos, which up-regulates mRNAs of actin cross-linking proteins and activates Rho1 and Dia to increase cortical actin polymerisation and contraction. This stiffens the macrophage and allows it to squeeze between the two layers of tissue. (B) The ectodermal cell at the entry point adheres through integrins to the laminin covering the mesoderm layer. When this cell enters division, it rounds up and temporarily detaches from the laminin. This creates an opening for the macrophage to wedge into.

Table 1. Different modes of tissue architecture and topology in cell migration. Superficial cells of the posterior lateral line primordium use lamellipodia to interact with the overlying skin tissue to drive cluster migration.
Tunneling Related Party Lending Phenomenon: Empirical Study on Family Business in Indonesia

This study examines the effect of family ultimate control with a pyramid structure, RPT disclosure, internal auditors, and independent commissioners on related-loan tunneling in Indonesia. It uses a sample of 258 public companies listed on the Indonesia Stock Exchange from 2016-2018. The study provides empirical evidence that family ultimate controllers with a pyramid structure practice tunneling through related loans. A further finding is that the level of disclosure of related-party transactions can reduce the potential for related-loan tunneling. Another important finding is the failure of the internal control mechanism: internal auditors and independent commissioners are not able to reduce the potential for related-loan tunneling in family companies in Indonesia.

INTRODUCTION

Related lending is lending activity between companies that have business-line relationships within a business group. These transactions are carried out to meet financing needs and to improve the performance of the group's companies. Related loans are an efficient alternative source of internal funding for business groups, compensating for the underdevelopment of capital markets in emerging countries (Khanna and Palepu, 2000). Another benefit of related lending is the low information asymmetry between the transacting companies, which minimizes the risk of default by the borrowing company. However, these transactions have caught the attention of investors and regulators because of concerns that the parties involved misuse them for their own interests to the detriment of other investors (Balyuk, 2014).
Related-party transactions are generally carried out by the board of directors or controlling shareholders, who use their authority to condition the transactions they conduct with their subsidiaries and to transfer contracts with suppliers to other companies under their control (Balyuk, 2014). In affiliated companies, related transactions are carried out by the controlling shareholder using long-term contracts. Using such contracts, the controlling shareholder can reduce his business risk and can affect the performance of the affiliated company through the control rights he owns. This practice generally occurs in developing countries and in countries with weak investor-protection regulations regarding related-party transactions (Djankov, La Porta, Lopez-de-Silanes, and Shleifer, 2008).

In Indonesia, related-party transactions are specifically regulated in PSAK No. 7 (revised 2010) concerning disclosure of related-party transactions. This standard aims to ensure that an entity's financial statements contain the disclosures necessary to draw attention to the possibility that its financial position and profit or loss have been affected by related-party transactions, including commitments with those parties. The attention of IAI (the Indonesian Institute of Accountants) to related-party transactions within the scope of this accounting treatment is highly relevant, considering that the position of related entities can affect the profit or loss and financial position of other entities even when transactions with related parties do not occur.

The ownership structure of public companies in Asia differs from that in America and Europe. Claessens, Djankov, and Lang (2000) found that public companies in America and Europe have dispersed ownership, whereas those in Asia, including Indonesia, have concentrated ownership structures. The consequence of a concentrated ownership structure is the existence of an ultimate controlling shareholder.
A World Bank survey (2010) found that the majority of businesses in Indonesia are family-owned or family-controlled companies. This is consistent with the research of Claessens et al. (2000) and Achmad (2008), which states that public companies in Indonesia are dominated by family businesses. Public company ownership in Indonesia generally takes the form of family companies with a pyramid structure (La Porta et al., 1999). Through this structure, controlling shareholders can increase their control rights beyond their cash-flow rights. This increased control allows controlling shareholders to be involved directly or indirectly in the management of the company. The positive impact of this increase in control is a reduction in type I agency conflicts between principals and agents, but it opens space for type II agency conflicts between controlling and non-controlling shareholders (Jensen and Meckling, 1976). This type II agency problem arises when the controlling shareholder is involved in making management decisions that have the potential to harm the interests of non-controlling shareholders. One solution for reducing this agency conflict is to improve the internal control mechanism by optimizing the roles of the independent commissioner and the internal auditor (Jensen and Meckling, 1976). The increased role of internal control is expected to suppress the opportunistic behavior of managers.
Tunneling practices have been widely documented in developing countries and are generally influenced by several factors, such as poor corporate governance systems (Claessens, Djankov, and Klapper, 2010; Gao and Kling, 2008; Juliarto, 2012), weak investor protection (Aharony, Wang, and Yuan, 2010; La Porta, Lopez-de-Silanes, Shleifer, and Vishny, 2000), the regulatory and business environment (Liu and Tian, 2012; Juliarto, 2012), and company ownership structure (Liu and Lu, 2007; Jiang and Wong, 2003). This study examines the effect of family ultimate control with a pyramid structure, RPT disclosure, internal auditors, and independent commissioners on related-loan tunneling in Indonesia, with firm size and industrial sector as control variables. It uses a sample of 258 companies listed on the Indonesia Stock Exchange from 2016 to 2018, selected by purposive sampling. The findings are expected to broaden investors' insight into the capital market and to enrich the still-scarce empirical evidence on the tunneling of RPT loans in family companies in Indonesia. The findings also provide input to regulators on the level of disclosure of related-party transactions and on the effectiveness of internal control mechanisms, particularly the roles of independent commissioners and internal auditors, which are expected to improve the protection of non-controlling investors in the Indonesian capital market.

The consequence of a concentrated ownership structure is the existence of an ultimate controlling shareholder. La Porta et al. (1999), Claessens et al. (2000), and Faccio and Lang (2002) classify controlling shareholders into five categories: 1) families, 2) governments, 3) widely held financial institutions, 4) widely held corporations, and 5) other controlling shareholders. They identify family ownership based on shared last names and marital relationships. Family members are categorized as one controlling shareholder under the assumption that they cast their voting rights as a coalition (Wiwattanakantang, 2000). At a 10% control-rights cutoff, the family is the most dominant type of controlling shareholder (La Porta et al., 1999; Claessens et al., 2000; Faccio and Lang, 2002). The dominance of family control provides flexibility for exercising tighter corporate control, but it also carries the potential for taking over the company's assets at the expense of other shareholders (Faccio and Lang, 2002; Villalonga and Amit, 2006). Family controllers use a variety of means to transfer wealth from companies with free cash flow to other companies in which they hold small cash-flow rights but large control rights (Johnson, La Porta, Lopez-de-Silanes, and Shleifer, 2000). Controlling shareholders can transfer the company's assets to obtain private benefits through transactions between the controlling shareholders and the controlled companies (Gilson and Gordon, 2003). Guo (2012) reveals that the presence of controlling shareholders with control rights higher than their cash-flow rights leads to a higher level of tunneling through related-party transactions. Li (2010) examined tunneling performed by controlling shareholders in Chinese public companies and found evidence of tunneling. Nugroho, Rahmawati, Bandi, and Probohudono (2020) state that 68% of public companies in Indonesia are family companies with a pyramid structure. Furthermore, Nugroho (2020) found that family ultimate controllers with a pyramid structure perform tunneling through related-party transactions. Based on these findings, the first research hypothesis is formulated as follows:

H1: Family ultimate control with a pyramid structure has a positive effect on the tunneling of RPT loans.
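The separation of control rights from cash-flow rights in a pyramid can be made concrete with a short sketch of the standard La Porta et al. (1999) convention assumed in this literature: along an ownership chain, cash-flow rights are the product of the stakes, while control rights equal the weakest link (the minimum stake in the chain). The ownership fractions below are hypothetical.

```python
# Sketch of the pyramid-ownership measure (La Porta et al., 1999 convention):
# cash-flow rights = product of stakes along the chain,
# control rights   = minimum stake along the chain (weakest link).

def pyramid_rights(chain):
    """chain: ownership fractions along Family -> Firm A -> ... -> target firm."""
    cash_flow = 1.0
    for stake in chain:
        cash_flow *= stake
    control = min(chain)
    return cash_flow, control

# Hypothetical example: the family owns 60% of firm A, which owns 30% of the target
cf, ctrl = pyramid_rights([0.60, 0.30])
wedge = ctrl - cf   # separation of control from cash-flow rights
# cf ≈ 0.18, ctrl = 0.30, wedge ≈ 0.12
```

A positive wedge of this kind is what the ULTM_PYRMD variable later captures: the family controls more votes than its claim on cash flows, which is the precondition for the type II agency conflict.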
Disclosure of RPTs is the activity of disclosing transactions between related parties in accordance with the provisions of PSAK 7 (revised 2010). Disclosure of transactions between related parties aims to inform parties with an interest in the company's financial statements about the nature and types of transactions between related parties and their effects on the financial statements. The measurement of RPT disclosure follows Juvita and Siregar (2015), comparing the disclosures made by the company with the total disclosures the company should have made. Research on the disclosure of RPTs (Utama and Utama, 2013; Hwang, Zhang, and Zhu, 2010; Juvita and Siregar, 2015) has produced mixed results. Utama and Utama (2013) found that the size of RPTs has a positive impact on firm value when the transactions involve loans from related parties, and no impact otherwise. Hwang et al. (2010) found that higher RPT disclosure increases the transparency of the company's financial statements and reduces the tendency for abusive RPTs to occur. Juvita and Siregar (2015) found that the disclosure of RPTs can reduce earnings management. Based on these previous studies, the second research hypothesis is formulated as follows:

H2: The level of RPT disclosure has a negative effect on the tunneling of RPT loans.

Internal audit is an independent, objective assurance and consulting activity designed to add value and improve an organization's operations (International Professional Practices Framework, 2011: 34). Internal auditors are established by organizations to help achieve company goals through a systematic approach to evaluating and improving the effectiveness of risk management, control, and governance processes.
The internal auditor is an independent work unit led by a Head of Internal Audit, who is appointed and dismissed by the President Director with the approval of the Board of Commissioners. In carrying out its activities, Internal Audit adheres to the Internal Audit Charter established by the Board of Directors after obtaining the approval of the Board of Commissioners. The Internal Audit Charter contains, among other things, the structure and position, responsibilities and authorities, code of ethics, and policies of the internal audit function. Tong, Mingzhu, and Feng (2014), in a study of state-owned enterprises in China, found that the quality of internal auditors plays an important role in controlling related transactions by controlling shareholders that have the potential to harm the interests of minority shareholders. Li, Sun, and Wang (2004) state that efficient internal control can, to some extent, prevent the controlling parties from committing fraudulent actions arising from agency problems, such as related transactions, managers' opportunistic behavior, and earnings management. The main function of the internal auditor is to assure and assist management with the supervisory function, to improve management effectiveness, and to implement governance within the company so as to increase value and improve company performance (Tong et al., 2014). Based on these findings, the third research hypothesis is presented as follows:

H3: Internal auditors have a negative effect on the tunneling of RPT loans.

Company management in Indonesia adopts a two-tier board system, separating the functions of the board of commissioners, the corporate organ responsible for supervision, from those of the directors, who are responsible for managing the company (Company Law No. 40 of 2007).
An independent commissioner is a member of the board of commissioners who is not affiliated with the board of directors, other members of the board of commissioners, or the controlling shareholder, and who is free from business or other relationships that may affect his independence in carrying out the supervisory function. Financial Services Authority Regulation No. 33/POJK.04/2014 states that the board of commissioners must consist of at least two commissioners, at least 30% of whom must be independent. Through this separation of functions and composition, the board of commissioners and the independent commissioners are expected to perform their supervisory functions optimally. Many empirical studies have examined the role of independent commissioners in tunneling, with mixed results. Several studies found that independent commissioners have a positive effect on earnings management (Liu and Lu, 2007) and on asset appropriation (Gao and Kling, 2008), a negative effect on expropriation (Shan, 2012) and on financial fraud (Chen, Chen, and Chen, 2009), but no effect on executive compensation (Chen et al., 2009) or firm performance (Shan, 2012). Empirical studies of tunneling activity in various countries likewise report mixed findings. Juliarto (2012) states that governance mechanisms in the form of foreign ownership and independent boards of commissioners in developing countries are not effective in controlling tunneling activities. Hastori, Sembel, and Maulana (2015) also found that independent commissioners are not a significant factor in reducing expropriation practices. Based on these previous studies, the fourth research hypothesis is formulated as follows:

H4: The proportion of independent commissioners has a positive effect on RPT loan tunneling.
METHOD

This study uses sample data from nine company sectors listed on the Indonesia Stock Exchange, excluding the banking sub-sector. The data were collected using a purposive sampling method, yielding 258 sample companies listed on the Indonesia Stock Exchange from 2016 to 2018. The measurement of the dependent variable, tunneling of RPT loans, follows Nugroho et al. (2020): the difference between receivables (short-term and long-term) and payables (short-term and long-term), divided by total assets. The short-term receivables used are other receivables excluding trade receivables, while the long-term receivables used are related-party receivables. Likewise, the short-term payables used are other payables excluding trade payables, while the long-term payables used are related-party payables. Data on the family ultimate controllers were obtained from Globe Asia (www.globeasia.com), which published data on the 100 largest business groups in Indonesia in 2018. Pyramid ownership structures were traced from company financial reports, IPO prospectuses, and company websites to compile the business-group structure of each sample company. The measurement of the level of RPT disclosure follows Utama and Utama (2013), who compiled a 10-item checklist based on the six categories stipulated in POJK VIII.G.7 concerning Guidelines for the Presentation of Financial Statements, particularly on related-party transactions. Data on internal auditors and independent commissioners were obtained from the companies' annual reports for 2016-2018.

RESULTS AND DISCUSSION

The hypotheses of this study were tested using binary logistic regression on two groups of companies, tunneling and non-tunneling. The main regression model of this study is:

T_RPTsL(i,t) = α0 + β1·ULTM_PYRMD(i,t) + β2·DSCL(i,t) + β3·IAUD(i,t) + β4·INDEP(i,t) + β5·SIZE(i,t) + β6·SEC(i,t) + ε(i,t)

where T_RPTsL is the tunneling of RPT loans.
ULTM_PYRMD is the family ultimate controller through a pyramid ownership structure, DSCL is the disclosure of RPTs, IAUD is the internal auditor, and INDEP is the independent commissioner. The study uses two control variables: SIZE, the company size, and SEC, the industrial sector.

Table 1, Panel A, presents descriptive statistics for the full research sample (N = 258). The dependent variable T_RPTsL has a positive mean (14.17%), indicating tunneling behavior among the sample companies. The ULTM_PYRMD variable has a positive mean (6.09%), showing that family ultimate controllers with a pyramid structure hold control rights exceeding their cash-flow rights by 6.09% on average. The DSCL variable has a mean of 62.57%, indicating that the average sample company discloses 62.57% of what it should disclose. The IAUD variable has a mean of 4.88, indicating that on average the sample companies meet the minimum requirement of POJK No. 56/POJK.04/2015 that a public company have at least one internal auditor. The INDEP variable has a mean of 42.67%, indicating that the average sample company meets POJK No. 33/POJK.04/2014, which requires independent commissioners to make up at least 30% of the board of commissioners. The SIZE control variable has a mean of 12.72, indicating that the average sample company is large, i.e., above 10.00. Of the 258 sample companies, 146 companies (56.60%) fall into the tunneling category (1) and the remaining 112 companies (43.40%) fall into the non-tunneling category (0).
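The tunneling measure and dummy coding described above can be sketched as follows. The figures in the example are hypothetical, and the rule that a positive net related-loan position marks a firm-year as tunneling is an assumption based on the paper's description, not its exact classification procedure.

```python
# Sketch of the related-loan tunneling measure described in the text
# (following the description attributed to Nugroho et al., 2020): net related
# receivables minus related payables, scaled by total assets.

def t_rpts_loan(related_receivables, related_payables, total_assets):
    """Net related-party loan position as a fraction of total assets."""
    return (related_receivables - related_payables) / total_assets

def classify(measure):
    """Assumed dummy coding for the logit: 1 = tunneling, 0 = non-tunneling."""
    return 1 if measure > 0 else 0

# Hypothetical firm-year: 50 in related receivables, 20 in related payables,
# total assets of 1,000 (all in the same currency units)
m = t_rpts_loan(50.0, 20.0, 1000.0)   # 0.03, i.e. net lending of 3% of assets
assert classify(m) == 1
```

Scaling by total assets makes the measure comparable across firms of different sizes, which is why the sample mean of 14.17% can be read as a net related-loan position relative to assets.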
Based on the t-test, the t-statistic is 2.503 with a significance level of 0.013 and a mean difference of 29.58%. These results indicate a significant difference (0.013 < 0.05) between dummy 1 (tunneling) and dummy 0 (non-tunneling) in the tunneling activity of RPTs loans. Classifying the sample into family and non-family business groups shows that, of the 258 sample companies, 186 (72.10%) are family companies (1) and the remaining 72 (27.90%) are non-family companies (0). Based on the t-test, the t-statistic is 8.265 with a significance level of 0.000 and a mean difference of 2.39%, indicating a highly significant difference (0.000 < 0.01) between dummy 1 (family firm) and dummy 0 (non-family firm) across the two groups of companies.

Table 2 shows the Pearson correlations across the sample (N = 258). The correlation between T_RPTsL and ULTM_PYRMD is positive and significant at the 1% level. The correlation between T_RPTsL and DSCL is negative and highly significant at the 1% level. In contrast, the correlations of T_RPTsL with IAUD and with INDEP are positive but insignificant. The correlation between T_RPTsL and the control variable SIZE is negative and highly significant at the 1% level, while the SEC variable shows a negative but insignificant correlation.

Binary logistic regression is used to identify the predictors that differentiate companies indicated as tunneling from non-tunneling companies. Table 3 presents the results of the binary logistic regression test for the 258 samples. The pseudo chi-square value is 345.319 with a p-value of 0.000, the Nagelkerke pseudo R-square is 16.7%, and the classification accuracy of the logistic regression model is 67.4%.
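The authors estimate the binary logistic model with a standard statistics package. Purely as an illustration of the mechanics behind such a fit, the following minimal gradient-ascent estimation of a logit is a sketch under stated assumptions: the function name, learning-rate settings, and toy data are all invented for this example and do not reproduce the study's estimation or sample.

```python
import math

def fit_logit(X, y, lr=0.1, iters=5000):
    """Fit a logistic regression (intercept plus one coefficient per column)
    by plain gradient ascent on the log-likelihood."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)                      # beta[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability of y = 1
            err = yi - p                        # score contribution
            grad[0] += err
            for j, x in enumerate(xi):
                grad[j + 1] += err * x
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Toy data: one predictor that raises the odds of the outcome (e.g. tunneling = 1).
coef = fit_logit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
```

The estimated slope is positive here, mirroring how a positive and significant coefficient on a predictor such as ULTM_PYRMD is read as raising the odds that a firm falls in the tunneling group.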
Table 3 summarizes the results of the binary logistic regression test. Ultimate family control via a pyramid structure (ULTM_PYRMD) has a positive regression coefficient that is highly significant at the 1% level, supporting H1, which states a positive relationship between ULTM_PYRMD and T_RPTsL. The RPTs disclosure variable (DSCL) has a negative regression coefficient that is highly significant at the 1% level, supporting H2, which states a negative relationship between DSCL and T_RPTsL. The tests of internal auditors (IAUD) and independent commissioners (INDEP) on related-loan tunneling are insignificant; these results do not support Hypothesis 3 and Hypothesis 4. The SIZE control variable has a positive and significant regression coefficient, while the industrial sector variable (SEC) is insignificant.

CONCLUSION

The purpose of this study was to examine the relationship between ultimate family control through a pyramid structure, RPTs disclosure, internal auditors, and independent commissioners on the one hand and related-loan tunneling on the other, with firm size and industrial sector as control variables. The test of hypothesis 1 indicates that ultimate family control through a pyramid ownership structure has a positive and highly significant effect on related-loan tunneling. This finding can be interpreted as follows: related loans in business groups with a pyramid ownership structure tend to be used by the ultimate family controlling shareholder to carry out related-loan tunneling practices that harm the interests of minority shareholders. The finding also confirms type II agency theory, namely that the separation of control rights and cash-flow rights under concentrated ownership triggers agency conflicts between controlling and minority shareholders.
The dominance of control held by controlling shareholders in business groups with a pyramid ownership structure can be freely used to control the company's management, either directly or indirectly. The test of hypothesis 2 shows that the disclosure of RPTs has a negative and highly significant effect on related-loan tunneling. This finding can be explained as follows: the more related-transaction disclosure a company makes, the lower the likelihood of related-loan tunneling practices by the ultimate family controller. Based on the statistical tests (Table 2), the average level of RPTs disclosure in the sample companies, 62.57%, meets the minimum RPTs disclosure threshold of 6 out of 10 disclosure items in accordance with PSAK 7 (revised 2010). These findings indicate that higher RPT disclosure increases the transparency of the company's financial statements and reduces the tendency for abusive RPTs to occur (Hwang et al., 2010). They also confirm agency theory: an effective monitoring mechanism, in this case the transparency of RPT disclosure, can reduce the opportunistic behavior of company management controlled by the ultimate family controller. The test of hypothesis 3 indicates that internal auditors have no effect on the tunneling of RPTs loans carried out by ultimate family controllers. Although the sample companies have 4.88 internal auditors on average, meeting the minimum requirement set out in POJK No. 56/POJK.04/2015 that a public company have at least 1 internal auditor, the presence of internal auditors is not effective enough to control tunneling practices through RPTs loans carried out by the ultimate family controller.
This is one of the more interesting findings of this study. Although the internal auditor has a clear mandate, stated in detail in the Internal Audit Charter, the strong control rights of the ultimate family controller, as the central power of decision making and management policy, render the function and role of the internal auditor ineffective and limited to mere compliance with regulatory provisions. The test of hypothesis 4 found that independent commissioners have no effect on the tunneling of RPTs loans carried out by ultimate family controllers. Although independent commissioners make up 42.67% of the boards of the sample companies on average, meeting the minimum proportion of 30% of the board of commissioners stipulated in OJK Regulation No. 33/POJK.04/2014, their presence is not effective enough to control tunneling practices through RPTs loans carried out by the ultimate family controller. This, too, is an interesting finding. Although the process of selecting independent commissioners in Indonesia is reasonably well regulated in Law Number 40 of 2007, and the regulations were improved through Bapepam Regulation Number IX.I.5 of 2011, in practice the presence of independent commissioners in family companies only meets formal requirements and is not intended to make corporate governance practices effective. Meanwhile, the ultimate family controller has such strong dominance of control that the function and role of the independent commissioner become ineffective. The implications of this research are as follows. First, the Government can encourage OJK, as the regulator of the Indonesian capital market, to make the various existing regulations more effective, especially those related to the disclosure of related-party transactions.
Through firm law enforcement, it is hoped that the various negative impacts of related-party transactions can be reduced, since such transactions are often used by ultimate family controllers to carry out illegal transactions that have the potential to harm the interests of non-controlling shareholders. Second, the government, through the relevant institutions, can continue to encourage the implementation of corporate governance practices in public companies in Indonesia. Improving the quality of corporate governance can strengthen the role of internal auditors and independent commissioners in carrying out their supervisory functions, and thereby reduce the occurrence of related-loan tunneling practices in Indonesia.
Self-Directed Female Migration in Ghana: Health and Wellness of Elderly Family Caregivers Left Behind. An Ethnographic Study

Driven by the global economic crisis, families are developing strategies for survival, including self-directed female migration. Female migration has negative and positive impacts on families in rural areas. The purpose of the project was to explore the health and wellness experiences of elderly family caregivers who have female family members who have migrated to improve the status of their families. In this focused ethnographic study, we interviewed elderly family members who had a female family member who had migrated outside their community for employment. Participants were enrolled from northern Ghanaian communities known to be economically disadvantaged in comparison to their southern counterparts. All interviews were audio-recorded, transcribed verbatim, and translated into English. Data were analyzed based on thematic content. Major themes that emerged were reasons for children leaving their families; physical, emotional, and spiritual health; and social and economic struggles. The challenges of family care work undertaken by the elderly in families with emigrated female kin also emerged strongly as a theme. New contextual knowledge was developed about the impact of self-directed female migration on the health and wellness of elderly family caregivers. This information is valuable for the development of culturally appropriate social support and health practices for female migrants and their families.

Introduction

Driven by the global economic crisis, families are developing a variety of strategies for survival, including self-directed female migration [1]. Women increasingly make their own decisions about migrating and organizing their work rather than deferring to family decision-making processes. The feminization of migration has been a clear trend within migration over the last few decades, with more women migrating to obtain work and support their families [2].
Migration is mostly from rural to urban areas and, in the African context, migration is a family matter that includes the sending of remittances by migrants to their families [3]. When little attention is paid to the social determinants of health (SDOH), there are serious consequences for female migrants and their families, specifically for the wellness of the elderly family members who stay behind. While the financial benefits of female migration have been reported, Dungumaro [3] argues that female migration has more negative than positive impacts on families in rural areas. A negative impact may include changes in values and norms that are not always congruent with the traditional values of the migrants' communities of origin [2,3]. This project builds on previous research conducted by a team (Vallianatos, Richter, Ansu-Kyeremeh & Aniteye) in 2014/15, which explored how migration influences the understanding of health and health behaviors among working women in Ghana who have or have not migrated. Insights from that study enriched our understanding of the intersection of migration, gender, and health. Women who migrated faced different challenges than migrant men. The harsh environment affected their physical, psychological, and social health and, in particular, safety was a great concern. Our analysis revealed the need to expand the research to northern Ghana, from where most of our migrant participants originated. In low-middle income countries (LMIC), and particularly in Ghana, we know little about the effect of migration on the elderly family members left behind. The purpose of the project was to explore the health and wellness experiences of elderly family caregivers (living in northern Ghana) who have female family members who have migrated to improve the financial status of their families. This paper highlights the challenges and opportunities of care work undertaken by the elderly in families with female kin who have migrated.
Materials and Methods

Our research design was a focused ethnographic study. Focused ethnography concentrates on distinct experiences in a particular culture or subculture [4]. Focused ethnographic research explores participants' beliefs and practices, viewing them within the context in which they occur rather than aiming to produce findings that can be generalized. The methodology has been used to identify how people from different cultures integrate health beliefs and practices into their lives [5] and to describe the meaning cultures or subcultures ascribe to their experiences [6]. Ethics approval was received from the Human Research Ethics Review Board at the University of Alberta (Pro0071082), Canada. Participants were informed in detail about the research project, including the benefits and risks of participation. All research staff signed a confidentiality agreement explaining that all data are confidential and private and describing their role in maintaining confidentiality. All participants gave oral consent and had the option to withdraw from the study at any time. We employed convenience and snowball sampling techniques. To be included in the study, participants had to be elderly family members aged 50 years or above who have or have had a female family member who migrated outside their community for employment. Participants were enrolled from northern Ghanaian communities, which are known to be economically disadvantaged in comparison to their southern counterparts. Data were collected in 2017 over two weeks. We visited the participants at their homes in the local communities, employed semi-structured interviews, and wrote extensive field notes. Interviews were conducted with the support of local research assistants who spoke Dagbani, the participants' native language. All interviews were audio-recorded, transcribed verbatim, and translated into English.
A bilingual member of the research team, fluent in English and Dagbani, oversaw the quality of the translation. Data were analyzed using word processing software, based on thematic content [7]. Each interview was coded to identify concepts in what was communicated. Codes were formulated through a line-by-line analysis of concepts identified in the data. Comparative analysis of codes, and of participants' use of them, led to the development of subthemes. Themes were developed both from the subthemes that emerged from the data and by comparison with concepts reported in the literature. Rigor was maintained by ensuring the research process was transparent by way of an audit trail, member checking, reflexivity, and ongoing discussion within the research team.

Demographics

Eighteen participants were interviewed. Their ages ranged between 51 and 90 years, with an average age of 70 years. Seventeen of the participants were women, and one husband-wife pair was interviewed. The participants were all Islamic and earned a living by farming and/or petty trading. Nine participants were widows, one was divorced, and eight were married. Ten of the participants were, previously or at the time of the interviews, in a polygamous marriage. All the participants had female family members who worked away from home as head porters (kayayei), hairdressers, or vendors selling food on the streets and in the markets. All female family members had been working away from home for between seven months and seven years.

Major Themes

The major themes that emerged were the reasons for children leaving their families and the struggles and challenges of family care work undertaken by the elderly in families with emigrated female kin. Participants talked about the reasons their daughters left their families and travelled to the city. The reasons for migrating affected the severity of the health experiences described in the other themes.
The reason for migrating was mainly to find work. Most of the participants' daughters became pregnant at a young age and had to leave school. As it was difficult for them to return to school, most of the girls decided to leave their children with their parents and travel to urban areas to search for work to support the family income. A mother explained: . . . they are not yet grown. They are still children. It is only this child's mother who is grown, and she was going to school, but because we do not have money she could not continue. She gave birth and left her baby with me and travelled. Another participant added: . . . she just had the baby [at] home, here. She was going to the school but, because of the pregnancy, it was not possible, so she gave birth to the child and we are caring for her. She left her with me and said she will go and see . . . she said she has regretted so she would be patient and work for money. Some daughters left home to support other children in the family. School fees were a commonly expressed need, as noted by a participant: . . . because her brother is in school and there is no support, she went there [the city of Accra] so that when school fees come then we will tell her and she brings money for us to pay the fees. Most of the participants live in low socioeconomic circumstances and do not have the means to provide the dowry needed to make their daughters marriageable. A participant said her daughter had to look for work to buy the kitchen supplies she was expected to have in order to marry: You know, us women if you want to marry and you don't have bowls [kitchen equipment] you cannot get married, so she decided to go and work and gather her bowls in anticipation of marriage. Another participant shared that her health issues prevent her from farming and being self-sustaining.
Her daughter left to support her financially.

Struggles and Challenges of Family Care Work Undertaken by the Elderly in Families with Emigrated Female Kin

Participants talked extensively about the physical, emotional, spiritual, social, and economic challenges related to their female family members being away from home. Participants shared that the children being away contributed to more work around the home and farm; now, "when the day breaks, if I know I have some work to do and no one would be there to help, I just face it." They find it difficult to manage all the daily chores around the house, for example, fetching water and firewood, cleaning, and cooking. There are many responsibilities and much hard work for the elderly. Another participant shared: I suffer a lot because of her absence. So, I think her stay here at home is much preferred than her stay over there because I cannot even carry water on my head. When I carry water on my head I cannot sleep at night, again when I cook, I cannot sleep at night because of the smoke. Yet, I do all that in her absence because I don't have anyone to do them for me. And the other daughter is not old enough to do those things. Another participant added that she is not physically able to do all the work. She elaborated: When they were here and I was strong, they used to work and I will also work and we were all taking care of our responsibilities and needs and by then I did not have any problem, but now they are away and I am no longer strong to be able to work to earn a living. Food insecurity was discussed in detail by participants. Their food production depends on how much they can harvest, which contributes to feeding the family and to finances for other needs. A participant explained: . . . our problem is our farming issues.
When it is time to farm and you have enough in your hand to farm, you can cultivate a large scale of land to get enough food that can sustain the family and leave some for sale. If you leave some to sell, at least it can cater for other problems. But, if you don't have enough food, you don't even get satisfied, how can you sell some to cater for other problems? Participants went on to say that in these cases their daughters offer support by buying food. A participant described: But anytime [I] need financial support, she sometimes buys foodstuffs and sends to us. The social and economic struggles and challenges experienced centered on the financial need for survival, as elaborated by a participant: " . . . our worry is just that we do not have enough [for] our children; they have gone out and since they have gone out and have also left a burden on us, that is our worry." Most participants' daughters left to contribute to the finances of the family, but family members said "that since they have moved, we cannot say there have been changes." They do not see many financial benefits from the children working away from home: "What she sends is not enough to help." The financial contribution depends on the availability of work in the urban areas, as explained by one mother: . . . there is a bit of change, but it is just that it is not much change, but it is better than how it was. Is only like when she is . . . she is over there and she has not gotten any [earnings], we will not also get any [money]. It is just not enough, is not much, it doesn't help. Participants shared that it was difficult for their children to sustain themselves while working in the markets and additionally send money home. An older mother shared that their daughters have to pay for accommodation, food, and other living costs while working in the markets, and that their income and how much money they can send home also depend on the availability of work in the markets. She shared: . . .
that the child, hmm, that the market has fallen and so they don't get work like before. . . . they get but what they used to get they don't get anymore. [And] because she doesn't get enough is not being able to support. About the finance, since she doesn't give me, there is no help. The daughters that have left often leave their children with their elderly family members and that causes an extra economic burden. A participant shared: I am also just suffering and the little I get I have to share with the child she left behind for me. So, it is an extra burden on her having children to take care of. It is just added to you because she is gone. Another participant talked about the benefit of her daughter's contribution to the income of the family. Most of the participants highly valued the education of children. She elaborated: . . . she sent money to the house because when her brother was going to school, she used to send money for the brother to be in school and also her son, her daughter too is in school and because of that, she sends money to take care of her daughter's school fees because her daughter is going to private school. Participants talked about the effect on their emotional and spiritual health as shared: "I cry in the night because I am thinking of the absence of the children." They mentioned worrying about their children being away from home and how they wish they would come home soon to support the family. A participant shared that "that [they] do not have enough children; they have gone out and since they have gone out and also left a burden on [them] that is [their] worries." Their children's absence affects their emotional and spiritual health and contributes to physical symptoms such as sleeplessness, as described by an older participant: "They went because of compelling reasons, so I am worried. Their absence worries me and gives me sleepless nights." 
Their children's absence is affecting them "greatly because if you're sick and you want to go to hospital, you will need money to go to hospital, you even need someone to carry you to the hospital, but my children are not here to give me money or carry me to hospital and because of that I no longer go to seek for treatment or seek for medication. I will be just in the room and live with God's favor." Another participant continued: Now that they (adult children) are not here, [it] is not making me happy because it will take a long time and I will not see them and not seeing them and they don't also have what will make them come so we meet together. Participants relied on their religion to manage their emotional health, as described by a participant: . . . if not because of the lack of certain things she would not have gone. Because lack . . . [causes] some difficulties, that is why she has gone. If she were to be around, she would have been helping me in some things. Now that she is away, we only pray that she gets something and comes back, and I will be better again. As she is not there, it worries me so much because if she is there, she will be helping me [with] something. They "only pray that [their daughters] get something and come back." They believe it will all be better if they return. Participants talked at great length about happiness and what makes them happy and how it contributes to their emotional health. Some remarked about "financial stability. If you are financially sound, you would be able to take good care of your children without even watching your back. But, if you are there without anything financial [support], you would not be able to look after your kids and that alone can bring you lots of worries." Participants' sense of happiness is related to physical health and having the means to support their family, as eloquently described by a participant: . . . 
what I know about happiness is when you wake up and you and your children, they are healthy, and you can get them what they need to eat for the day and tomorrow, then that is happiness. If you are healthy and your children are healthy, then you are happy. Happiness was associated with the ability to care for their family members.

Discussion

Many studies have been conducted on women who are left behind when their husbands become migrant workers [8,9] and on the effect on elderly parents when their children migrate [10,11]. Fewer studies, however, have been conducted in the context of low-middle income countries, especially with a focus on female migration and the elderly parent(s) left behind. In low-middle income countries, younger women are often the caregivers of their elderly parents, and when they migrate, it has a profound physical, emotional, and social effect on the elderly left behind. The migration of female family members contributes to significant changes in family function and often results in difficulties in kin relationships [12]. Using intersectionality as a framework uncovers the sources of oppression faced by families who decide to have their female family members migrate for economic reasons. Intersectionality is well suited to the health disciplines for developing an understanding of the ways in which social locations and identities affect individuals, families, and communities, and of their effect on caring. Multiple factors intersect and influence the experiences of the elderly left behind, for example, the gender of the adult child who has left, cultural factors (dowry), and oppression (oppressive living arrangements and the availability of work, which cause stress for the family members left behind). Our study focused on the wellness experiences of the elderly family members left behind when female members decided to migrate for work. Globally, unemployment, low income, and lack of education have supported younger people's decisions to migrate.
Migration has become a normal part of younger people's lives and, more often, younger women are moving, mostly from rural to larger urban areas [13]. In Ghana, the high poverty levels in the northern parts of the country motivate girls and women to migrate to urban areas such as Accra and Kumasi to engage in what is locally called "kaya business" (head portering) [14]. The need for female members of the family to migrate was driven by the economic need for survival. It was used as a "temporary option to financially support their families in Northern Ghana" ([15], p. 5). Schooling is free in Ghana, but many children, particularly those living in rural areas, leave school at a young age. Economic necessity forces children, and girls more so, to drop out of school to search for work, to care for younger siblings, and to help with domestic work. Several societal and cultural norms pressure elderly women to fulfill the role of caregiver in the family context [16]. It is important to understand the socioeconomic and cultural context in which these families function. In Africa, and particularly in northern Ghana where this study was conducted, cultural norms are strong and dictate moral conduct and behaviors. Individuality is not emphasized; collective and communal values are deep-seated in the culture [17]. This communality incorporates the function of caring for other persons of the same kinship, clan, or community [18]. Northern Ghana is historically a patriarchal society, and women face discrimination and inequality in society. In our study, however, women independently decided to migrate to find work to support themselves and contribute to the family income. The daughters had children very young and did not complete their schooling. They decided to leave their children with their parents while migrating to work in larger urban areas. This is an acceptable practice in northern Ghana.
The concept of family is different from the contemporary Western concept of the nuclear family: "The Dagomba and other northern ethnic groups believe that the child is a gift from God and it is the responsibility of all members of the family to bring up the child" ([19], p. 441). Parents take on the caring role for their grandchildren to keep family ties alive and to ensure that the children have the opportunity to attend school, which might be more difficult if they travelled with their mothers [19]. It does, however, place a burden on the elderly parents taking on the caregiving role. Adult children travelled back and forth from northern Ghana as the need for financial support arose. Society assumes that the financial contribution to left-behind elderly family members will be beneficial; the common view is that the family member who leaves will contribute to the household income and living standards. Contrary to this assumption, our study shows that elderly parents did not experience it as beneficial. Dungumaro similarly describes in his study in rural Tanzania that "migration has not improved household income, it has negatively impacted on migrants' families in rural areas" ([3], p. 46). Likewise, in our study, the left-behind older family members described physical, socioeconomic, and emotional challenges as a direct impact of their children working away from home. In most African cultures, gender roles are dictated, and responsibilities are assigned to each family member at different stages of their lives [20]. The participants talked about having nobody to help with activities normally assigned to women in this society, such as cooking, fetching firewood and water, and caring for their crops. Caring for the elderly and supporting them physically and financially is an important task of the female children. The absence of their children additionally affected the participants' mental health and caused sleeplessness and constant worrying.
Similarly, some quantitative studies have shown that the out-migration of adult children was highly associated with poor mental health, but not with the physical health of the elderly left behind, whereas our study portrays physical challenges as well [21]. Our findings have policy implications. Health and social care teams in Africa need to consider the values and beliefs of each individual to provide the appropriate support needed. Support systems for families need to address the negative effects of migration and improve the experiences of the elderly family members who stay behind without adequate social safety nets. Governments must provide better support systems for left-behind elderly family caregivers who take on childcare and numerous other demanding domestic duties for absent family members, but who may need care and support themselves [22]. Rural communities should consider developing supportive institutions that can help elderly family members who stay behind to "adapt to the loss of an economically active member or caregiver through migration" ([22], p. 9). On the macro level, policymakers need to attend to the reasons for migration, which in our case are mainly socioeconomic. More attention is needed to improve the local economy in northern Ghana; for example, Démurger recommends "improving the functioning of labor markets (notably in rural areas, to facilitate the hiring of local labor when a family member migrates), strengthening formal insurance and credit markets, facilitating the transmission of remittances by lowering remitting costs, and increasing access to education and health care" ([11], p. 9). The study has limitations. We mainly interviewed female members who were left behind. It was challenging to recruit male family members; some of them were either deceased or working and not available. We did not include other family members, for example co-wives or siblings.
It would be beneficial to conduct a follow-up study and interview the family as a unit. The study was conducted over a relatively short period. It is recommended to spend more time in these communities and interview participants multiple times to clarify and expand the data analysis. Conclusions New contextual knowledge was developed about the impact of self-directed female migration on the health and wellness of elderly family caregivers. This provides valuable information for the development of culturally appropriate social support and health practices for female migrants and their families, applying a social determinants of health framework. A limitation of this study was the unequal sample size of male and female participants. Males hold great authority in northern Ghana; therefore, a greater male representation could have influenced the findings. This is an important area worthy of more research. Funding: This research was funded by the University of Alberta Endowment Fund for the Future, University of Alberta, Alberta, Canada. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Combined Experimental and Simulation Studies of Cross-Linked Polymer Brushes under Shear We have studied the effect of cross-linking on the tribological behavior of polymer brushes using a combined experimental and theoretical approach. Tribological and indentation measurements on poly(glycidyl methacrylate) brushes and gels in the presence of dimethylformamide solvent were obtained by means of atomic force microscopy. To complement experiments, we have performed corresponding molecular dynamics (MD) simulations of a generic bead–spring model in the presence of explicit solvent and cross-linkers. Our study shows that cross-linking leads to an increase in friction between polymer brushes and a counter-surface. The coefficient of friction increases with increasing degree of cross-linking and decreases with increasing length of the cross-linker chains. We find that the brush-forming polymer chains in the outer layer play a significant role in reducing friction at the interface. INTRODUCTION Cross-linked polymer brushes are often termed polymer brush gels or simply gels. These polymer gels can swell in either water (hydrogels) or oil (lipogels), 1 making them highly suitable candidates for applications in the fields of drug delivery, pharmaceuticals, tissue engineering, and other biomedical applications. 2−5 Surface-grafted polymer gels can be prepared using two different methods: (i) in situ and (ii) ex situ. In the in situ method, the polymer gels are prepared by cross-linking the chains while growing them from the grafting surface, whereas in the ex situ method, polymer gels are prepared by cross-linking the chains in a subsequent step. Polymer brushes have long been studied using experimental, 6−9 theoretical, 10−14 and modeling 15−21 approaches. Polymer-brush-bearing surfaces exhibit very low friction in a good solvent. 8,22,23 Strong repulsive forces of entropic origin largely prevent the interpenetration of polymer chains grafted on opposing surfaces. 
Such forces lead to the formation of a thin fluid film between opposing brushes that assists in reducing friction. 7 Studies have been performed to examine the effect of different design parameters, such as molecular weight or chain length, 24−27 grafting density, 21,28−31 chain stiffness, 29 and solvent quality 8,32−34 on the tribological behavior of polymer brushes. There has also been interest in studying the effect of cross-linking on the shear response of polymer brushes. 4,35−42 Lin et al. 43 investigated the effect of cross-linking density and stiffness on the macroscopic behavior of a type 1 collagen gel. It was found that an increase in the cross-linking density and stiffness (of cross-linkers) leads to an increase in the stiffness of the gel, but the cross-linking density plays the dominant role. The grafted poly[styrene-b-(ethylene-co-butylene)-b-styrene] (SEBS) gel layer showed improved tribological properties (less wear and a lower friction coefficient) in comparison to the dry grafted SEBS layer and an n-octadecyltrichlorosilane self-assembled monolayer. 44 Recently, the effect of cross-linking was studied using pentaerythritol tetraacrylate as a cross-linking agent for poly(ethylene oxide) gels. 45 It was found that an increase in cross-linker concentration lowers the swelling ratio and increases tensile stress. Cross-linking is known to improve the wear behavior of polymer brushes. 35,46,47 Kobayashi et al. 48 recently showed that the macroscopic friction properties of a diamond-like carbon−silicon (DLC-Si) specimen can be significantly improved by fabricating an oleophilic cross-linked copolymer brush layer on its surface. Pan et al. 38 studied the friction properties of poly(vinyl alcohol) hydrogels against titanium alloys for biotribological applications under varying loads and shear speeds. They concluded that the effect of load on friction was more significant than that of speed.
Poly(2-hydroxyethyl methacrylate) (PHEMA) hydrogels have been of particular interest to researchers for their potential biotribological applications, and studies have been performed for different combinations of substrate and counter-surface. 4,37,49,50 Li et al. 35 studied the effect of the degree of cross-linking on the mechanical and tribological behavior of poly(acrylamide) (PAAm) brushes and hydrogels. They found that covalently cross-linked hydrogels display higher Young's moduli and coefficients of friction in comparison with surface-grafted polymer brushes, and the effect was found to increase with the degree of cross-linking. In contrast, Ishikawa et al. 51 compared the effect of mechanical properties and of chemical characteristics (polymer hydration) on the tribological behavior of hydrogels via pin-on-disk experiments and concluded that the chemical characteristics (e.g., hydration) were the dominant factors. Ohsedo et al. 50 studied the effect of the presence of well-defined polymer brushes on gel surfaces. Their study showed that longer poly(sodium 4-styrenesulfonate) (PNaSS) brushes on PHEMA gels exhibit lower friction at low sliding speeds. Dunn et al. 3 explored the distinction between a self-mated "gemini" hydrogel interface and hydrogels sliding against hard, impermeable counter-surfaces and demonstrated that gemini interfaces have very low friction coefficients, which are independent of sliding speed. On the other hand, hydrogels sliding against rigid impermeable surfaces exhibit higher friction, which is strongly dependent on sliding speed or time in contact. Thus, experimental studies have mainly focused on the role of the solvent and the effect of the degree of cross-linking on the tribological behavior of gels, but to the best of our knowledge the role of the length of the cross-linkers has not yet been studied in detail. We performed complementary experimental and simulation studies to understand the tribological behavior of polymer brushes and gels.
We characterized the tribological behavior of poly(glycidyl methacrylate) (PGMA) brushes and gel systems using a colloidal-probe-based lateral force microscopy (LFM) technique. Friction measurements were performed at various applied loads, while maintaining the sliding speed constant. Polymer brushes and gels were modeled using a multibead–spring, coarse-grained molecular-dynamics (MD) simulation technique. We compare the experimental outcome with modeling results to rationalize the effect of cross-linker chains on the frictional behavior of polymer brush gels. METHODOLOGY 2.1. Experiment. 2.1.1. Materials. Friction experiments were performed on PGMA brushes and gels in dimethylformamide (DMF). The polymers were synthesized using the surface-initiated atom-transfer radical polymerization 52 (SI-ATRP) method on a silicon surface. They are characterized by their mean molecular weight M_n = 281.7 × 10³ g/mol and a polydispersity index PDI = 1.4. The grafting density of the polymer brushes and gels is ρ_expt ≈ 0.16 nm⁻², i.e., 50 times the critical grafting density, 21 ρ* = (πR_g²)⁻¹. For details about the estimation of these characteristics for our polymer brushes and gels, see the Supporting Information. The typical procedure for SI-ATRP of glycidyl methacrylate (GMA) was as follows: 0.141 g (0.9 mmol) of bipyridine (bpy) was dissolved in a mixture of 5 mL of GMA (0.037 mol), 1 mL of H₂O, and 4 mL of methanol. The mixture underwent four freeze–pump–thaw cycles (15 min each) to remove dissolved oxygen. In the next step the mixture was transferred to another flask containing 52.8 mg of CuBr (0.37 mmol) and 4.5 mg of CuBr₂ (0.02 mmol). After stirring for 10 min at room temperature, the mixture was immediately transferred to freshly prepared, initiator-modified silicon substrates.
Polymerization was performed at room temperature for various lengths of time without stirring, after which the silicon substrates were removed from the polymerization solution and sonicated in DMF to remove weakly adsorbed polymer. PGMA brushes were cross-linked with ethane-1,2-diamine or hexane-1,6-diamine in a postmodification step. Amines can, in principle, react with the epoxypropyl groups in the PGMA in several different ways, since an amine can react with one, two, or even three epoxypropyl groups, and each end of the cross-linker could react with a different number. However, after a series of experiments (detailed in the Supporting Information), it was determined that, under the conditions used, each end of each cross-linker reacted with a single epoxypropyl group. Details of the polymer brushes and gels used in the tribological experiments are presented in Table 1. Dry thicknesses of PGMA brushes and gels were measured with a variable-angle spectroscopic ellipsometer (VASE, M-2000F, LOT Oriel GmbH, Darmstadt, Germany) at an incident angle of 70°, using a three-layer model (software WVASE32, LOT Oriel GmbH, Darmstadt, Germany), each sample being measured at three different spots. Cross-linkers of two different lengths were used to prepare PGMA gels with different degrees of cross-linking to facilitate the study of the effect of the length and degree of cross-linking on the tribological behavior of the gels. By degree of cross-linking (p) we mean

p = 2 × (no. of cross-linkers) / (no. of polymer chains × deg. of polymerization)   (1)

2.1.2. Methods. Frictional and normal forces between a silica microsphere and PGMA brushes/gels were measured in the presence of DMF solvent by means of atomic force microscopy (AFM). All the measurements were performed using an MFP 3D instrument (Asylum Inc., Santa Barbara, CA).
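The degree of cross-linking p defined above (eq 1) is straightforward to compute; a minimal sketch, with illustrative counts matching the simulated system of 50 chains of 50 beads described later:

```python
def degree_of_crosslinking(n_crosslinkers, n_chains, deg_polymerization):
    """Degree of cross-linking p (eq 1): each cross-linker ties together two
    monomers, out of n_chains chains of deg_polymerization monomers each."""
    return 2.0 * n_crosslinkers / (n_chains * deg_polymerization)

# Illustrative: 100 cross-linkers across 50 chains of 50 beads gives p = 0.08,
# i.e., an 8% degree of cross-linking (one of the simulated values).
p = degree_of_crosslinking(100, 50, 50)
```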
Asymmetric contact (i.e., brush/gel against bare microsphere) was used to obtain a measurable friction value because friction in symmetric contact (brush-against-brush contact) is so low as to be at the limit of the resolution of LFM measurements. The AFM was operated in contact mode, the lateral and normal movements of the cantilever being monitored with a laser beam, reflected off the rear of the cantilever, and detected with a four-quadrant photodiode. These normal and lateral movements of the cantilever can be quantitatively related to the normal and lateral forces acting between the cantilever tip and sample surface if the stiffness of the cantilever and the sensitivity of the photodetector with respect to the cantilever position in the respective direction are known. A nondestructive calibration procedure, the thermal noise method, 53 was used to estimate the normal stiffness of the NSC36 (MikroMasch, Tallinn, Estonia) cantilever. Sader's method 54 was used to calibrate the torsional spring constant of the cantilever. A home-built micromanipulator (attached to a BX 41 Olympus optical microscope, Japan) was used to attach the colloid particles to a tipless cantilever. In this study, silica microspheres (Kromasil, EKA Chemicals, Sweden) with a diameter d = 14 μm (for the friction experiments) or d = 10 μm (for the indentation experiments) were attached to different tipless cantilevers using a UV-curable glue (NOA 61, Norland Optical Adhesive, Cranbury, NJ) and were cured overnight under a UV lamp (9 W, Panacol-Elosol, Steinbach, Germany). The lateral sensitivity, S_L, of the AFM cantilever was estimated using the "test-probe" method 55 described by Cannara et al. In this method, a colloidal sphere is attached to the cantilever used for calibration, termed the "test cantilever". The "test cantilever" is of similar width and thickness to the cantilever used for measurements, the "target cantilever".
The diameter of the colloidal sphere used for the test cantilever, d = 80 μm, is larger than the width of the cantilever. For lateral-force measurements, 10 "friction loops" along the same line were acquired at each load. A scanning rate (n) of 1.0 Hz and a stroke length (a) of 0.5 μm were used; the applied shear speed was thus v = 2na = 1 μm/s. Both the average friction force and the standard deviation were calculated. All the friction experiments were performed at room temperature (T = 300 K). 2.2. Simulation. We investigated an explicit-solvent, multibead–spring, generic coarse-grained model by means of MD simulation. Chains were permanently grafted by one end to a planar surface. To ensure that beads do not cross the grafting surface, an additional 9/3 repulsive wall potential U_wall was used with cutoff z_c = 0.5σ. Each grafted chain within the polymer brush consisted of N Lennard-Jones (LJ) beads, linearly interconnected by finitely extensible nonlinear elastic (FENE) springs. Each chain was attached to the substrate by one of its ends using an immobile tether bead (red beads in Figure 1). The rest of the beads in each chain were free to move and interact with other polymer beads, the solvent, and the repulsive walls, confining the system to an infinitely extended parallel-plate geometry. The solvent was modeled as a simple fluid using spherical beads (brown beads in Figure 1). A solvent molecule consists of one bead that has the same Lennard-Jones diameter as a polymer bead. All the simulations were performed for the brush-against-wall system. The wall was modeled with the help of frozen arrays of repulsive LJ beads. The interaction potential of the counter-wall/surface with solvent and polymer beads in the simulation is not purely repulsive: we have used an LJ 12−6 potential with cutoff R_c = 2.5 and ε = 1.0.
Periodic boundary conditions were applied only along the lateral directions (the x and y axes of Figure 1a), which coincide with the direction of sliding. To be specific, the explicit solvent model was that employed earlier by Soddemann et al. 56 and Dimitrov et al. 32 The Lennard-Jones (LJ 12−6) potential was truncated at its minimum and shifted to some desired depth (polymer−polymer, solvent−solvent, and polymer−solvent energies ε_pp, ε_ss, and ε_ps), continuing from its minimum to zero with a potential of cosine form, thus providing a potential that is continuous and has a continuous derivative at the cutoff distance r_c,in. The parameters ε_pp = ε_ss = 0 and ε_ps = 0.4 were chosen to model good solvent conditions in the current work. We have provided details of each potential used in this work in section SVI of the Supporting Information. The temperature was kept constant by thermostatting all the beads except the tethered and explicit wall beads, explicitly rescaling their individual velocities. 29,57 We have used a profile-unbiased thermostatting (PUT) scheme. The velocity profile was calculated by computing the center-of-mass velocity of all beads residing in layers parallel to the grafting surface. The center-of-mass velocity of each layer was used to define the "bias velocity", which was subtracted from the velocities of individual beads to calculate their thermal velocities. These were rescaled to the desired value, and subsequently the bias velocity was added back. The temperature was maintained constant at T = 1.2 using this profile-unbiased thermostat for all the simulation work in this article. Details of the generation of the cross-linked polymer brush were discussed in our previous work.
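The layer-wise rescaling described above can be sketched as follows. This is a minimal NumPy version under simplifying assumptions (equal bead masses, k_B = 1 in LJ units, and no correction of the degree-of-freedom count for the subtracted layer momentum); the layer count and target temperature are free parameters:

```python
import numpy as np

def put_rescale(z, vel, box_height, n_layers, t_target, mass=1.0):
    """One profile-unbiased thermostat (PUT) step: per layer parallel to the
    grafting surface, subtract the streaming (center-of-mass) velocity,
    rescale the remaining thermal velocities to t_target, then add the
    streaming velocity back."""
    layer = np.minimum((z / box_height * n_layers).astype(int), n_layers - 1)
    vel = vel.copy()
    for l in range(n_layers):
        idx = layer == l
        if idx.sum() < 2:
            continue  # too few beads to define a thermal temperature
        v_bias = vel[idx].mean(axis=0)           # layer streaming velocity
        v_th = vel[idx] - v_bias                 # thermal velocities
        t_now = mass * (v_th ** 2).sum() / (3 * idx.sum())
        vel[idx] = v_bias + v_th * np.sqrt(t_target / t_now)
    return vel
```

After the step, the thermal temperature of each layer equals the target while the streaming profile is untouched, which is the point of the "profile-unbiased" construction.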
58 For bonding within cross-linker chains and for bonds between cross-linkers and polymer beads that are part of the brush, we have used a harmonic bond potential,

E(r) = K_H (r − r_0)²

Here K_H is the spring coefficient determining the bond stiffness, r_0 is the equilibrium bond length, and r is the distance between the two bonded beads at any given time. We have used K_H = 100 and r_0 = 1 to model rather stiff cross-linker bonds. The harmonic bond potential we use does not strictly prevent bond crossing, but bond crossing does not occur in practice for the chosen parameters, as described in Supporting Information section SV. All simulated quantities reported in this study are given in LJ units. 59 The cross-linked polymer brush system was generated for different numbers of cross-linkers (denoted N_cross) with a fixed contour length of the cross-linker chains (L_cross), and vice versa. Figure 1b shows the explicit cross-linkers. L_cross = 1 corresponds to monomers of different chains bonded directly by a cross-linker, while L_cross = 2 represents a single interior bead that is bonded to two beads in the respective chains to be cross-linked. The degrees of cross-linking (p) used in the simulation work are p = 0, 4, 8, and 16%, as defined in eq 1. For our simulations, we have used LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). 60 We have performed simulations for the brush-against-wall model system described in Figure 1. We note that the simulations were performed at fixed separation distances D (while measuring the load), whereas the experiments were performed under prescribed normal load (implying a separation distance D). The simulations were performed on randomly grafted polymer chains on flat surfaces. The system consists of M = 50 chains on the tethering surface, each linear chain composed of N = 50 beads.
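The harmonic bond above is easy to sketch numerically; note that this convention, as in LAMMPS's harmonic bond style, absorbs the usual factor 1/2 into K_H:

```python
def harmonic_bond_energy(r, k_h=100.0, r0=1.0):
    """Harmonic bond energy E(r) = K_H (r - r_0)^2 in LJ units (K_H absorbs
    the conventional 1/2 prefactor, as in LAMMPS's harmonic bond style)."""
    return k_h * (r - r0) ** 2

def harmonic_bond_force(r, k_h=100.0, r0=1.0):
    """Restoring force magnitude along the bond, F = -dE/dr = -2 K_H (r - r_0)."""
    return -2.0 * k_h * (r - r0)
```

With K_H = 100 and r_0 = 1, stretching a bond by 10% already costs one LJ energy unit, which is what makes these cross-linker bonds "rather stiff".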
As mentioned in section 2.1.1 (see also Supporting Information section SIII), the critical grafting density 21 for such a polymer brush is ρ* = (πR_g²)⁻¹. We have considered grafting densities well within the brush regime, ρ = 0.075 (∼7ρ*). We have not considered additional bending stiffness of the chains in the current work; i.e., the simulations were performed on flexible, excluded-volume chains. The total number of beads in the simulation box was chosen such that the number density of beads was maintained at a typical value of ∼0.8 at each separation between the grafting surface and counter-wall. It can be seen that PGMA brushes on silicon surfaces in DMF reduce friction significantly when compared to bare silicon surfaces. The friction force was found to be higher for PGMA gels (i.e., with cross-linking) than for PGMA brushes. A monotonic increase in friction force is observed upon increasing the degree of cross-linking for gels with C2 cross-linkers. At a 5% degree of cross-linking the friction force remains close to that for un-cross-linked brushes. At a 50% degree of cross-linking, the friction force is higher and even exceeds that of the bare silicon surface. The observed higher friction (in comparison to a bare silicon surface) can be attributed to an increase in contact area between the colloidal sphere and the gel. Friction is also found to increase with the degree of cross-linking for gels made with C6 cross-linkers. At a 3% degree of cross-linking, the friction force is only slightly larger than that measured on (non-cross-linked) PGMA brushes. At an 18% degree of cross-linking, friction is notably greater than that on (non-cross-linked) PGMA brushes and on PGMA gels with a 3% degree of cross-linking. With a further increase in the degree of cross-linking to 36%, no significant further increase in friction is observed compared to the results obtained with an 18% degree of cross-linking.
Similar experiments were performed at a shear velocity of 5 μm/s (Supporting Information section SVII). A scanning rate (n) of 1.0 Hz and a stroke length (a) of 2.5 μm were used; the applied shear speed was thus v = 2na = 5 μm/s. The friction coefficient was found to increase with increasing shear speed for all the systems, but the overall trend in terms of the effect of cross-linking was found to be very similar. The polymer brushes and gels in our experiments underwent sliding and were not simply deformed. The friction force versus normal load curves show a linear relationship. The coefficient of friction can thus be extracted from the slope by linear-regression fitting. The obtained values for the coefficient of friction will be discussed in detail in section 3.3. 3.1.2. Atomic Force Microscopy (AFM)-Based Nanoindentation. AFM-based nanoindentation was employed to study the effect of cross-linking on the mechanical behavior of PGMA brushes and gels. The brushes and gels in DMF were indented with an AFM cantilever bearing a silica sphere of 10 μm diameter. The applied load (force) against penetration depth is presented in Figure 3. Figures 3a and 3b show the applied load against indentation depth for PGMA gels with different degrees of cross-linking for C2 and C6 cross-linkers, respectively. A change in the slope of the force-versus-depth curve occurs at the depth where the AFM cantilever begins to be noticeably influenced by the substrate; the steep part is caused by a substrate effect (the substrate is close, and the brush appears stiffer). In general, the substrate influence begins to be felt at around 10% indentation of the unperturbed brush height. 61,62 Hence, we can approximate the height of the PGMA brushes and gels by the penetration depth before this sudden change in the indentation force.
With C2 cross-linkers, as the degree of cross-linking increases from 5% to 50%, the substrate effect appears at a smaller depth, which indicates a decrease in the swelling ratio with increasing degree of cross-linking. The indentation curves for PGMA brushes and PGMA gels with 5% cross-linking are similar, as are the friction forces measured by LFM (cf. Figure 2a). The plausible decrease in swelling ratio with an increase in the degree of cross-linking could explain the increase in friction force: with increasing degree of cross-linking, there are fewer brush-forming chains available in the outer film layer, which are responsible for the low-friction behavior in polymer-brush-based lubrication. 9,23,35 The indentation curves for PGMA gels with C6 cross-linkers also reflect the tribological behavior of the gels observed in the LFM experiments. At a degree of cross-linking of 3%, the substrate effect is already significant at penetration depths above 30 nm (implying a decrease in swelling ratio compared to PGMA brushes), which correlates with the increase in the coefficient of friction. As the degree of cross-linking is increased to 18%, there is a further decrease in the swelling ratio, and an increase in the coefficient of friction was observed (Figure 2b). Upon further increasing the degree of cross-linking to 36%, there is no significant change in the indentation behavior anymore; similarly, we did not observe any significant change in the coefficient of friction.
Figure 3. Applied force against penetration depth measured by colloidal-probe atomic force microscopy with a 10 μm silica sphere glued to a tipless cantilever (0.6 N/m stiffness) for (a) PGMA gels with C2 cross-linkers and (b) PGMA gels with C6 cross-linkers. % values denote the degree of cross-linking in each system (as for Figure 2).
3.2. MD Simulation. 3.2.1. Equilibrium Molecular Dynamics Simulation. We equilibrated the polymer brush/gel against wall system at different separations D between the
graft and the counter-wall surface (see Figure 1a). A reduction of the separation distance by 1 (LJ unit) was achieved as follows: a number of solvent beads was randomly removed from the system to ensure the same number density of 0.8 at the new separation distance. The grafting surface was kept fixed, and the counter-wall was moved toward the grafting surface with a constant velocity v = 0.01 for a duration of 10⁵ steps at an integration time step Δt = 0.001. At each separation D between the polymer-chain-bearing surface and counter-wall, the polymer brush/gel system was allowed to equilibrate for 3 × 10⁶ time steps (10⁶ steps at Δt = 0.001, followed by 2 × 10⁶ steps at Δt = 0.0025). Figure 4 shows the number-density profiles of polymer beads versus the z position measured from the grafting surface. Upon inspection of the density profiles, the systems with shorter cross-linkers show a decrease in brush height with increasing degree of cross-linking, and more polymer density is accumulated at the grafting surface. There is hence a lower polymer concentration toward the outer layer of grafted chains to assist in brush-mediated lubrication. 9,63 AFM-based indentation experiments (Figure 3) show that the wet thickness decreases with increasing degree of cross-linking; the simulation observations are in complete agreement with the experiments. 3.2.2. Nonequilibrium Molecular Dynamics Simulation (NEMD). The equilibrated systems at different separations (D) were used to run nonequilibrium MD (NEMD) simulations. Steady shear was applied by moving the tethered beads with the prescribed velocity, keeping the separation between the walls constant during each run at a given shear velocity. 20,58 At each separation and velocity, the stress tensor was calculated using the Irving−Kirkwood expression.
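A number-density profile of the kind shown in Figure 4 is just a histogram of bead heights normalized by bin volume; a minimal sketch (the bin count is a free choice):

```python
import numpy as np

def density_profile(z, area, z_max, n_bins=100):
    """Number-density profile n(z) of beads above the grafting surface:
    histogram of bead heights z divided by the bin volume (area * bin width)."""
    counts, edges = np.histogram(z, bins=n_bins, range=(0.0, z_max))
    dz = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (area * dz)
```

Averaging such profiles over many snapshots, and comparing where n(z) decays toward zero, gives the brush-height trend discussed in the text.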
59,64 The NEMD studies were performed at a fixed shear velocity v = 0.001 applied to the tethered beads at different separations between the explicit wall and the polymer-bearing surface. At each separation, the normal and shear stresses acting on the brush and cross-linkers were calculated for different combinations of lengths and numbers of cross-linkers to study the effect of cross-linking on the frictional behavior of model polymer brushes. The simulations were run for 3 × 10⁷ integration steps, where data for the first 10⁷ steps at time step Δt = 0.002 were discarded to allow the system to reach steady state. Data for the subsequent 2 × 10⁷ steps at Δt = 0.0025 were recorded and analyzed. Simulations at each separation (D) were repeated for 10 different initial configurations of randomly grafted polymer chains, and mean values from these runs are reported with error bars calculated from the corresponding standard deviations. Figure 5 shows the results on the effect of the degree of cross-linking on polymer brushes for systems having cross-linkers of length L_cross = 1 and L_cross = 2. In particular, Figures 5a and 5b display normal stress versus distance curves for systems with L_cross = 1 and L_cross = 2 cross-linkers, respectively. It can be seen that the normal stress increases as the separation (D) between the grafting surface and counter-wall surface decreases for all the systems. For systems with L_cross = 1 cross-linkers the normal stress was found to decrease with increasing degree of cross-linking at all separations. The decrease in normal stress with increasing degree of cross-linking can be explained with the help of the density profile curve (Figure 4a). The brush height decreases with increasing degree of cross-linking; therefore, less deformation is felt in brushes with a higher degree of cross-linking at the same separation between the wall and the polymer-bearing surface.
This results in a decrease of the normal stress at the same separation with increasing degree of cross-linking. For the system with L_cross = 2 cross-linkers, the normal stress was found to be similar at different degrees of cross-linking and lower in comparison to the un-cross-linked system at all separations. This can be explained by the similar density profiles for systems with different degrees of cross-linking. Figures 5c and 5d show the shear stress versus separation distance for systems with L_cross = 1 and L_cross = 2 cross-linkers, respectively. We observe an increase in shear stress as the separation D between the grafting surface and counter-wall surface decreases for all the systems. We also notice an increase in shear stress with increasing degree of cross-linking at all separations. This increase in shear stress is found to be quite similar for L_cross = 1 and L_cross = 2. Figures 5e and 5f show a parametric plot of shear against normal stress for different separation distances D for systems with L_cross = 1 and L_cross = 2 cross-linkers, respectively. The shear stress for all the cross-linked systems is found to be higher than that of the un-cross-linked system at a given normal stress. We also find an increase in shear stress with increasing degree of cross-linking at all normal stresses for systems with L_cross = 1 and L_cross = 2 cross-linkers. These observations can be rationalized as follows: cross-linking leads to an interdependent motion of the cross-linked grafted chains under shear, resulting in an increase in the shear stress for all the cross-linked systems when compared to un-cross-linked polymer brush systems. Under shear, the un-cross-linked systems are deformed more easily than a cross-linked network of polymer brushes. 36 An increase in the degree of cross-linking leads to more chains moving interdependently under shear. We therefore find an increase in friction upon increasing the degree of cross-linking. 3.3.
Comparison between Simulation and Experimental Results. We are now in a position to attempt a qualitative comparison of the experimental and simulation results. We compare these studies in terms of the coefficient of friction (CoF), which is a frequently used quantity to characterize the tribological behavior of surfaces (Figure 6). To compare flow conditions between experiment and simulation, the dimensionless Weissenberg number (Wi = γ̇τ_Rex, with shear rate γ̇ and relaxation time τ_Rex) is typically used. Under the experimental and simulation conditions used in our study, the Wi numbers have comparable values, as demonstrated in section SIV of the Supporting Information. Our simulations and experiments are located in the boundary-lubrication regime. Friction forces arise due to the interactions among wall, solvent, and polymer beads. We have calculated the coefficient of friction from the slope of the friction force against normal force; the presented results for the coefficient of friction are therefore unaffected by adhesion between wall and polymer brush. The interaction potential between wall and polymer beads in the simulation is not purely repulsive, as mentioned already (section 2.2). It is important to note, however, that the overall interaction between brush and wall can be considered repulsive: an attractive van der Waals force is present between brush and wall, which reduces the overall repulsion but does not lead to an overall attractive interaction. The van der Waals interactions between polymer brushes and surfaces are considered as "bridging forces" and can be specific or nonspecific. Israelachvili 65 explained in detail various attractive "intersegment", "bridging", and "depletion" forces acting between polymers and counter-surfaces. Under suitable conditions, "bridging forces" can lead to an overall attractive force. For the experiments, a straight line was fitted to the friction-force-versus-normal-load curve in Figure 2.
The coefficient of friction is defined by the corresponding slope. Figure 6a shows the resulting CoF as a function of the degree of cross-linking measured by lateral force microscopy at a shear speed of v = 1 μm/s for different lengths of cross-linkers. We see an increase in friction force with speed for both cross-linker lengths studied here, which translates into an increase in CoF (not shown). We also find an increase in CoF with increasing degree of cross-linking (similar to ref 35) for both cross-linker lengths studied, while the CoF does not change significantly beyond a degree of cross-linking of 18% for C 6 cross-linkers. The coefficient of friction was found to be similar for C 2 and C 6 cross-linkers at lower degrees of cross-linking; at a higher degree of cross-linking, the friction was found to be lower for the gel with longer cross-linkers. For the simulations, the coefficient of friction was estimated from the slope of the shear-stress-versus-normal-stress curves from the initiation of deformation (D < 24) of polymer brushes and gels. The shear-stress-versus-normal-stress curve in this regime is predominantly linear, and a linear curve was fitted taking into account the error at each point in the curve. 66 Figure 6b shows the coefficient of friction versus the degree of cross-linking for different lengths of cross-linkers, as obtained from our simulations. In qualitative agreement with the experiments, the CoF for all the cross-linked systems is found to be higher than that of the un-cross-linked system. The coefficient of friction was also found to increase with the degree of cross-linking for systems having different lengths of cross-linkers, in a very similar manner as observed in the experiments. Similar observations were made in the experimental results of Li et al. 35 where the coefficient of friction was found to increase with increasing cross-linker content in PAAm hydrogel brushes.
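Two quantities from this comparison are easy to make concrete: the Weissenberg number is simply the product of shear rate and relaxation time, and the CoF is the slope of an error-weighted linear fit of shear stress against normal stress. The sketch below uses synthetic numbers; all values are illustrative assumptions, not data from this study:

```python
import numpy as np

def weissenberg(shear_rate, relaxation_time):
    """Wi = (shear rate) x (chain relaxation time); comparable Wi implies
    comparable flow conditions between experiment and simulation."""
    return shear_rate * relaxation_time

# Synthetic shear/normal stress data with a known slope (CoF) of 0.3
normal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
shear = 0.3 * normal + 0.05
errors = np.array([0.01, 0.01, 0.02, 0.02, 0.03])  # per-point uncertainty

# Error-weighted least squares: weight each point by 1/sigma
slope, intercept = np.polyfit(normal, shear, deg=1, w=1.0 / errors)
cof = slope  # coefficient of friction
```

Because the adhesive offset is fitted separately as the intercept, a constant attractive contribution to the load does not change the extracted CoF, which is the point made in the text above.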
At a sufficiently high degree of cross-linking, experiments and simulations both show that shorter cross-linker lengths lead to larger values of the CoF. This effect vanishes or is unclear at low degrees of cross-linking. The cross-linkers tend to restrict the configurational space of the chains, so that energetic effects become more relevant; this effect increases with decreasing cross-linker length and increasing degree of cross-linking. In the presence of cross-linkers, the brush thus adopts a more compact density profile (Figure 4), which tends to resist sliding. As a result, the coefficient of friction increases with increasing degree of cross-linking.

CONCLUSIONS

Experimental and simulation studies were performed to clarify the effect of cross-linking on the tribological behavior of polymer brushes. The tribological experiments on PGMA brushes and gels in DMF solvent were performed against silica microspheres using the LFM technique. The PGMA brushes showed a remarkable decrease in friction forces when compared to bare silicon surfaces. We also observed a general increase in friction with cross-linking for PGMA brushes in DMF. An increase in the coefficient of friction was observed with increasing degree of cross-linking, and a decreasing coefficient of friction was observed with increasing length of cross-linkers beyond a certain degree of cross-linking. AFM-based indentation of PGMA brushes and gels in DMF solvent showed a decrease in their swelling ratio with increasing degree of cross-linking, which can very well explain the tribological response of gels at different degrees of cross-linking for different lengths of cross-linkers. Cross-linked polymer brushes were successfully modeled using the coarse-grained MD technique. The tribological behavior of cross-linked polymer brushes under shear has been qualitatively compared with that of un-cross-linked polymer brushes and also with our experimental data.
Simulations were performed at a constant shear velocity at different separations in the presence of explicit solvent beads. Results were presented in the form of shear stress versus normal stress, and the coefficients of friction were calculated from the slopes of the shear-stress-versus-normal-stress curves. The trends were consistent with the experimental observations: an increase in the coefficient of friction with increasing degree of cross-linking and a decrease in the coefficient of friction with increasing cross-linker length. We were able to explain these findings with the help of simulated density profiles. As the degree of cross-linking increases, the polymer concentration in the outer layer that can participate in brush-assisted lubrication is reduced. In addition, cross-linked polymer brushes are more resistant to shear compared to their non-cross-linked counterparts. We did not attempt to match the shear speeds to achieve a better quantitative agreement between experiments and simulations; rather, the present simulations aim to study the underlying effects seen in the experiments on a more qualitative level. This work can be extended by performing studies over a wider range of degrees of cross-linking for various lengths of cross-linkers to gain a better understanding of the influence of the length of cross-linkers on the mechanical behavior of gels under shear.

Supporting Information

The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.macromol.8b01363: estimated characteristics of the experimentally studied polymer brushes, possible reaction routes between cross-linkers and polymer brushes, comparison of graft density between experiment and simulation, estimation of the Weissenberg number under experimental and simulation conditions, possibility of bond crossing in the harmonic bond used for cross-linking, potentials used in simulation, and friction force against normal load at speed 5 μm/s (PDF).
Evidence and Cost Effectiveness Requirements for Recommending New Biomarkers. The literature is full of new biomarkers which are claimed to add to the laboratory repertoire in a range of conditions. The literature is often confusing and may be contradictory. The past 20 years is littered with publications claiming the next big thing in a biomarker, some of which have been implemented on high throughput laboratory platforms. The number of novel biomarkers which have reached widespread clinical acceptance and implementation is relatively small. How can the laboratory community realistically assess claims for new markers? There is, to date, no completely defined set of criteria which should be used. However, there are some common themes in biomarker assessment. The two major areas which need to be considered are the evidence required to assess test performance and cost effectiveness.

IS THIS TEST 'APT'?

Assessment of test performance can be broadly considered under three categories, Analytical suitability, Plausibility and Treatment effectiveness: is the test APT.
Analytical suitability means an assessment of the evidence-based analytical performance of the assay. This will include at least the following. Pre-analytical factors that will affect the test must be well understood before a test can be put into routine clinical practice, including the collection conditions required, anticoagulant requirements, pre-analytical sample handling factors and stability in storage. A marker needs to be measurable in the routine clinical laboratory without the need for special handling conditions if it is to form part of the routine work-up of the patient. Tests requiring complex pre-analytical steps are tolerated by the laboratory rather than embraced; often there is no alternative, and the test is confined to special circumstances and particular patient types which are usually rare. A test in the clinical routine which will be ordered in large numbers requires simplicity of laboratory handling. A recent example is the measurement of soluble CD40 ligand (sCD40L), a marker of platelet activation. Measurement of sCD40L was shown to be a powerful predictor of mortality in patients with unstable angina. In addition, it was shown to be a predictor of a successful therapeutic response to the anti-glycoprotein IIb/IIIa antagonist abciximab (1). These studies were done using serum as matrix. It was subsequently found that clotting releases significant but variable amounts of sCD40L. Studies demonstrated that the release of sCD40L was critically affected by sample handling and the assay utilised for measurement (2). Only EDTA plasma could be used, and values were significantly affected by delay in sample processing (3,4). Finally, it was shown that sCD40L was primarily produced by in vitro platelet activation (5), and the first use of a commercial assay failed to confirm the promise of the initial publication (6). Analytical performance of the test also needs to be appropriate for clinical use.
Bodies such as the Clinical and Laboratory Standards Institute produce protocols for the routine assessment of limit of blank, limit of detection and imprecision profile. It is also important that these analytical performance measures are independently assessed and that laboratories do not rely on the manufacturers' datasheets as the sole source of this information. Assay imprecision has a profound influence on the ability to define the 99th percentile and on the value of the relative change required for two consecutive measurements to be reliably different. It is an interesting observation that the redefinition of myocardial infarction (7-9) considers a 10% imprecision to be adequate at the 99th percentile but also recommended a 20% change in values. Unfortunately, if the data is modelled it is apparent that an imprecision rather less than 10% is required to reliably detect a 20% change (http://www.westgard.com/troponininterpretations.htm). In addition to the ability to measure the biomarker with precision and accuracy, the analysis must be simple and have a rapid turnaround time. Ideally it should be implemented on existing laboratory equipment rather than requiring additional apparatus; in practice this means that a colorimetric or, more likely, an immunoassay for the marker is available. Population aspects of the test also need to be understood: in particular, the influence of age, gender, ethnicity and comorbid conditions on the reference interval needs to be considered. These can be quite subtle. Occult comorbid conditions profoundly influence the reference interval for cardiac troponin but can only be unmasked by the use of rigorous patient selection including cardiac imaging (10,11). The need for appropriate patient selection for troponin reference intervals has been the subject of discussion and recommendations made (12,13). The plausibility of the biomarker for the putative clinical role also needs to be established.
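Returning briefly to the imprecision point above: the claim that rather less than 10% imprecision is needed to reliably detect a 20% change follows from the standard reference change value (RCV) formula, RCV = √2 · z · CV (analytical variation only, z = 1.96 for 95% confidence). The sketch below is my own illustration of that calculation, not the exact model at the cited link:

```python
import math

Z95 = 1.96  # two-sided 95% confidence

def rcv(cv_analytical):
    """Reference change value (%): smallest difference between two
    consecutive results that exceeds analytical variation alone."""
    return math.sqrt(2) * Z95 * cv_analytical

# At 10% imprecision a ~28% change is needed, so a 20% cutoff is unreliable:
print(round(rcv(10.0), 1))                    # 27.7

# Imprecision at which a 20% change becomes reliably detectable:
print(round(20.0 / (math.sqrt(2) * Z95), 1))  # 7.2
```

So a CV of roughly 7% or better, not 10%, is needed before a 20% delta can be interpreted as a real change.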
The pathobiology of the biomarker needs to be understood. This means an understanding of the genesis of the biomarker and of the relationship of the biomarker to the medical condition of interest. A good example of this is ischaemia modified albumin (IMA). The concept of a biomarker of ischaemia is very attractive. Ischaemia would be detected prior to necrosis (we have excellent markers for this in the cardiac troponins) allowing intervention to abort the pathophysiology before irreversible cardiac injury occurs. The background concept of IMA was that the N terminus of albumin was altered during an ischaemic event resulting in the loss of the ability to bind transition metals. This was detectable by loss of the ability to bind cobalt, which could be determined by a simple colorimetric reaction (14). Preliminary studies using angioplasty as a model of human myocardial ischaemia showed that IMA increased after balloon inflation then returned rapidly to baseline levels, supporting the role as a biomarker of ischaemia (15,16). Subsequently, sequencing of the N terminus of IMA positive albumin showed that the N-terminal amino acid sequence was not removed (17). Physicochemical studies suggested that it was the binding of free fatty acids to albumin that induced a conformational change that reduced transition metal binding (18). A lack of fundamental understanding of the biomarker was therefore apparent and contributed to the lack of any clinical application (19). Plausibility also includes the clinical plausibility for the putative clinical role. This means that the biomarker must have appropriate sensitivity and specificity to detect the medical condition of interest in clinically appropriate populations where the test will actually be used in routine clinical practice. Many studies on biomarkers have evaluated them in clinical trial sample banks or alternatively in highly selected patient groups. 
This does not constitute an appropriate environment to evaluate test performance as disease prevalence is inappropriately high, often close to 100%. Such studies allow proof of concept that needs to be followed up by prospective evaluation in clinically representative populations. Comparison of a sensitive with a less sensitive troponin assay clearly shows earlier diagnostic sensitivity (20), as would be expected. Early studies of the new high sensitivity assays showed excellent analytical performance but compared them with the conventional assays and included patients with ST segment elevation in the evaluation (21,22), overstating the diagnostic performance of the assays. Finally, treatment effectiveness must be demonstrated: the test result should lead to a therapeutic intervention or to a change in the management pathway, such as more prompt hospital discharge or admission to an appropriate level of clinical care. The questions which should pass through the laboratory practitioner's mind are shown in Table 1 below.

Table 1. Key questions for evaluating the evidence base for clinical use
- Have there been independent studies?
- Has there been a multicentre study?
- Is there meta-analysis of evidence?
- Has there been an RCT?
- Can I measure it in the routine lab without additional equipment and staff?

An example of a randomised controlled trial of a diagnostic test is the Randomised Assessment of Treatment using Panel Assays of Cardiac markers (RATPAC) trial (23). This was a pragmatic randomised controlled trial which compared two treatment strategies: conventional management versus measurement, on admission and at 90 minutes, of a panel of cardiac troponin I, creatine kinase MB and myoglobin by point of care testing. The outcome measure was the proportion of patients discharged, or with a decision to discharge, within four hours of attendance with no adverse events during the following three months.
Randomisation to the point of care arm of the study was reflected in increased successful discharge and no change in the frequency of adverse events. There was increased use of coronary care in the point of care arm. One of the most interesting aspects of this study was the significant differences between the six different sites, with only two showing very large differences in length of stay in those randomised to the point of care arm (24). It highlights the importance of process within the utilisation of test results. Simple provision of rapid results will be ineffective unless it is accompanied by a treatment decision.

IS THIS TEST COST EFFECTIVE?

Cost effectiveness considers the impact on health care resource utilisation and how we assess it. Cost effectiveness can be considered under four categories, as shown in Table 2 below. It should be noted however that the terminology is often mixed.

Table 2. Cost effectiveness categories
Type                          Measurement and valuation of consequences
Cost minimisation analysis    No measurement; consequences assumed or shown to be equivalent
Cost effectiveness analysis   Natural units (life years gained)
Cost utility analysis         Health state preference values (quality adjusted life years gained)
Cost benefit analysis         Monetary gains

Cost minimisation analysis is the most straightforward. It assumes that the consequences of the two interventions being compared are identical, so the analysis reduces to the comparison of costs alone. An example would be the diagnosis of acute myocardial infarction using cardiac troponin (cTn) compared to the measurement of creatine kinase MB isoenzyme (CK-MB). If the assumption is that CK-MB costs 20 currency units (CU) and cTn 30 CU, then a protocol involving three-hourly CK-MB measurements for 12 hours (total cost 80 CU) will be more expensive than a protocol measuring cTn on admission and 12 hours from admission (total cost 60 CU). In cost effectiveness analysis differences can be expressed in terms of changes in one main parameter; the differences in costs are related to the main differences in events. An example of this type of analysis is the use of measurement of B type natriuretic peptide (BNP) in patients with suspected chronic heart failure. The basic premise is that two pathways are compared: direct referral for hospital assessment of patients with suspected heart failure, and referral only of those with an elevated BNP. A simple analysis compares costs at the pathway level, where the cost of echocardiography on all patients is compared with the combined cost of BNP measurement followed by echocardiography only in those with BNP levels above a certain designated threshold. This is effectively a cost minimisation analysis and shows that the BNP-based pathway is cheaper (25). A more sophisticated approach utilising a sequential testing strategy modelled on individual patient data meta-analysis was performed as part of a health technology assessment informing the National Institute for Health and Care Excellence (NICE) guidelines on BNP testing. This modelling produced very similar results to the cost minimisation model. Cost effectiveness was driven by the prior probability of disease and favoured BNP measurement as the first test (as in the strategy discussed above) unless the probability of heart failure was very high (26). Cost utility analysis typically utilises the quality adjusted life year (QALY). A QALY takes into account longevity and quality of life. The number of QALYs accrued by a patient is estimated by multiplying the years of survival by quality of life measured on a scale from zero (equivalent to death) to 1 (perfect health). States of health below zero are possible for a health state considered worse than death. QALYs have the advantage of allowing comparison between any healthcare interventions that can influence survival or quality of life.
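The cost minimisation and cost utility calculations described above reduce to simple arithmetic. The sketch below reuses the CK-MB/cTn figures from the text; the QALY numbers, the incremental cost, and the £20,000 willingness-to-pay threshold are my own illustrative values:

```python
# Cost minimisation: CK-MB every 3 h for 12 h vs cTn on admission and at 12 h
ckmb_total = 4 * 20   # four measurements at 20 CU each -> 80 CU
ctn_total = 2 * 30    # two measurements at 30 CU each -> 60 CU
assert ctn_total < ckmb_total  # the cTn protocol is the cheaper strategy

# Cost utility: QALYs = years of survival x quality-of-life weight (0 to 1)
def qalys(years, quality):
    return years * quality

# Illustrative incremental comparison against a GBP 20,000/QALY threshold
extra_cost = 3000.0                               # added cost of new strategy
extra_qalys = qalys(10, 0.80) - qalys(10, 0.75)   # 0.5 QALYs gained
cost_per_qaly = extra_cost / extra_qalys          # 6000 GBP per QALY
assert cost_per_qaly < 20000   # below the willingness-to-pay threshold
```

The decision rule is simply whether the incremental cost per QALY falls below the willingness-to-pay threshold in use.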
Analysis is based on willingness to pay (cost per QALY), with a typical threshold of £20,000 in the UK. An example would be comparison of the cost effectiveness of measurement of high sensitivity troponin on admission versus conventional troponin management at 10 hours (27). Such a study shows that high sensitivity troponin measurement on admission is superior to conventional troponin measurement and that measurement on admission and at three hours is the most sensitive approach. Measurement of conventional troponin at 10 hours is only cost effective if an immediate decision to discharge is made, highlighting again the importance of process in the application of laboratory testing. One problem with cost effectiveness analysis in diagnostics is that the data is often inadequate or even non-existent. Modelling approaches are typically used, but the accuracy of the cost modelling is often challenging, though mitigated by sensitivity analysis (changing a model parameter and looking at the impact; a large change suggests that the modelling is not robust). Very small differences in QALYs may be present. A systematic attempt to evaluate the evidence for diagnostics including laboratory testing is used by the Diagnostics Assessment Committee of NICE. They utilise a systematic evidence-based review followed by cost economic modelling. The recommendations and their evidence base can be found on the NICE website (www.nice.org.uk) and in the publications of the UK health technology assessment programme, all available online. Examples are the recent recommendations for the use of faecal calprotectin (www.nice.org.uk/guidance/dg11) and the accompanying evidence report (28).

CONCLUSIONS

In conclusion, assessment of test suitability is a combination of the traditional laboratory attributes of analytical performance combined with other features.
The underlying scientific validity of the test needs to be understood and the diagnostic utility demonstrated in appropriate populations, to show the test is plausible. Finally, the test result must produce a treatment change. All of these attributes, Analytical suitability, Plausibility and Treatment effectiveness, will make a test APT. But a clinically APT test also needs to be cost effective; conversely, unless a test has been shown to be APT, the probability of demonstrating cost effectiveness is small. The challenge for the laboratory is to work together with clinicians to develop test evaluation strategies that will allow demonstration of all the attributes, to show that the test is both APT and cost effective.
On the sign-imbalance of skew partition shapes. Let the sign of a skew standard Young tableau be the sign of the permutation you get by reading it row by row from left to right, like a book. We examine how the sign property is transferred by the skew Robinson-Schensted correspondence invented by Sagan and Stanley. The result is a remarkably simple generalization of the ordinary non-skew formula. The sum of the signs of all standard tableaux on a given skew shape is the sign-imbalance of that shape. We generalize previous results on the sign-imbalance of ordinary partition shapes to skew ones.

Introduction

A labelled poset (P, ω) is an n-element poset P with a bijection ω : P → [n] = {1, 2, …, n} called the labelling of P. A linear extension of P is an order-preserving bijection f : P → [n]. It is natural to define the sign of f as −1 to the power of the number of inversions with respect to the labelling, i.e., pairs x, y ∈ P such that ω(x) < ω(y) and f(x) > f(y). The sign-imbalance I_{P,ω} of (P, ω) is the sum of the signs of all linear extensions of P. Note that I_{P,ω} is independent of the labelling ω up to sign. In this paper we will mainly discuss the square of sign-imbalances, and then we may drop the ω and write I_P^2 = I_{P,ω}^2. If I_P^2 = 0 the poset is sign-balanced. Such posets have been studied since 1989 by F. Ruskey [4], [5], R. Stanley [12], and D. White [13]. It is a vast subject however, and most of the work has been devoted to a certain class of posets: the partition shapes (or Young diagrams). Though no one so far has been able to completely characterize the sign-balanced partition shapes, this research direction has offered a lot of interesting results. Many people have studied the more general notion of sign-imbalance of partition shapes, among those T. Lam [2], A. Reifegerste [3], J. Sjöstrand [9], M. Shimozono and D. White [8], R. Stanley [12], and D. White [13].
Young tableaux play a central role in the theory of symmetric functions (see [1]) and there are lots of useful tools for working with them that are not applicable to general posets. One outstanding tool is the Robinson-Schensted correspondence, which has produced nice results also in the field of sign-imbalance; see [9], [3], and [8]. As suggested in [9], a natural step from partition shapes towards more general posets would be to study skew partition shapes. They have the advantage of being surrounded by a well-known algebraic and combinatorial machinery just like the ordinary shapes, and possibly they might shed some light on the sign-imbalance of the latter ones as well. We will use a generalization of the Robinson-Schensted algorithm for skew tableaux by B. Sagan and R. Stanley [6]. In a recent paper [10, Theorems 4.3 and 5.7] E. Soprunova and F. Sottile show that |I_{P,ω}| is a lower bound for the number of real solutions to certain polynomial systems. Theorem 6.4 in [10] says that |I_{P,ω}| is the characteristic of the Wronski projection on certain projective varieties associated with P. When P is a skew partition shape this is applicable to skew Schubert varieties in Grassmannians (Richardson varieties).

An outline of this paper:
• After some basic definitions in section 2, in section 3 we briefly recall Sagan and Stanley's skew RS-correspondence from [6].
• In section 4 we state our main results without proofs and examine their connection to old results.
• In sections 5 and 6 we prove our main theorems through a straightforward but technical analysis.
• In section 7 we examine a couple of interesting corollaries to our main results. One corollary is a surprising formula for the square of the sign-imbalance of any ordinary shape.
• Finally, in section 8 we suggest some future research directions.

Preliminaries

An (ordinary) n-shape λ = (λ_1, λ_2, …) is a graphical representation (a Ferrers diagram) of an integer partition of n = Σ_i λ_i.
We write λ ⊢ n or |λ| = n. The coordinates of a cell form the pair (r, c), where r and c are the row and column indices. For any subshape µ ⊆ λ, the skew shape λ/µ is λ with µ deleted. A skew n-shape λ/µ is a skew shape with n cells, and we write λ/µ ⊢ n or |λ/µ| = n. An example of a skew 6-shape is (6, 4, 2, 2, 1)/(4, 3, 2) (diagram omitted). A domino is a rectangle consisting of two cells. For an ordinary shape λ, let v(λ) denote the maximal number of disjoint vertical dominoes that fit in the shape λ. A (partial) tableau T on a skew n-shape λ/µ is a labelling of the cells of λ/µ with n distinct real numbers such that every number is greater than its neighbours above and to the left. We let ♯T = n denote the number of entries in T, and PT(λ/µ) denote the set of partial tableaux on λ/µ. A standard tableau on a skew n-shape is a tableau with the numbers [n] = {1, 2, …, n}. We let ST(λ/µ) denote the set of standard tableaux on the shape λ/µ (example tableau omitted). The (skew) shape of a tableau T is denoted by sh T. Note that it is not sufficient to look at the cells of T in order to determine its shape; we must think of the tableau as remembering its underlying skew shape. (For instance, (6, 4, 2, 2, 1)/(4, 3, 2) and (6, 4, 3, 2, 1)/(4, 3, 3) are distinct skew shapes that have the same set of cells.) The sign of a number sequence w_1 w_2 ⋯ w_k is (−1)^{♯{(i,j) : i < j, w_i > w_j}}, so it is +1 for an even number of inversions, −1 otherwise. The inverse sign is defined to be (−1)^{♯{(i,j) : i < j, w_i < w_j}}. The sign sgn T and the inverse sign invsgn T of a tableau T are the sign, respectively the inverse sign, of the sequence you get by reading the entries row by row, from left to right and from top to bottom, like a book. (The omitted example tableau has 4 inversions and 11 non-inversions, so sgn T = +1 and invsgn T = −1.) Definition 2.1.
The sign-imbalance I_{λ/µ} of a skew shape λ/µ is the sum of the signs of all standard tableaux on that shape:

I_{λ/µ} = Σ_{T ∈ ST(λ/µ)} sgn T.

An empty tableau has positive sign, and I_{λ/λ} = I_∅ = 1. A biword π is a sequence of vertical pairs of positive integers, with top line π̂ = i_1 i_2 ⋯ i_k and bottom line π̌ = j_1 j_2 ⋯ j_k, where i_1 ≤ i_2 ≤ ⋯ ≤ i_k. A partial n-permutation is a biword where in each line the elements are distinct and of size at most n. Let PS_n denote the set of partial n-permutations. To each π ∈ PS_n we associate an ordinary n-permutation π̄ ∈ S_n constructed as follows. First take the numbers among 1, 2, …, n that do not belong to π̂ and sort them in increasing order a_1 < a_2 < ⋯ < a_ℓ. Then sort the numbers among 1, 2, …, n that do not belong to π̌ in increasing order b_1 < b_2 < ⋯ < b_ℓ. Now insert the vertical pairs (a_r, b_r), 1 ≤ r ≤ ℓ, into π so that the top line remains increasingly ordered (and hence must be 12⋯n). The bottom line is then a permutation (in single-row notation) which we denote π̄. Example: If n = 5 and π has top line 124 and bottom line 423, then π̄ = 42135. In the following we let ⊎ denote disjoint union, interpreted liberally. For instance, we will write π̌ ⊎ T = [n], meaning that the set of numbers appearing in π̌ and the set of entries of the tableau T are disjoint and their union is [n].

The skew RS-correspondence

In [6] Bruce Sagan and Richard Stanley introduced several analogues of the Robinson-Schensted algorithm for skew Young tableaux. Their main result is the following theorem.

Theorem 3.1 (Sagan and Stanley, 1990). Let n be a fixed positive integer and α a fixed partition (not necessarily of n). Then there is a bijection between π ∈ PS_n with T, U ∈ PT(α/µ) such that π̌ ⊎ T = π̂ ⊎ U = [n], on the one hand, and P, Q ∈ ST(λ/α) such that λ/α ⊢ n, on the other.

Though we will assume detailed familiarity with it, we do not define the bijection here, but refer to [6] for the original presentation.
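The quantities defined in section 2 are easy to compute by brute force on small shapes, which is handy for checking small examples. The following sketch is my own illustrative code, not part of the paper:

```python
def sign(word):
    """(-1)**(number of inversions) of a sequence of distinct numbers."""
    inv = sum(1 for i in range(len(word)) for j in range(i + 1, len(word))
              if word[i] > word[j])
    return (-1) ** inv

def v(shape):
    """v(lambda): maximal number of disjoint vertical dominoes; each column
    of height h holds floor(h/2) of them."""
    heights = [sum(1 for part in shape if part > c) for c in range(max(shape))]
    return sum(h // 2 for h in heights)

def standard_tableaux(outer, inner=()):
    """Yield standard Young tableaux of skew shape outer/inner as dicts
    {(row, col): entry}, entries increasing along rows and down columns."""
    inner = tuple(inner) + (0,) * (len(outer) - len(inner))
    cells = [(r, c) for r, lam in enumerate(outer) for c in range(inner[r], lam)]

    def fill(tab, k):
        if k > len(cells):
            yield dict(tab)
            return
        for (r, c) in cells:
            if (r, c) in tab:
                continue
            left_ok = c == inner[r] or (r, c - 1) in tab
            up_ok = r == 0 or c < inner[r - 1] or (r - 1, c) in tab
            if left_ok and up_ok:        # cell is addable: place entry k here
                tab[(r, c)] = k
                yield from fill(tab, k + 1)
                del tab[(r, c)]

    yield from fill({}, 1)

def imbalance(outer, inner=()):
    """I_{outer/inner}: sum of signs of the row-by-row reading words."""
    return sum(sign([tab[cell] for cell in sorted(tab)])   # read like a book
               for tab in standard_tableaux(outer, inner))

print(imbalance((3,)), imbalance((2, 1)), imbalance((1, 1, 1)))  # 1 0 1
print(v((2, 2)), v((3, 1)))                                      # 2 1
```

For instance, summing imbalance over the partitions of 3 gives 2, matching the known total Σ_{λ⊢n} I_λ = 2^{⌊n/2⌋} for ordinary shapes.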
Our results

In [9] and [3] the author and Astrid Reifegerste independently discovered the formula for sign transfer under the RS-correspondence:

sgn π = (−1)^{v(λ)} sgn P sgn Q,

where λ is the shape of P and Q. Our main theorem is a generalization of this to Sagan and Stanley's skew RS-correspondence. Note that if α = ∅ the theorem reduces to Theorem 4.1.

Remark. If we specialise to the skew RS-correspondence (π, T) ↔ P of involutions (see Corollary 3.4 in [6]), Theorem 4.2 gives that

where sh P = λ/α and sh T = α/µ. This is also a simple consequence of Corollary 3.6 in [6], which is a generalization of a theorem by Schützenberger [7, page 127] (see also [11, exercise 7.28 a]).

A fundamental application of Theorem 4.1 appearing in both [9] and [3] is the following theorem, Theorem 4.3.

Example. Let α = (3, 1) and n = 3. There are 10 skew shapes λ/α ⊢ 3. Here we have evaluated (−1)^{v(λ)} I²_{λ/α} for each one of them. (It happens that all these skew shapes have sign-imbalance 0 or 1, but in larger examples we would find much more exotic integers, like −7 for instance.) Now we compute (−1)^{v(µ)} I²_{α/µ} for the two skew shapes α/µ ⊢ 2. Finally, there is only one skew shape α/µ ⊢ 3, with value +1. We check that

We give a natural generalization of this using Theorem 4.2. It may be called a "sign-imbalance analogue" to Corollary 2.2 in [6].

Theorem 4.4. Let α be a fixed partition and let n be a positive integer. Then

For convenience, let rsgn T := rsgn sh T for a skew tableau T. Observe that for an ordinary shape λ we have rsgn λ = (−1)^{v(λ)}.

For the sake of bookkeeping we will make two minor adjustments to the skew insertion algorithm that do not affect the resulting tableaux:

• Instead of starting with an empty Q-tableau, we start with the tableau U after multiplying all entries by ε. Here ε is a very small positive number.

• During an internal insertion a new cell with an integer b is added to the Q-tableau according to the usual rules. New additional rule: at the same time we remove the entry bε from the Q-tableau.
Consider the (adjusted) skew insertion algorithm starting with P-tableau P_0 = T and Q-tableau Q_0 = εU. After ℓ insertions (external or internal) we have obtained the tableaux P_ℓ and Q_ℓ. The following two lemmas state what happens when we make the next insertion.

Lemma 5.1. Let (P_{ℓ+1}, Q_{ℓ+1}) be the resulting tableaux after external insertion of the number a_1 into (P_ℓ, Q_ℓ). Then

where m is the number of entries in P_ℓ that are less than a_1.

Proof. We insert the number a_1, which pops a number a_2 at (1, c_1), which pops a number a_3 at (2, c_2), and so on. Finally the number a_r fills a new cell (r, c_r); see Figure 2. For 2 ≤ i ≤ r, the relocation of a_i multiplies the sign of the P-tableau by

The placing of a_1 in the first row multiplies the sign of the P-tableau by (−1)^{m−(c_1−1−γ_1)}, where m is the number of entries in P_ℓ that are less than a_1. We get

and rsgn Q_{ℓ+1} rsgn Q_ℓ = (−1)^{r−1}. Since sgn R invsgn R = (−1)^{♯R(♯R−1)/2} for any tableau R, we have

Combining the equations above proves the lemma.

Lemma 5.2. Let (P_{ℓ+1}, Q_{ℓ+1}) be the resulting tableaux after internal insertion of the entry a_1 at (r, c_0) into (P_ℓ, Q_ℓ). Then

Proof. During an internal insertion the entry a_1 at (r, c_0) pops a number a_2 at (r + 1, c_1), which pops a number a_3 at (r + 2, c_2), and so on. Finally the number a_k fills a new cell (r + k, c_k); see Figure 3. For 1 ≤ i ≤ k, the relocation of a_i multiplies the sign of the P-tableau by

What happens to the Q-tableau? According to our adjustments of the algorithm, the entry bε at (r, c_0) is removed and the entry b is added at the new cell at (r + k, c_k). Observe that bε is the smallest element in Q_ℓ; this is the very reason why we are making an internal insertion from its cell (r, c_0). Also note that b is the largest entry in Q_{ℓ+1}.
The transformation from Q_ℓ to Q_{ℓ+1} can be thought of as consisting of two steps: First we replace the entry bε by b, thereby changing the sign of the tableau by a factor (−1)^{♯Q_ℓ − 1}. Then we move the b to the new cell at (r + k, c_k), thereby changing the sign of the tableau by a factor

Now, after observing that

the lemma follows.

Now we are ready to prove our main theorem.

Proof of Theorem 4.2. From Lemmas 5.1 and 5.2 we deduce by induction that

where n = ♯P and the last sum Σ m is taken over all external insertions. Let t_1 < t_2 < ··· < t_g and u_1 < u_2 < ··· < u_g be the entries of T and U, and write π = (i_1 i_2 ··· i_h ; j_1 j_2 ··· j_h). Let π′ be the permutation you get (in single-row notation) by preceding π̌ with the elements of T in decreasing order, i.e., π′ = t_g t_{g−1} ··· t_1 j_1 j_2 ··· j_h. It is easy to see that the sum Σ m equals the number of non-inversions of π′, i.e., pairs i < j such that π′(i) < π′(j). This means that (−1)^{Σ m} = invsgn π′. What is the relationship between invsgn π′ and sgn π̄? Let us go from π′ to π̄ by a sequence of moves. Start with π′ = t_g t_{g−1} ··· t_1 j_1 j_2 ··· j_h.

6. The proof of Theorem 4.4

In Theorem 3.1 we have adopted the original notation from Sagan and Stanley [6]. However, for some applications (and among them the forthcoming proof of Theorem 4.4) it is inconvenient to work with partial tableaux. For that matter we now present a simple bijection that will allow us to work with standard tableaux only.

Lemma 6.1. Let n be a fixed positive integer and α and µ fixed partitions. Then there is a bijection (π, T, U) ↔ (π̃, Ĩ, T̃, Ũ) between

• triples (π, T, U) such that π ∈ PS_n, T, U ∈ PT(α/µ) and π̌ ⊎ T = π̂ ⊎ U = [n], and

• quadruples (π̃, Ĩ, T̃, Ũ) such that π̃ ∈ S_n, T̃, Ũ ∈ ST(α/µ) and Ĩ ⊆ [n] is the index set of an increasing subsequence of π̃ of length |α/µ|.

Proof.
Given a quadruple (π̃, Ĩ, T̃, Ũ), let the triple (π, T, U) be given by the following procedure: Write π̃ in biword notation and remove the vertical pairs corresponding to the increasing subsequence Ĩ. The resulting partial permutation is π. Order the elements in Ĩ increasingly: i_1 < i_2 < ··· < i_k. Now, for 1 ≤ j ≤ k, replace the entry j in Ũ by i_j and replace the entry j in T̃ by π̃(i_j). This results in U and T, respectively. It is easy to see that this is indeed a bijection with the claimed properties.

Let LHS and RHS denote the left-hand side and the right-hand side of the equation above. The left-hand side trivially equals

The right-hand side is trickier. Fix 1 ≤ i_1 < i_2 < ··· < i_k ≤ n and consider the sum S := Σ_{π ∈ S_n, π(i_1) < ··· < π(i_k)} sgn π.

• If k ≤ n − 2 there are at least two integers 1 ≤ a < b ≤ n not contained in the sequence i_1 < i_2 < ··· < i_k. The sign-reversing involution π ↦ π · (a, b) (here (a, b) is the permutation that switches a and b) shows that S = 0.

• Suppose k = n − 1 and let a be the only integer in [n] not contained in the sequence i_1 < i_2 < ··· < i_k. We are free to choose π(a) from [n], but as soon as π(a) is chosen, the rest of π must be the unique increasing sequence consisting of [n] \ π(a) if π is to contribute to S. The sign of π then becomes (−1)^{π(a)−a}, so

In the case where n is odd and k = n − 1, the double sum Σ_{1 ≤ i_1 < ··· < i_k ≤ n} Σ_{π ∈ S_n, π(i_1) < ··· < π(i_k)} sgn π = Σ_{a=1}^{n} (−1)^{a−1} = 1.

In summary we have shown

If n is even we finally obtain

Analogously, if n is odd we get

7. Specialisations of Theorem 4.4

Apart from the special case α = ∅, Theorem 4.4 offers a couple of other nice specialisations if we choose the parameters α and n properly. First we obtain a surprising formula for the square of the sign-imbalance of any ordinary shape.

Corollary 7.1. Let α be a fixed n-shape. Then

if n is even, and

Proof. First suppose n is even.
Theorem 4.4 yields

The right-hand side consists of only one term, namely (−1)^{v(∅)} I²_{α/∅} = I²_α. From Theorem 4.4 we also get

The second term of the right-hand side vanishes and the first term is I²_α as before. Now suppose n is odd. Then Theorem 4.4 yields

The right-hand side consists of only one term, namely (−1)^{v((1))} I²_{α/(1)}, which equals I²_α since in an ordinary tableau the 1 is always located at (1, 1).

Next we present another generalization of Theorem 4.3.

Corollary 7.2. Let α be a fixed n-shape. Then

for any integer m ≥ n + 2 if n is even, and

for any integer m ≥ n if n is odd.

Proof. If m is even, Theorem 4.4 yields

The right-hand side vanishes since m > |α|. If m is odd, Theorem 4.4 yields

If m ≥ n + 2 the right-hand side vanishes simply because m − 1 > |α|. Otherwise n is odd and the only remaining case is m = n. But then the right-hand side becomes I²_{α/(1)} − I²_α = 0.

8. Future research

For an ordinary shape λ, let h(λ) be the number of disjoint horizontal dominoes that fit in λ and let d(λ) be the number of disjoint 2 × 2-squares (fourlings) that fit in λ. In [9] the following theorem, conjectured by Stanley [12], was proved (the (a)-part was independently proved by T. Lam [2]):

The (a)-part is about signed sums of sign-imbalances without taking the square. From an RS-correspondence perspective it is unnatural not to take the square of the sign-imbalance, since the P- and Q-tableaux come in pairs. In fact it might be argued that non-squared sign-imbalances are unnatural in all cases, because their sign depends on the actual labelling of the poset, i.e., it is important that we read the tableau as a book. However, part (a) of the theorem is still true (and there are even stronger theorems; see [9]) and it can be proved by means of the RS-correspondence, as was done in [9]. This suggests that the skew RS-algorithm could be a useful tool for studying signed sums of non-squared sign-imbalances too.
As a tool for proving Theorem 8.1, the concept of chess tableaux was introduced in [9]. A chess tableau is a standard Young tableau where odd entries are located at an even Manhattan distance from the upper-left cell of the shape, while even entries are located at odd distances. This notion of course generalizes to skew tableaux (in fact it generalizes to many other posets), and since it proved so useful in the study of the sign-imbalance of ordinary shapes, we think it will shed some light on the skew shapes as well.

Another direction of research is to find analogues to Theorem 4.2 for other variants of the RS-algorithm. For instance, in [6, Theorem 5.1] Sagan and Stanley present a generalization of their skew RS-correspondence where the condition that sh U = sh T and sh P = sh Q is relaxed. From that they are able to infer identities like

where f_{λ/µ} = ♯ST(λ/µ). This correspondence may give interesting formulas for sums of products of sign-imbalances as well.
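Small cases of the sign-imbalance identities discussed in this paper can be checked by brute force, which is handy when exploring conjectures of this kind. A Python sketch for ordinary (non-skew) shapes, with function names of my own choosing:

```python
def standard_tableaux(shape):
    """All standard Young tableaux of an ordinary shape (a weakly decreasing
    tuple of row lengths), built by recursively deleting the corner cell
    that must hold the largest entry."""
    n = sum(shape)
    if n == 0:
        return [()]
    result = []
    for r, row_len in enumerate(shape):
        # The last cell of row r is a corner iff the next row is strictly shorter.
        if r + 1 == len(shape) or shape[r + 1] < row_len:
            smaller = list(shape)
            smaller[r] -= 1
            if smaller[r] == 0:
                smaller.pop()
            for t in standard_tableaux(tuple(smaller)):
                rows = [list(t[i]) if i < len(t) else [] for i in range(len(shape))]
                rows[r].append(n)
                result.append(tuple(tuple(row) for row in rows if row))
    return result

def sgn_word(w):
    """Sign of a sequence: parity of its inversion count."""
    inv = sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])
    return (-1) ** inv

def sign_imbalance(shape):
    """I_lambda: the sum of sgn T over all standard tableaux of the shape,
    where sgn T is the sign of the row-by-row reading word."""
    return sum(sgn_word([x for row in t for x in row]) for t in standard_tableaux(shape))
```

For example, `sign_imbalance((2, 1))` returns 0 and `sign_imbalance((3, 1))` returns 1; summing I_λ over all partitions of a small n reproduces the known value 2^⌊n/2⌋ (e.g. 1 + 0 + 1 = 2 for n = 3).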
2019-04-12T09:17:40.730Z
2005-07-16T00:00:00.000
{ "year": 2005, "sha1": "2c5c57f12c11026101d7f1526aba6c14a8a3455e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a5b6fdd87fa697c420e704224e076a60978f0acb", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
268272919
pes2o/s2orc
v3-fos-license
Study of radiomics based on dual-energy CT for nuclear grading and T-staging in renal clear cell carcinoma Introduction: Clear cell renal cell carcinoma (ccRCC) is the most lethal subtype of renal cell carcinoma with a high invasive potential. Radiomics has attracted much attention in predicting the preoperative T-staging and nuclear grade of ccRCC. Objective: The objective was to evaluate the efficacy of dual-energy computed tomography (DECT) radiomics in predicting ccRCC grade and T-stage while optimizing the models. Methods: 200 ccRCC patients underwent preoperative DECT scanning and were randomized into training and validation cohorts. Radiomics models based on 70 KeV, 100 KeV, 150 KeV, iodine-based material decomposition images (IMDI), virtual noncontrasted images (VNC), mixed energy images (MEI) and MEI + IMDI were established for grading and T-staging. Receiver operating characteristic analysis and decision curve analysis (DCA) were performed. The area under the curve (AUC) values were compared using Delong test. Results: For grading, the AUC values of these models ranged from 0.64 to 0.97 during training and from 0.54 to 0.72 during validation. In the validation cohort, the performance of MEI + IMDI model was optimal, with an AUC of 0.72, sensitivity of 0.71, and specificity of 0.70. The AUC value for the 70 KeV model was higher than those for the 100 KeV, 150 KeV, and MEI models. For T-staging, these models achieved AUC values of 0.83 to 1.00 in training and 0.59 to 0.82 in validation. The validation cohort demonstrated AUCs of 0.82 and 0.70, sensitivities of 0.71 and 0.71, and specificities of 0.80 and 0.60 for the MEI + IMDI and IMDI models, respectively. In terms of grading and T-staging, the MEI + IMDI model had the highest AUC in validation, with IMDI coming in second. There were statistically significant differences between the MEI + IMDI model and the 70 KeV, 100 KeV, 150 KeV, MEI, and VNC models in terms of grading (P < .05) and staging (P ≤ .001). 
DCA showed that both the MEI + IMDI and IMDI models outperformed the other models in predicting the grade and stage of ccRCC. Conclusions: DECT radiomics models were helpful in the grading and T-staging of ccRCC. The combined MEI + IMDI model achieved favorable results.

Introduction

Clear cell renal cell carcinoma (ccRCC) is a type of malignant tumor originating from the urinary system, accounting for about 70%-85% of renal cell carcinomas, [1] and it is the most lethal subtype with a high invasive potential. [2] The 5-year survival rate of patients with ccRCC is closely related to the pathological nuclear grade. [3] Patients with a lower pathological nuclear grade of ccRCC have a better prognosis and a lower risk of recurrence than those with a higher grade. [4,5] Treatment of RCC includes radical resection, partial resection, and tumor enucleation, as well as minimally invasive ablation and targeted therapy developed in recent years. Conservative surgery or minimally invasive ablation can be used for RCC with a low pathological grade and stage, and active monitoring or targeted therapy can also be performed in some cases. [6] Tumor T-staging is a comprehensive assessment of tumor progression and is central to the selection of treatment (including the surgical method), the formulation of the perioperative treatment plan, and the prognosis of patients. Biopsy and histopathology are most commonly used for renal cancer grading and staging before operation. However, disadvantages such as invasiveness, delay, the ex vivo nature of the examination, and dependence on the accuracy of the punctured tissue limit their application. Therefore, it is necessary to develop a noninvasive technique for accurately determining the preoperative pathological grade and T-stage of ccRCC. Radiomics can extract a large number of image features, combine quantitative image analysis with machine learning, and transform internal tumor features into rich quantitative features through different algorithms.
[7,8][11][12][13] As a noninvasive imaging technology, radiomics has attracted much attention for predicting the preoperative T-stage and nuclear grade of ccRCC. Compared with single-energy computed tomography, dual-energy computed tomography (DECT) can obtain mixed energy images (MEI) of different proportions, virtual mono-energy images (VMI), and iodine-based material decomposition images (IMDI) through a postprocessing workstation, and it significantly improves tissue resolution and material recognition. [14,15] Moreover, IMDI can reflect the vascularization of various tissues by measuring the concentration of iodine (contrast agent) [16] and is conducive to the detection of vascular-rich tumors. [16,17][20] However, the pathological grading and staging of ccRCC based on DECT radiomics are rarely reported. Moreover, there is no consensus on the multiple parameters of DECT-based radiomics, including the various VMI and IMDI, so further study is needed to identify the best radiomics model. Herein, we investigated the value of DECT-based radiomics in predicting the pathological nuclear grade and T-stage of ccRCC. The efficacies of radiomics models based on different mono-energy VMI, IMDI, and MEI were compared. The potential of DECT as a noninvasive method in clinical decision-making and precision medicine was explored.
Patients

This retrospective study was approved by the Institutional Ethics Committee of Jinan Central Hospital Affiliated to Shandong First Medical University, and the requirement for patient consent was waived. A total of 200 patients with postoperatively pathologically confirmed ccRCC in our hospital from January 2015 to January 2022 were included in the study. There were 137 males and 63 females. Their mean age was 57 ± 11.24 years (range, 33-82 years). The inclusion criteria were as follows: radical nephrectomy or nephron-sparing surgery was performed, and postoperative pathology confirmed ccRCC; complete clinical data could be obtained; and contrast-enhanced DECT of the kidney was performed within 1 week before surgery. The exclusion criteria were as follows: patients with poor image quality that affected the delineation and feature extraction of the region of interest; patients with cardiovascular or renal disease that seriously affected the degree of renal enhancement; patients with previous abdominal surgery; and patients with multiple lesions and poorly defined tumor boundaries.

Pathological staging and nuclear grading

All patients received radical nephrectomy or nephron-sparing surgery. Surgical specimens were stained with H&E and examined by 2 pathologists with more than 5 years of professional experience. According to the WHO/ISUP nuclear grade of renal cancer, 149 cases were defined as low-grade (grade 1-2) and 51 cases as high-grade (grade 3-4). According to the AJCC T-staging system, 152 cases had ccRCC at T1-T2 and 48 cases at T3-T4. The final classification and T-staging were decided by the 2 pathologists in consensus. General clinical data of all patients are shown in Table 1.
DECT imaging acquisition

All patients underwent contrast-enhanced DECT before surgery and signed informed consent before CT scanning. A Somatom Force CT scanner (Siemens Healthineers, Forchheim, Germany) was used for scanning. Nonionic contrast agent (Omnipaque, 300 mgI/mL; 1.2 mL/kg; 60-80 mL) was injected intravenously at an injection rate of 3.5 mL/s. In dual-energy mode, cortical-phase and parenchymal-phase enhanced scanning was performed with the automatic exposure system. The respective parameters were as follows: delay times of 30 seconds (cortical phase) and 80 seconds (parenchymal phase); tube voltages of 100 kVp and Sn150 kVp; and tube currents of 130~180 mAs and 80~90 mAs. The images were reconstructed at 1.0 mm slice thickness and 1.0 mm interval and then analyzed using the postprocessing workstation (syngo.via). Finally, the 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, and virtual noncontrasted (VNC) images of the 2 phases were obtained. All these data were then imported to the Radcloud platform (https://mics.huiyihuiying.com/).

Image segmentation and image preprocessing

All images were reviewed, and the 3D volumes of interest (VOIs) were delineated slice by slice manually by 2 junior radiologists with more than 5 years of working experience in this field, who were blinded to the clinical information of the patients but were aware that the lesions were ccRCC. All contours were then reviewed and revised by a senior radiologist with 20 years of experience. If the discrepancy was ≥5%, the tumor borders were determined by the senior radiologist with 20 years of experience.
[21] Resampling and filtering were used to reduce noise and increase feature stability. Voxels in each CT image volume were resampled to an isotropic voxel size of 1.0 × 1.0 × 1.0 mm³ to correct for different voxel spacings and section thicknesses between centers. At the same time, discretization of the resampled image data was also used to reduce noise and increase the stability of features. All features were normalized using z-score normalization.

Feature extraction and establishment of the radiomics models

The radiomics workflow is shown in Figure 1. A total of 1439 quantitative imaging features were extracted from the VOIs, encompassing 262 first-order statistics features delineating the distribution of voxel intensities, 28 3-dimensional features reflecting the shape and size of the region, and 1060 texture features quantifying heterogeneity in region characteristics, such as gray run length, gray co-occurrence texture matrix (GCTM), gray level size zone matrix, gray level dependence matrix, and neighboring gray tone difference matrix (https://mics.huiyihuiying.com/). The feature selection methods, including the variance threshold (variance threshold = 0.8), SelectKBest, and the least absolute shrinkage and selection operator (LASSO), were used to reduce redundant features. The optimal features obtained after screening were used for machine learning, and the classification models were then established. Our preexperiments showed that, among the KNN, DT, LR, and SVM models, the relative standard deviation of SVM was low and its area under the curve (AUC) was high. Based on the literature [22,23] and our preexperiments, we selected the commonly used support vector machine (SVM) model. The validation method was used to test the effectiveness of the models.
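The selection chain described above (variance threshold, SelectKBest, LASSO) feeding an SVM can be sketched with scikit-learn. This is a generic illustration on synthetic data with placeholder hyperparameters, not the Radcloud platform's actual pipeline:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel, SelectKBest, VarianceThreshold, f_classif
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 300))          # stand-in for the 1439 radiomic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic low/high-grade labels

pipeline = Pipeline([
    ("zscore", StandardScaler()),                    # z-score normalization
    ("variance", VarianceThreshold(threshold=0.8)),  # drop low-variance features
    ("kbest", SelectKBest(f_classif, k=50)),         # univariate screening
    ("lasso", SelectFromModel(Lasso(alpha=0.01),     # LASSO-driven selection;
                              threshold=-np.inf,     # keep the 20 features with
                              max_features=20)),     # largest |coefficient|
    ("svm", SVC(kernel="rbf")),                      # final SVM classifier
])
pipeline.fit(X, y)
preds = pipeline.predict(X)
```

The thresholds, k, alpha, and feature counts here are assumptions for illustration; the study's actual hyperparameters are not reported at this level of detail.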
Two groups of models were established according to WHO/ISUP nuclear grading and T-staging. A total of 14 radiomics models were established: 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, VNC, and MEI + IMDI models for the nuclear grading group and the T-staging group, respectively.

Qualification and statistical analysis

Feature extraction, dimensionality reduction, and modeling were carried out on the Radcloud platform. All statistical analyses were performed with the R Studio (version 4.0.2, 2020-06-22) software package. The receiver operating characteristic (ROC) curve was plotted, and the area under the ROC curve (AUC) as well as the sensitivity and specificity were calculated in both the training cohort and the validation cohort. The Delong test was performed to evaluate the differences between the ROC curves. P < .05 was considered statistically significant. Decision curve analysis (DCA) was used to assess which model obtained the greatest net benefit.

Results of nuclear grading group

3.1.1. Dimensionality reduction and selection of task-specific features. The feature selection methods included the variance threshold (variance threshold = 0.8), SelectKBest, and LASSO in the WHO/ISUP nuclear grading group. After dimensionality reduction, a total of 31 optimal features were selected, including 11 first-order, 7 GLDM, 2 GLRLM, 9 GLSZM, and 2 shape features. Of these, 7 features were selected from the cortical phase, while 24 were chosen from the medullary phase. Compared with the cortical phase, medullary-phase images provided more features to help nuclear classification. The final numbers of selected features for the 70 KeV, 100 KeV, and 150 KeV models as well as the MEI, IMDI, VNC, and MEI + IMDI models were 6, 1, 6, 3, 4, 2, and 5, respectively.
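The AUC, sensitivity, and specificity reported for these models follow the standard definitions (sensitivity = TP/(TP + FN), specificity = TN/(TN + FP)); a generic scikit-learn sketch on toy data, not the authors' code (the Delong test is not part of scikit-learn and is omitted here):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])                   # toy labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])  # toy model scores

auc = roc_auc_score(y_true, scores)                 # threshold-free AUC
y_pred = (scores >= 0.5).astype(int)                # one fixed operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                        # true-positive rate
specificity = tn / (tn + fp)                        # true-negative rate
```

On this toy data the AUC is 13/16 = 0.8125 and both sensitivity and specificity are 0.75; in the study these quantities are computed per model in the training and validation cohorts.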
Table 2 and Figure 2 display the selected radiomic features, along with their corresponding coefficients for each model. The performance metrics, including the AUC value, 95% CI, sensitivity, specificity, and support value of the models developed for nuclear grading in both the training and validation cohorts, are presented in Table 3 and Figure 3. The results of the ROC curve analysis and the Delong test demonstrated that the MEI + IMDI model outperformed the 70 KeV, 100 KeV, 150 KeV, MEI, and VNC models (P < .05) in the validation cohort, with statistically significant differences. There were no significant differences in AUC values between the IMDI model and the MEI + IMDI model (Table 4).

The DCA of the validation group for grading is illustrated in Figure 4. The findings indicate that the MEI + IMDI model enhances the ability to predict the nuclear grade of ccRCC at a higher risk threshold, and both the MEI + IMDI and IMDI models exhibit superior predictive performance compared with the other models in the validation group.

Dimensionality reduction and selection of task-specific features. The feature selection methods included the variance threshold (variance threshold = 0.8), SelectKBest, and LASSO in the T-staging group. After dimensionality reduction, a total of 56 optimal features were selected for the T-staging group, including 18 first-order, 4 GLDM, 6 GLRLM, 26 GLSZM, and 2 shape features. The number of selected features was 28 for both the arterial and venous phases. The final numbers of selected features for the 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, VNC, and MEI + IMDI models were 9, 5, 5, 12, 12, 4, and 9, respectively. The selected radiomic features and their coefficients for each model, together with the final numbers of selected features, are shown in Table 5 and Figure 5. ROC curves of the SVM classifiers are shown in Figure 6.
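Where two models' AUCs are compared, the study uses the Delong test. As a generic alternative illustration only (synthetic scores and settings of my own, not the authors' method), a paired bootstrap of the AUC difference between two models can be sketched as follows:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)            # toy labels
s1 = y + rng.normal(scale=1.0, size=200)    # scores from a stronger toy model
s2 = y + rng.normal(scale=2.0, size=200)    # scores from a weaker toy model

diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), size=len(y))  # paired resampling of cases
    if len(np.unique(y[idx])) < 2:              # AUC needs both classes present
        continue
    diffs.append(roc_auc_score(y[idx], s1[idx]) - roc_auc_score(y[idx], s2[idx]))

ci = np.percentile(diffs, [2.5, 97.5])          # 95% CI for the AUC difference
```

A confidence interval excluding zero would suggest a real AUC difference; the Delong test reaches a similar conclusion analytically from the covariance of the two empirical AUCs.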
For the models based on the different energy images, the AUC value of the MEI model was the lowest, but when MEI was combined with IMDI, the MEI + IMDI model achieved the best performance, with an AUC value of 0.96 (0.82) in the training (validation) cohort. The AUC value of the 150 KeV model was lower than those of the 70 KeV and 100 KeV models. As expected, the VNC model had the lowest AUC value among the 7 models. The AUC, 95% CI, sensitivity, specificity, and support values of the models for T-staging in the training and validation cohorts are shown in Table 6.

The Delong test compared the predictive performance of the 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, and VNC models with that of the MEI + IMDI model. The results showed that the differences between the models were statistically significant (P ≤ .001; Table 7). The DCA of the validation group for T-staging showed that the MEI + IMDI model could improve the ability to predict T-stage. IMDI can quantify the actual iodine concentration and indicate increased tumor angiogenesis.
[32] Homayounieh et al [33] compared the pathological results of liver lesions with dual-energy IMDI and found that the agreement of IMDI findings with postoperative pathological examination was higher than that of conventional CT. Wu et al [34] confirmed that radiomics analysis based on IMDI from DECT imaging could provide relatively high diagnostic value for predicting microsatellite instability status in patients with colorectal cancer. In this study, the combined MEI + IMDI model for nuclear grading and T-staging achieved better performance in the validation cohort, with AUCs of 0.72 and 0.82, sensitivities of 0.71 and 0.71, and specificities of 0.70 and 0.80, respectively. The AUCs of the IMDI models were the next highest after the combined MEI + IMDI model. Among the models based on the different energy images, the AUC of the MEI model was the lowest, but when MEI was combined with IMDI, the MEI + IMDI model achieved the best performance. The IMDI model is expected to play a bigger role in the diagnosis and treatment of ccRCC. This is similar to previous studies.

As expected, the VNC model had the lowest AUC among the models, because the iodine concentration reflects the vascularization of various tissues and provides important diagnostic information; without iodine information, VNC provides limited information.

Several studies [22,23] have shown that SVM combined with quantitative MDCT texture analysis has the highest predictive performance among machine-learning classifiers for distinguishing low-grade from high-grade ccRCC. Our results are similar to theirs, so our research mainly focused on SVM for machine learning. Generally, radiomics features can be divided into 3 types: first-order statistics features, shape- and size-based features, and textural features (calculated from gray level run-length and gray level co-occurrence texture matrices).
In our study, among all extracted radiomics features, texture features were the most numerous (1060 of 1439), and they showed higher discrimination ability. The reason for this good performance is that 3D texture features can capture the overall heterogeneity of the tumor by analyzing the gray-level distribution of voxels in CT images and its relationship with gray level. [35] Radiomics is mainly composed of 3D texture features, and their predictive performance is significantly superior to that of morphological and first-order features. [36,37] Mayerhoefer et al [38] showed that radiomics can be used to describe tumor heterogeneity. According to previous studies, the risk of malignancy in high-grade tumors can increase with tumor size, and tumor size is significantly correlated with metastasis. [39] Shape features describe the size and morphology of a region of interest, such as the maximum 2-dimensional diameter, volume, and area; these parameters reflect information about the overall tumor shape. Our findings are consistent with this conclusion.

Our study also has some limitations. Firstly, the sample size was relatively small, and cases were not evenly distributed across the different grades and stages. Secondly, the T-staging subgroups (T1-2 and T3-4) were coarse, given clinicians' emphasis on finer T-staging subgroups (such as T1a and T1b). Finally, this study was limited to a single center and lacks external validation. In the future, multicenter studies should be carried out to enhance the generalizability of the model.

Conclusion

Radiomics models based on DECT have the potential to aid in the nuclear grading and T-staging of ccRCC prior to surgery, thereby facilitating treatment strategies and assessment.

Table 7. The Delong test of the models' AUC for T-staging.
Figure 1. The radiomics analysis workflow. The workflow includes VOI segmentation, feature extraction, feature selection, model establishment (machine learning, radiomics model), and analysis (ROC curve drawing, predictive performance validation, and model testing).

Figure 2. Feature extraction and dimensionality reduction for nuclear grading. A-G: LASSO algorithm (regression coefficient diagrams) for feature extraction and dimensionality reduction in nuclear grading based on image features at 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, VNC, and MEI + IMDI.

Figure 3. ROC curves of the SVM classifiers in the nuclear grading group. A-G: ROC curves of the validation set for the 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, VNC, and MEI + IMDI models, respectively. GT: ROC curve of the training set for the MEI + IMDI model.

Figure 4. The decision curve analysis of the various prediction models for identifying high-grade versus low-grade ccRCC in the validation set.

Figure 6. ROC curves of the SVM classifiers in the T-staging group. A-G: ROC curves of the validation set for the 70 KeV, 100 KeV, 150 KeV, MEI, IMDI, VNC, and MEI + IMDI models, respectively. GT: ROC curve of the training set for the MEI + IMDI model.

This provides additional incremental value for the development and utilization of DECT.

Table 1. General clinical data of the 200 patients, n (%).

Table 2. Description of the selected radiomics features with their associated feature group and filter for nuclear grading. GLDM = Gray Level Dependence Matrix, GLRLM = Gray Level Run Length Matrix, GLSZM = Gray Level Size Zone Matrix.

Before VOI segmentation, all images were uniformly enlarged by 1.5 times, with a window width and window level of 250/50 HU. Eventually, the VOIs of the 200 patients were segmented on the Radcloud platform. The patients were randomized into a validation cohort and a training cohort at a ratio of about 3:7.
Table 3. The results of AUC, 95% CI, sensitivity, and specificity for nuclear grading.

Table 4. The Delong test of the models' AUC for nuclear grading.
The future is in childhood: Evaluation of the quality of sustainability programmes in the early years During the last decade, Environmental Education (EE) programmes have drawn attention from both experts and educators as a way to answer growing concern about the challenges and problems facing the environment. For this reason, many schools have started EE-related programmes and strategies with varying success. In this work, we carried out a case study of 30 Early Years Education (EYE) teachers working at preschools in Granada, Spain, in order to assess the implementation of Environmental Education in their daily practice. By means of an interview protocol we gathered information about 15 different categories grouped into topics central to EE, such as curricular innovation, participation of the educational community and sustainable management of the school. Results show that teachers are positive regarding the inclusion of EE in their daily practice and the participation of the community. On the other hand, we observed the difficulties faced by educators in effectively implementing EE programs, owing to the lack of economic support from the Administration and the need for objective consultancy to overcome the biases that may exist when teachers evaluate their own performance. Introduction What kind of planet will we leave to our descendants? Climate change, desertification, thawing of the Poles, deterioration of the ozone layer: these are some examples of the progressive worsening of the environmental reality we are facing; problems for which the anthropocentric attitude of human beings bears great responsibility.
Educational programs have the obligation of incorporating convincing answers to these challenges, so that they contribute to the transition towards a new model of society based on the development of an environmental ethics that promotes protection of the environment in a sustainable and durable way. (Corresponding author: abigail@ugr.es) The importance of environmental education Many of the students starting their schooling will in the future hold management positions in companies with environmental responsibilities, or political posts in which they will make high-level decisions, or they will simply be citizens committed to new forms of mobility, consumption, and use of water and natural resources. Environmental Education (EE) is a powerful tool to help us educate future generations, as it can be seen as a permanent process in which individuals and communities become aware of their environment and acquire the knowledge, values, skills, experience and determination which prepare them to act, individually or collectively, in the resolution of current and future environmental problems. There is a growing interest in the study of EE in Early Years Education (EYE) among researchers from different countries, including Australia [18, 9], the USA [10, 11, 4], the UK [2, 20], Mexico [3], and New Zealand [8].
A good example that examines this field of research can be found in [15], which includes a thorough review of the aforementioned papers and many others related to EE and Education for Sustainable Development (ESD) published from 1996 to 2003. One of the goals of the authors is to determine how EE and ESD are approached by researchers in EYE. In this regard, they identified two major approaches in the literature: (1) how teachers understand EE/ESD and (2) how they apply it in their daily practice. The understanding of EE/ESD entails teaching facts about the environment, manipulating the behavior of children and developing their critical thinking skills. Regarding the application of EE/ESD in the curricula, two different trends were found: the first analyzes the potential of implementing EE/ESD in EYE and the second assesses the efficiency with which it is actually implemented. The results of the studies vary considerably between countries, states, provinces and even cities, since the economic, geographical, social and educational contexts differ. In spite of this, an important part of the findings are applicable and of interest worldwide; hence our interest in following the steps taken by researchers around the world to study the current situation in the city of Granada, in Southern Spain. The work of our research group deals with all the aforementioned topics at all education levels. We have carried out studies oriented to analyzing how teachers understand EE/ESD, their degree of interest in the topic, how they apply it in their daily practice and the potential of its application in schools that have never applied it. Additionally, we have also carried out different consultancies in which the strengths and weaknesses of different programs were evaluated [5, 6, 7, 13, 14, 16, 17, 21, 22].
The work presented in this paper applies our experience to the study and evaluation of EE and ESD programs in EYE. More specifically, as a first step, we focus on analyzing teachers' insight into EE, their opinion about its importance, and the techniques and strategies they follow to include it in their daily practice. The remainder of the paper is structured as follows: section 2 defines the motivation and objectives of the study; section 3 explains the methodology in detail; section 4 presents and analyzes the qualitative and quantitative results; and finally, future work and conclusions are drawn in section 5. Motivation To our knowledge there are no previous studies of the application of EE in preschools in the city of Granada. Therefore, our main motivation was to fill this gap, since we believe it is of crucial importance that EE also takes place in the initial phase of formal education, in order to address pro-environmental values with future generations and promote commitment to sustainable development models. Aim and research questions The main objective of this research is to study how preschool teachers implement EE in their daily practice and the degree of sustainability of the policies and infrastructures of the education centers they work in. More specifically, we seek answers to the following questions:
- How is EE integrated in daily practice through curricular innovation?
- To what degree is the community involved with the EE practices of the school?
- Are the school and its resources managed in a sustainable way?
Sample We interviewed 30 teachers from 10 different preschools in Granada (Spain), selected using non-probabilistic sampling of an intentional character, since we consider they fairly represent the population under study. 86.7% of the interviewees were women, while 13.3% were men.
Data collection tools The technique employed to gather information is based on an interview protocol composed of 50 questions. The interview includes questions related to three main aspects: curricular innovation, participation of the educational community and sustainable management of the school. These three main aspects are, in turn, subdivided into 15 different categories. Additionally, in order to quantitatively analyze the information, we developed a value scale in which subjects rated, from 1 (non-satisfactory) to 5 (highly satisfactory), their overall satisfaction with the 15 categories of study. The methodological approach is therefore twofold, as it comprises both qualitative and quantitative analysis. Quantitative analysis is employed to ease the identification of the overall strengths and weaknesses of the application of EE in the curricula, while qualitative analysis allows us to deepen the evaluation. Data analysis tools For the quantitative analysis, we elaborated a classification of the addressed topics from an analytical triangulation strategy [23] in which we considered both descriptive (means and percentiles) and multivariate (hierarchical cluster analysis) levels. The means of the answers to the value scale are depicted in Table 1. Following the analytical triangulation strategy, we first classified the arithmetic means of the 15 categories using the 50th percentile (P50 = 4 in this case) as the threshold to divide the answers into two groups. Thus, categories with means lower than 4 were classified as moderately satisfactory, while those higher than 4 were labeled highly satisfactory. Figure 1 (left) shows a histogram and the value of satisfaction assigned to each category.
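The descriptive side of this classification amounts to a simple threshold rule, sketched below with hypothetical category means (the study's real values are those of Table 1):

```python
def classify(means, threshold=4.0):
    """Split category means at the P50 threshold (4 in the study):
    means above it are 'highly satisfactory', the rest 'moderately satisfactory'."""
    return {name: ("highly satisfactory" if m > threshold else "moderately satisfactory")
            for name, m in means.items()}

# Hypothetical example values on the 1-5 value scale.
example = classify({"social climate": 4.6, "water management": 3.1, "mobility": 2.9})
```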
After the descriptive classification, we implemented a multivariate content classification using hierarchical cluster analysis with Ward's method and the squared Euclidean distance. The result of this analysis is shown as a dendrogram in figure 1 (right). Regarding the analysis of qualitative data, the transcriptions of the interviews were codified using Nudist Vivo, allowing us to match the testimonies to the different categories. 4 Discussion of results Qualitative analysis When analyzing the testimonies related to curricular innovation, we observe that teachers are involved with EE: "The social climate is very rich and participative. That's the general tone in this school"; "We encourage collaborative and experimental work at many levels"; "EE is part of the educational project of the school"; "All teachers are aware of the importance of addressing EE topics in the ages we work with". They also carry out different activities: "we have a vegetable garden in which we grow lettuce and tomatoes. We have different natural elements we work with in the classroom such as insects, stones and shells". When asked about citizen participation, most subjects claimed they were very happy with the relationship of the school with the families of the students as well as with the surrounding environment: "families are highly interested in all the educational elements; we have different means to interchange information in a continuous way"; "parents have participated in different research projects, helping their children to prepare conference talks"; "we are in touch with a group of farmers who have ecological vegetable gardens next to the river. We have visited them and they have also visited the school. There is also a close relationship with the neighborhood association. We are also working with the city hall in a project called 'my neighborhood is my home'".
Regarding the sustainable management of the school and mobility, the testimonies are less positive: "there is a drinking fountain in the courtyard with a continuous water flow. We have requested the city hall to install a push button"; "the tanks of the toilets are only filled to half of their capacity. We encourage responsible consumption in the assembly"; "there are no solar panels; we are considering installing them next year"; "we have got the city hall to install dumpsters in the neighborhood by means of an awareness campaign"; "most parents bring their children by car"; "mobility causes a large problem, cars invade the sidewalks surrounding the school"; "the menu of the canteen is controlled by dieticians. The cooks attend yearly seminars in which they analyze the types of food and the Mediterranean diet. There is continuous innovation"; "In order to control the surplus of food we try to adjust the amount that is cooked. To do so, the attendance sheet is given to the cooks every day". Quantitative analysis From the results obtained, it can be highlighted that in all 15 categories the average score is above 2.93. For this reason, we can affirm in global terms that the perception of the subjects regarding the implementation of EE in their schools is moderately high. Secondly, we must emphasize that both analysis strategies established two groups of contents: those that teachers believe have been addressed in a moderately satisfactory way and those which were applied in a highly satisfactory way. Among the first set of aspects, it is important to notice that they were all related to the sustainable management of the school (water, waste, mobility, energy and responsible consumption), which is to a large extent not controlled by the teachers.
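The multivariate side of the triangulation can be sketched with SciPy's hierarchical clustering; Ward's method is conventionally formulated on squared Euclidean distances, which `linkage(method="ward")` handles internally. The scores below are hypothetical stand-ins for the 15 category means:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical mean scores (1-5 value scale) for six categories.
means = np.array([[4.5], [4.3], [3.1], [2.9], [3.0], [4.6]])

Z = linkage(means, method="ward")                 # Ward linkage
groups = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into two clusters
```

With well-separated scores, the two-cluster cut recovers exactly the "highly" versus "moderately" satisfactory split reported in the text.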
Conclusions and future work We have carried out a first approximation to the situation of EE in EYE in the city of Granada (Spain). By interviewing 30 teachers from 10 different schools we have studied how they apply EE in their daily practice, what kinds of activities they carry out and what values they promote, as well as the involvement of the community in the dynamics of the schools and the sustainable management of their resources. We have confirmed that there is a great disposition among teachers to start addressing EE topics at early ages. We have also observed that the valuation was more positive for those aspects related to curricular innovation and citizen participation. This is mainly because teachers feel they can directly influence the social climate, work as a team, include EE in a cross-curricular way, encourage the participation of families and other external agents, visit the environment and use the green areas they have access to. On the other hand, we have observed that there are other aspects central to EE in which teachers have little range of action, such as the sustainable management of the school and mobility, which need further work and support from the administration. The lack of funds hinders the adaptation of infrastructures, and of the way they are managed, to warrant coherence with the values transferred to children in the classrooms. Future work will be oriented to implementing a consultancy process using EE principles and standards to objectively determine the quality of the implementation of EE in EYE, and to identify possible subjective biases in the responses of the subjects. Additionally, the consultancy will include a series of strategies and recommendations to be followed by both the Administration and teachers to improve the implementation of EE. Fig. 1. Histogram of data obtained from the value scale (left). Dendrogram of the scale values obtained using Ward linkage (right). Table 1. Mean and standard deviation of the answers to each category of the value scale.
A comparative study of salivary and serum calcium and alkaline phosphatase in patients with osteoporosis Background: This study was undertaken to investigate changes in salivary and serum calcium and alkaline phosphatase in osteoporosis patients. The objective was to compare the changes in serum levels with those in saliva. Methods: The study was conducted in the department of biochemistry, National Institute of Medical Sciences and Hospital, Shobha Nagar, Jaipur, Rajasthan, India. One hundred adult osteoporosis patients, confirmed by DEXA, were recruited from the department of orthopedics of the same institution. Calcium and alkaline phosphatase were measured in the serum and saliva of each patient, and the data obtained were statistically analyzed. Results: Serum calcium had a strong positive correlation with salivary calcium (r = 0.726), while serum ALP and salivary ALP had a weak positive correlation (r = 0.453). Conclusions: Saliva can be used instead of serum to measure calcium levels, as it offers a non-invasive, quick and easy collection method. INTRODUCTION Osteoporosis is a progressive systemic skeletal disease associated with reduced bone mass/density and microarchitectural deterioration of bone tissue. 1 It is usually diagnosed through weakened bones and as a cause of pain and debilitating fractures. 2 In the developed world, depending on the method of diagnosis, 2% to 8% of males and 9% to 38% of females are affected. 3 Rates of disease in the developing world are unclear. 4 About 22 million women and 5.5 million men in the European Union had osteoporosis in 2015. 5 Osteoporosis primarily affects older people, particularly women, and is associated with 80% of fractures in people older than 60 years. Osteoporosis is called a "silent disease" because it progresses without symptoms until a fracture occurs.
The fractures caused by osteoporosis have a great impact on public health, as they are often associated with increased morbidity, mortality, reduced quality of life, long hospital stays and high economic cost. 6 Menopause is a physiological process occurring in the fifth decade of a woman's life due to a decrease in estrogen levels, involving the permanent cessation of menstruation. Menopause is accompanied by physiological and sensorial oral changes in some individuals. The prevalence of oral symptoms was found to be significantly greater in menopausal women (43%) than in premenopausal women (6%). 7 The risk of osteoporosis is greater in women after menopause, and certain oral changes such as xerostomia and burning mouth syndrome, which cause dry mouth, lead to decreased salivary flow. 8 Osteoporosis is asymptomatic, and the condition usually presents only after a bone fracture, typically associated with low-trauma 'fragility' fractures. Osteoporotic (fragility) fractures result from mechanical forces that would not ordinarily cause fracture. They are defined as fractures associated with low bone mineral density (BMD) and include spine, forearm, hip and shoulder fractures. 9 BMD of the hip is a stronger predictor of future fracture risk than spine BMD. The risk of fracture increases 1.5-3 times with each standard deviation of BMD (T score) below the reference population. 10 Normal BMD is indicated by a T score of 1 to -1, while a T score ≤ -2.5 is diagnostic of osteoporosis. T score values between -1 and -2.5 identify a condition known as osteopenia, which is associated with low to medium fracture risk but frequent progression to osteoporosis. In all cases of osteoporosis an imbalance exists between bone resorption and formation: the rate of bone formation is often normal, whereas resorption by osteoclasts is increased.
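The T-score thresholds quoted above amount to a simple three-way classification, sketched here purely for illustration:

```python
def bmd_category(t_score):
    """Classify a DEXA T score as in the text: T >= -1 normal,
    -2.5 < T < -1 osteopenia, T <= -2.5 osteoporosis."""
    if t_score >= -1:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"
```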
Dual-energy X-ray absorptiometry (DEXA) is presently considered the gold-standard imaging technique for the early detection of osteoporosis and fracture risk, but it is expensive and its results are difficult to interpret. 12 Saliva is important for the maintenance of oral tissue health and can be used to assess hormone levels, drugs and inflammatory factors. Saliva contains organic and inorganic components which may vary both qualitatively and quantitatively, so salivary examination may be a new tool for the diagnosis of osteoporosis. 13 In a longitudinal study, Sewon L et al suggested that salivary calcium concentration decreases in stimulated saliva when hormone replacement therapy is initiated in menopausal women. They concluded that this may indicate that individual salivary calcium concentration is modified and/or regulated by factors other than salivary flow. 14 Wardrop RW et al also reported that menopausal women with oral discomfort were relieved of symptoms after systemic hormone replacement therapy, supporting a correlation between oral discomfort and hormone levels in menopausal women. 15 Calcium is an important nutrient which is essential for bone health. Resorption of bone may lead to diffusion of calcium into the blood and further into the saliva. 16 Increased salivary calcium can therefore be used as a potential screening tool for assessing the risk of osteoporosis. According to Rabiei M et al, a salivary calcium level above 6.1 mg/dl can be used as a screening threshold to identify osteoporosis risk in postmenopausal women. 17 Alkaline phosphatase and calcium levels in osteoporosis Alkaline phosphatase comprises enzymes derived mainly from the liver and bones, and in lesser amounts from the intestines, placenta, kidneys and leucocytes.
Alkaline phosphatase plays an important role in bone metabolism and bone homeostasis, probably by accumulating calcium ions and matrix vesicles during the calcification process. Along with alkaline phosphatase, calcium also plays a major role in bone homeostasis. Calcium levels deplete with age, thereby reducing bone strength; thus, blood levels of alkaline phosphatase and calcium become inconsistent with age, especially in females. Saliva, considered an ultrafiltrate of serum, can serve as a non-invasive proxy for blood and can be used for the estimation of alkaline phosphatase and calcium levels. [18][19][20] Serum calcium and alkaline phosphatase (ALP) are bone turnover markers which reflect bone formation and mineralization. 21 Menopause and ageing are known to be associated with accelerated loss of cortical bone. Bone loss occurs when the balance between formation and resorption is upset and resorption is excessive, resulting in a negative remodeling balance. 22 The postmenopausal stage and ageing alter serum calcium and ALP levels. Bhattrai et al reported decreased serum calcium in postmenopausal women compared with premenopausal women, while ALP was found to be slightly higher among postmenopausal women; these two are key markers of bone mass reduction. 23 METHODS This study was approved by the ethical committee of the institution and was conducted over 7 months, from June 2018 to December 2018. The study population comprised multiethnic groups of patients from Jaipur district. The sample consisted of 100 patients of both sexes in the age range of 35 to 70 years, recruited from the outpatient department of orthopedics, NIMS medical college and hospital, Shobha Nagar, Jaipur, Rajasthan, India.
Clinical assessment Each patient in the osteoporotic group was asked about their history of systemic diseases and drug intake (hormonal therapy), their DEXA scan report was reviewed, and a detailed health questionnaire was completed in a pre-decided format. Female patients were asked about the duration and age of menopause. Samples of blood and saliva were collected from all patients for the measurement of calcium and alkaline phosphatase, and the serum levels were compared with those in saliva. Serum total calcium was measured colorimetrically using a ready-to-use kit (Arsenazo method). 24 Serum alkaline phosphatase was measured using a ready-to-use para-nitrophenol reagent kit (Human; kinetic method). Inclusion criteria • Adult patients of both genders clinically diagnosed with osteoporosis. Exclusion criteria • Patients with diabetes or other metabolic disorders, hypertension, thyroid disorders or oncological disorders, patients taking continuous medication other than that for the treatment of osteoporosis, and patients who refused to sign the informed consent form were excluded from the study. Statistical analysis Student's paired t-test was used to compare serum and saliva levels, with p<0.05 considered significant. Pearson's correlation coefficient was used to find the correlation between serum and salivary analytes. RESULTS The mean±SD serum calcium in osteoporosis patients was 9.7±0.4 mg/dl, while the salivary calcium level was 7.8±0.5 mg/dl. The level in saliva was lower, but there was a strong positive correlation (r=0.7) between the serum and salivary levels. The mean±SD alkaline phosphatase in serum and saliva was 250.24±57.04 IU/L and 31.5±6.9 IU/L, respectively. The level in saliva was much lower, and the correlation between serum and salivary alkaline phosphatase was positive but weak.
Hence, saliva may not be a substitute for serum for the measurement of alkaline phosphatase. Table 1 shows the correlation between serum calcium and salivary calcium in osteoporosis patients: Pearson's correlation analysis showed a significant, highly positive correlation (r = 0.726) between calcium in serum and saliva, with a p-value of <0.001. Figure 1 depicts the same correlation between serum and salivary calcium (r = 0.726, p<0.001). Figure 2 shows the correlation of alkaline phosphatase (IU/L) in serum and saliva in osteoporosis patients: Pearson's correlation analysis showed a significant but low positive correlation (r = 0.453) between alkaline phosphatase in serum and saliva, with a p-value of <0.01. DISCUSSION Calcium and phosphorus, which quantitatively account for the main mineral component of the human skeletal system, are present as inorganic components in saliva. Calcium, phosphorus, type I collagen-related peptides, osteocalcin and alkaline phosphatase are the common markers for osteoporosis assessed in blood. Biochemical markers of bone turnover are said to be related to current bone mass and help in predicting future bone loss. Ross PD et al and Taguchi A et al reported that the levels of serum total alkaline phosphatase and bone-specific alkaline phosphatase are increased in subjects with low bone mineral density. 25,26 The changes in serum calcium and alkaline phosphatase in osteoporosis are well documented, but whether similar changes occur in salivary calcium and alkaline phosphatase is not known.
This study was conducted to reveal whether changes in salivary calcium and alkaline phosphatase parallel those in serum, and whether saliva can be used as a substitute for serum for the measurement of calcium and alkaline phosphatase. Calcium is among the most important salivary electrolytes due to its effective role in bone structure and bone formation. It plays a significant role in bone regeneration, which is directly related to osteoporosis, a change that may occur as a consequence of reduced absorption of this electrolyte in either bone or saliva. Moghadam et al did not find any correlation between salivary calcium and low bone mineral density; this difference in results may be due to the different ages of the patients in each sample and to the study designs. 27 Rabiei M et al studied a group of similar patients and assessed salivary calcium, applying a cut-off point of 6.1 mg/dl. Salivary calcium concentration indicated that about 67.5% of the patients had osteoporosis, while 60% of women with salivary calcium levels below the cut-off point were free of osteoporosis. They concluded that salivary calcium can be used to diagnose bone mineral changes, thus obviating the need for bone densitometry. 28 The resorption of bone leads to the release of calcium into serum, which is filtered into urine and excreted. Reddy S et al reported increased levels of calcium and alkaline phosphatase in the saliva of osteoporotic subjects, saliva being an ultrafiltrate of plasma. Hence, salivary parameters could be used as predictors for these diseases, although further investigation is needed to support any definite conclusions. 29 Our results show that serum calcium had a strong positive correlation with salivary calcium (r = 0.726).
Since salivary calcium levels paralleled serum calcium levels, salivary calcium can be used as a diagnostic tool, as it offers a non-invasive, quick and easy method of sample collection. Pearson analysis of serum and salivary alkaline phosphatase revealed an r value of 0.453, a positive but weak correlation; therefore, salivary alkaline phosphatase cannot be used as a substitute for serum alkaline phosphatase.
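The correlation figures quoted in the results (r = 0.726 for calcium, r = 0.453 for ALP) come from Pearson's coefficient, which can be computed directly from paired measurements; the serum/saliva values below are hypothetical illustrations, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired serum/saliva measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired serum and salivary calcium values (mg/dl).
serum = [9.2, 9.5, 9.7, 10.0, 10.3]
saliva = [7.3, 7.6, 7.8, 8.1, 8.3]
```

An r near 1 indicates that salivary levels track serum levels closely, which is the basis of the substitution argument made for calcium.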
Convalescent Memory T Cell Immunity in Individuals with Mild or Asymptomatic SARS-CoV-2 Infection May Result from an Evolutionarily Adapted Immune Response to Coronavirus and the ‘Common Cold’ Recent studies have shown a significant level of T cell immunity to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in convalescent coronavirus disease 2019 (COVID-19) patients and in unexposed healthy individuals. SARS-CoV-2-reactive memory T cells also occur in unexposed healthy individuals, likely deriving from the endemic coronaviruses that cause the ‘common cold.’ The presence of adaptive SARS-CoV-2-reactive memory T cells in unexposed healthy individuals may be due to multiple cross-reactive viral protein targets encountered during previous endemic human coronavirus infections. In the authors’ opinion, determination of protein sequence homologies across seemingly disparate viral protein libraries may provide epitope-matching data that link SARS-CoV-2-reactive memory T cell signatures to prior administration of cross-reacting vaccines against common viral pathogens. Exposure to SARS-CoV-2 initiates diverse cellular immune responses, including the associated ‘cytokine storm’. It is therefore possible that the intact virus possesses a required degree of conformational matching, or stereoselectivity, to effectively target its receptor on multiple cell types. Conformational matching may thus be viewed as an evolving mechanism of viral infection and replication, acting through evolutionary modification of the angiotensin-converting enzyme 2 (ACE2) receptor required for SARS-CoV-2 binding and host cell entry. The authors propose that convalescent memory T cell immunity in individuals with mild or asymptomatic SARS-CoV-2 infection may result from an evolutionarily adapted immune response to coronavirus and the ‘common cold’.
Recent studies have provided empirical evidence of T cell immunity to SARS-CoV-2 infection in COVID-19 patients and unexposed healthy individuals [1][2][3][4]. Braun et al. recently identified reactive CD4+ T cells in 83% of COVID-19 patients and in 35% of unexposed healthy donors (unexposed status confirmed by negative reverse transcription polymerase chain reaction (RT-PCR) and serological screening), following in vitro stimulation of peripheral blood mononuclear cells with S-I and S-II peptide pools corresponding to predicted HLA class II epitopes within the NH2- and COOH-termini of the SARS-CoV-2 spike glycoprotein [1]. More than 80% of reactive CD4+ T cells from healthy donors were derived from stimulation trials utilizing S-II peptide pools corresponding to COOH-terminal epitopes [1]. These epitopes were independently determined to share a higher degree of sequence homology between the spike glycoproteins of the endemic 'common cold' coronaviruses, including 229E and OC43, and the spike glycoprotein of SARS-CoV-2 [1]. Also, S-II-reactive CD4+ T cells from COVID-19 patients and healthy donors were predominantly of a TH1 memory phenotype [1]. These data support the view that SARS-CoV-2-reactive memory T cells in unexposed healthy individuals originated from previous immune responses to the endemic coronaviruses that cause the 'common cold' in humans [1]. Grifoni and coworkers further studied SARS-CoV-2-reactive memory T cells in COVID-19 patients and unexposed healthy individuals, extending the analysis to CD4+ and CD8+ T cells responsive to predicted HLA class I epitopes [2]. Following stimulation with peptides corresponding to predicted HLA class I and class II epitopes, SARS-CoV-2-reactive CD8+ and CD4+ T cells were identified in between 70% and 100% of COVID-19 convalescent patients [2].
Also, reactive CD4+ T cells from COVID-19 patients were responsive to HLA class II epitope pools corresponding to the complete sequence of the SARS-CoV-2 spike glycoprotein and to pools contained within the sequences of the highly expressed SARS-CoV-2 M and N proteins [2]. Small but significant CD4+ T cell responses were observed following stimulation by HLA class II epitope pools corresponding to minor protein species expressed from SARS-CoV-2 open reading frames (ORFs), including nsp3, nsp4, ORF3a, and ORF8 [2]. There was also a diverse pattern of SARS-CoV-2-specific CD4+ T cell reactivity in COVID-19 patients that correlated with predicted concentrations of viral protein expression in infected cells [2]. The spectrum of SARS-CoV-2-specific CD8+ T cell reactivity from COVID-19 patients appeared to be more evenly distributed across targeted protein species, including spike protein, N and M proteins, nsp6, ORF8, and ORF3a. Importantly, SARS-CoV-2-specific CD8+ T cells were detected in at least 4 different healthy donors, but with a narrower distribution of targeted SARS-CoV-2 protein species compared with reactive CD4+ T cells [2]. Patterns of SARS-CoV-2-specific CD4+ and CD8+ T cells responsive to HLA class I and class II epitopes found in multiple species of viral proteins were observed in a significant number of unexposed healthy donors [2]. These findings suggest a more extensive expression of SARS-CoV-2-reactive T memory cells in unexposed healthy individuals than previously believed [1][2][3], possibly due to multiple cross-reactive viral protein targets encountered during previous exposure to circulating endemic ‘common cold’ coronaviruses [1][2][3]. From a different perspective, a recent bioinformatics analysis of a large medical records database correlated decreased SARS-CoV-2 infection rates with recent receipt of non-COVID-19 vaccinations [5]. 
Specifically, prior administration of vaccines against polio virus, Haemophilus influenzae type B (HIB), measles-mumps-rubella (MMR), varicella zoster, pneumococcal conjugate (PCV13), influenza, hepatitis A, and hepatitis B (HepA-HepB) within the preceding 1, 2, and 5 years was associated with reduced SARS-CoV-2 infection rates, after elimination of potential confounders [5]. However, whether SARS-CoV-2-reactive T memory cells originated from previous immune responses to cross-reactive epitopes in endemic ‘common cold’ coronaviruses remains to be determined, and supporting evidence is required before similar immunologic mechanisms can be invoked on a much broader scale. Accordingly, determination of protein sequence homologies across seemingly disparate viral protein libraries may provide invaluable epitope-matching data linking SARS-CoV-2-reactive T memory cell signatures to prior administration of cross-reacting vaccines directed against other common viral pathogens. Given that exposure to SARS-CoV-2 initiates profound and diverse cellular immune mechanisms, one may speculate that the intact viral particle possesses a required degree of conformational matching, or stereoselectivity, to effectively target the ACE2 receptor on multiple cell types. However, this complementary communication phenomenon is, by nature, restrictive and selective [6,7]. In living organisms, compounds and processes emerge from genetic information through temporally determined evolutionary processes. Genetic processes are driven by change and adaptation, and the evolution of interactive regulatory mechanisms results in molecular strategies that preserve and protect this information. It may therefore be expected that biochemicals, pharmacological agents, and organisms, including viruses, can interact both positively and negatively because of these commonalities. 
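The epitope-matching idea proposed in this passage can be made concrete with a toy computation: enumerate every 9-residue peptide window (9-mers being a typical HLA class I epitope length) in two viral protein sequences and report the windows they share verbatim. This is only an illustrative sketch; real cross-reactivity analyses rely on alignment scoring and HLA-binding prediction rather than exact identity, and the sequence fragments below are invented placeholders, not actual coronavirus spike sequences.

```python
def shared_epitopes(seq_a: str, seq_b: str, k: int = 9) -> set:
    """Return the length-k peptide windows that occur verbatim in both sequences.

    Crude stand-in for epitope matching: real analyses use alignment scoring
    and HLA-binding prediction rather than exact string identity.
    """
    def windows(s):
        return {s[i:i + k] for i in range(len(s) - k + 1)}
    return windows(seq_a) & windows(seq_b)

# Invented placeholder fragments, NOT real coronavirus spike sequences:
sars2_like = "MFVFLVLLPLVSSQCVNLTTRTQ"
hcov_like = "MIVMLVLLPLVSSQCVDLTTKTQ"
print(sorted(shared_epitopes(sars2_like, hcov_like)))
# four 9-mers, all drawn from the conserved LVLLPLVSSQCV stretch
```

Shared windows cluster in conserved stretches, which is exactly where cross-reactive memory T cell epitopes would be sought.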
In this regard, all that is needed to influence other systems and organisms is the complementary matching of critical biochemical components, making them fully or partially compatible. Host ‘target’ processes that promote both viral and bacterial infection and replication also tend to be conserved during evolution, probably because of requisite stereoselective aspects of these systems. There are so many conformational matching steps in a fully integrated organism or system that it would be impossible to invent an entirely new system; it is both more efficient and more economical to merely add tolerated favorable mutations that benefit survival. Furthermore, this conserved core information is relatively stationary in time, allowing for chance pathological ‘bullets’, including viruses and bacteria, that periodically achieve a conformational match that alters host systems and may exploit them for existential propagation [8]. In most instances, these external assaults result in host death because of the host’s overall informational mismatching. Conformational matching as an evolving mechanism of viral infection and replication has been shown by a recent study of the angiotensin-converting enzyme 2 (ACE2) receptor, which is required for SARS-CoV-2 binding and host cell entry [9]. That study undertook a systematic analysis of ACE2 conservation and the coevolution of an interactive protein network across 1671 eukaryotes [9], identifying potential therapeutic targets responsive to widely used drugs, such as nonsteroidal anti-inflammatory drugs (NSAIDs) and vasodilators [9]. Therefore, prevention of SARS-CoV-2 binding to the ACE2 receptor is predicted to exert profound effects on viral infectivity and disease progression in COVID-19. 
Conclusions The primary host immune strategies for survival, which include immune memory, may also be susceptible to disruption, as shown by the significance of host immune defense processes. Furthermore, this commonality may provide a strategy for therapeutic intervention, especially if pre-exposure to a prior infective agent containing homologous sequences in matched protein epitopes confers a significant degree of immunity. The evolutionarily adapted immune response to the coronaviruses that cause the ‘common cold’ may represent a vulnerability in the host defense system. Individuals with mild or asymptomatic SARS-CoV-2 infection may benefit from prior exposure to cross-reactive endemic coronaviruses containing homologous epitopes distributed across viral proteins. Increasing awareness of the role of this evolutionarily adapted immune response to coronavirus may be important, and may save time, during the current urgent need to understand and develop biochemical and immunological strategies to treat, control, and prevent infection during the SARS-CoV-2 pandemic.
Expression of novel ING variants is regulated by thyroid hormone in the Xenopus laevis tadpole. The candidate tumor suppressor gene, ING1, encodes several protein isoforms as a result of alternative splicing that may possess agonistic and antagonistic roles in the control of cell proliferation and apoptosis. Recently, a related gene, ING2, whose expression is increased in adenocarcinomas, was isolated in humans. Little is known about the cellular function and regulation of these ING family members, but the fact that ING proteins contain a plant homeodomain finger suggests that these proteins may modulate transcription factor-mediated pathways. To elucidate how ING may act in different tissues to modulate function, we used amphibian metamorphosis as a model system in which a single stimulus, thyroid hormone (TH), initiates tissue-specific proliferation, differentiation, and apoptosis. We have isolated the first Xenopus laevis ING2 and demonstrate that transcript levels increase in response to TH treatment. We provide evidence for the existence of splice variants that are differentially expressed in tissues with different TH-induced fates. Western blots using an antibody directed against the highly conserved C-terminal end of ING proteins reveal a tissue-specific pattern of ING isoform expression in adult Xenopus tissues. Analyses of premetamorphic tadpole tissues show a TH-induced accumulation of ING proteins in the tail, whereas levels in the leg are not affected. This TH-induced accumulation is also observed in serum-free tail organ cultures and is prevented by inhibitors of tail apoptosis. Therefore, this work presents the first link between ING expression and a hormonally regulated, nuclear transcription factor-mediated apoptotic response, opening the possibility that ING family members may be involved in transducing the signal initiated by TH that determines cell fate. 
Introduction The ING1 (inhibitor of growth 1) gene was first isolated by PCR-mediated subtractive hybridization for the enrichment of transcripts found in non-tumorigenic breast epithelial cells, followed by a novel in vivo positive selection procedure for growth inhibitors (1). ING1 is implicated in the control of several key cellular processes (for review, see (2)), including cellular proliferation (1,3,4), apoptosis (5-7), senescence (3), and drug resistance (8). ING1 transcript levels are depressed (1,9-13) and the ING1 gene is a target for loss of heterozygosity or rearrangement (1,3,13-16) in a variety of cancer cells, suggesting that ING1 functions as a tumor suppressor. At least four ING1 transcripts are ubiquitously expressed at varying levels in adult and fetal tissues; they are products of alternative splicing of a variable first exon and a common second exon (4,9,17,18). Several known protein products are encoded by these transcripts; however, no systematic analysis of protein expression in different normal tissues has yet been reported. Both ING2 and ING4 transcripts are ubiquitously found in fetal and adult human tissues (9). The gene structure of ING2 appears to be similar to ING1 with 2 exons (Nagashima et al, unpublished), but no splice variants have been described. ING2, like p33 ING1b, can regulate the expression of genes involved in apoptosis such as p21 and bax (20). ING4 transcript levels are decreased in breast and melanocyte cancer cell lines (9) and ING2 transcripts are elevated in adenocarcinomas compared with adjacent normal tissue (19), suggesting that each ING family member is independently regulated and has its own unique effects. ING proteins belong to a family of plant homeodomain (PHD) finger-containing proteins that includes transcription factors and proteins that regulate chromatin structure (22). 
Although the mechanisms of action for ING proteins have yet to be fully elucidated, there is evidence that ING proteins may affect the activity of p53 (6,20,23), histone acetyl transferases (HATs) (24,25), and histone deacetylases (HDACs) (25,26). The combination of splice variants, multiple potential protein products, and at least three related genes allows considerable scope for ING to modulate cellular effects as both agonist and antagonist. Indeed, recent reports suggest that p33 ING1b and p24 ING1c are functional antagonists with respect to modulation of p53 (4) and HDAC activity (26), and that p33 ING1a and p47 ING1 have opposite effects on HAT activity (25). In order to elucidate how ING modulates cellular outcome in different tissues, we turned to amphibian metamorphosis as a model system in which a single stimulus, thyroid hormone (TH), initiates tissue-specific proliferation, differentiation, and apoptosis. The metamorphosis of the tadpole to a frog is absolutely dependent upon a substantial increase of endogenous levels of 3,5,3'-triiodothyronine (T 3 ) from undetectable levels in the plasma (27,28). The other TH, thyroxine (T 4 ), also increases, but it is the predominant form transported to target tissues, where it is converted to the more active T 3 form. Virtually every tissue in the tadpole is a target of TH, and these changes can be precociously induced by exogenous TH administration in vivo and in culture (29-35). TH functions to selectively activate tissue-specific genetic programs by regulating gene transcription via specific nuclear receptors (TRs) (29,35-42). TRs have important roles as repressors and activators of gene transcription during Xenopus development (for review see (43)). In premetamorphic tadpoles, TRs function as repressors of TH-inducible genes in the absence of appreciable levels of TH, thereby preventing precocious metamorphosis. 
When endogenous TH levels rise, TRs act as activators of these genes, thereby initiating metamorphosis. Thus, the presence or absence of ligand plays a critical role in gene regulation. However, what still remains enigmatic is how TRs can promote the development of multiple cell fates such as proliferation, reprogramming, and apoptosis during metamorphosis. Several factors that modulate TR activity have been described, including HATs/HDACs and p53 (44-56), and it is postulated that tissue-specific factors may modulate the TH-induced outcome (for review see (43)). Herein, we describe the isolation, cloning, and initial characterization of the first frog ING2 gene (xING2 for Xenopus ING2) and provide evidence suggesting that xING2 is subject to alternative splicing. We demonstrate that transcript levels differentially increase in response to T 3 treatment in tissues with different metamorphic fates. Western blots using an antibody directed against the highly conserved C-terminal end of ING proteins reveal a complex pattern of expression in adult Xenopus tissues. While premetamorphic tadpole tissues show T 3 -induced accumulation of ING proteins in the tail, the levels in the leg are not affected. This T 3 -induced accumulation is also observed in serum-free tail organ cultures and is abrogated by inhibitors of tail apoptosis. Therefore, this work presents the first link between ING expression and a hormonally regulated nuclear transcription factor-mediated response. ING proteins appear to associate with chromatin (7,25). These membranes were placed on LB plates with 100 µg/mL ampicillin and grown overnight at 37°C. The bacteria were then lysed on the membrane by placement on filter paper with 0.5 M NaOH for 5 min, 1 M Tris-HCl pH 8.0 for 5 min, and 0.5 M Tris pH 8.0/1.25 M NaCl for 5 min. The DNA was UV cross-linked to the nitrocellulose as described previously and then washed with 2X SSC/2% SDS followed by 2X SSC. 
The membranes were hybridized with the human ING1 cDNA probe as described above, but for only 1 h prior to detection with CDP-Star reagent. Positive clones were used to inoculate 5 mL cultures of LB broth with 100 µg/mL ampicillin, which were grown overnight at 37°C with shaking. Plasmids were harvested with a Qiaprep Spin Miniprep kit (Qiagen) and subsequently digested with EcoRI. A 1% agarose gel was used to separate the inserts from plasmid vector, and the products were Southern blotted overnight as described previously. Positive clones were then sequenced. The DNA and derived amino acid (aa) sequences were aligned using Clustal W version 1.8 software (57). Northern Blot Analyses. Analyses of RNA transcripts were done according to the method described in (58). RT-PCR Analyses. To determine the relative expression levels of xING2 transcripts, RT-PCR analyses were performed. The primers spanning the putative exon 1/2 boundary (XB5/XB8) and within the putative exon 2 (XB9/XB10) are indicated in Figure 1 and yield amplicons of 635 and 253 bp, respectively. All reactions were determined to be in the linear range of amplification and normalized to the L8 ribosomal protein transcript, whose expression is not affected by T 3 treatment (59). For amplification of the control L8 ribosomal protein transcript, the sense and antisense primers were used. For TRα, the sense primer (5'CACTACCGCTGTATCACTTG3') and antisense primer (5'GGGTGATTATCTTGGTGAACT3') were used (60). For TRβ, the sense primer (5'CCAGTGCCAAGAATGTCG3') and antisense primer (5'GTAAACTGGCTGAAGGCT3') were used (60). All primers were used at 20 pmol in a typical 50 µl reaction containing 1.5 U Taq DNA polymerase (Amersham Pharmacia), 10 nmol dNTPs (Life Technologies), and 1.5 mM MgCl2. The PCR reaction was: 7 min at 94°C; 35 cycles of 60 s at 94°C, 60 s at 55°C (for TRα, TRβ, and L8) or 54°C (for xING2 primers), and 1 min at 72°C. A final 10 min extension at 72°C was done. 
The L8 reaction was the same except that only 30 cycles were used. The amplified products were separated on 2% agarose gels and visualized by ethidium bromide staining. Western blot analyses were performed with a rabbit polyclonal antibody raised against a human GST-ING1 C-terminal fusion protein (1), following methods described previously (58) with minor modifications. Briefly, blocking was performed overnight at 4°C in PBS pH 7.2 containing 5% skim milk, 2% fetal bovine serum (Life Technologies), and 0.1% v/v Tween-20. The polyclonal antibody was used at a 1:10,000 dilution in blocking buffer. The antibody incubation was carried out at room temperature for 1 h. Membranes were washed extensively in PBS with 0.1% v/v Tween-20 for 10 minutes and incubated with goat anti-rabbit IgG polyclonal antibody conjugated to horseradish peroxidase (Calbiochem, La Jolla, CA). Peroxidase activity was detected using an ECL kit according to the manufacturer's instructions (Amersham Pharmacia Biotech). Control animals had an equal volume of DMSO added to their water. The tadpoles were sacrificed at the indicated times after treatment for the isolation of RNA or protein as described above. Organ culture of tadpole tails. Tadpoles were anaesthetized in 0.1% MS222 (Syndel Laboratories, Vancouver, BC) and quickly immersed (5 s each) in a series of sterile water, 70% ethanol, and two more beakers of sterile water. The tails were severed under aseptic conditions with a sterile scalpel and placed into culture dishes containing culture medium. The culture medium consisted of a 55% dilution of 1X alpha MEM (ICN Pharmaceuticals, Costa Mesa, CA) pH 7.2 supplemented with 14.5 mM NaCl, 1.1 mM Na2HPO4, 1.1 mM NaH2PO4, 2 mM L-glutamine, 1 mM L-methionine, 25 mM HEPES, 10 µg/ml fungizone, and 50 µg/ml gentamycin sulfate. The cultures were incubated at 25°C under air and the medium was changed daily. 
The tails were allowed to recover overnight before the addition of 100 nM T 3 with or without 2 mM EGTA, pH 8.0 (Sigma-Aldrich), which has previously been shown to inhibit tail regression (62). The C-terminal PHD finger domain is completely conserved between ING1 and ING2 proteins, and the 90 aa region spanning the PHD finger also displays a high degree of conservation (Figure 2). Isolation of a novel Previous work on human tissues indicated that ING2 is ubiquitously expressed as two major transcripts of 1.3 and 1.5 kb, with the highest expression in the testis (19). Northern blot analyses of adult Xenopus tissues show a similar trend. xING2 is expressed in all tissues examined, with testis having the highest expression levels, followed by brain and skin in similar amounts, and then muscle and liver showing very low levels. A 1.3 kb transcript is found in all tissues, with an additional 1.0 kb band found only in testis (Figure 3A). Finding multiple bands in the Northern blot suggests that, at least in the testis, multiple splice variants or transcripts from genes highly related to ING2 are present. These results were obtained using the entire open reading frame of xING2. Since ING1 is subject to alternative splicing, and since the gene structure of ING2 is similar to ING1, we wanted to test whether different splice variants for xING2 exist as well. To test this, we used differential RT-PCR analyses with two primer sets. One primer set (XB5/XB8; Figure 1) specifically amplifies the transcript we have reported in Figure 1. The resultant amplicon is referred to as xING2(1/2) in Figure 3B. The other primer set (XB9/XB10) amplifies a region in the conserved 3' end of the ORF that should be common to all ING2 variants (assuming that splicing occurs in a manner similar to ING1) and is referred to as xING2(2) in Figure 3B. Neither primer set amplifies ING1 sequences (data not shown). 
If no splice variants are present, then one would expect the relative levels of amplicons generated using the two primer sets to be equal. The RT-PCR results are consistent with the existence of xING2 splice variants in brain, testis, and skin (Figures 3B and C). TRβ transcript levels showed increases, as was previously reported (33). In the leg, the xING2(2) induction pattern closely resembles that of T 3 -induced TRα expression in these tissues, where a biphasic response is observed at 6 h and 24 h with maximal levels at 6 h, rather than that observed for TRβ (single peak at 24 h; Figure 5C). In the tail, the xING2(2) induction pattern is similar to both TRα and TRβ, with an initial modest increase at 2 h reaching maximal levels at 48 h (Figure 5B). In the brain, xING2(2) amplicon levels increase 24 h after T 3 treatment and TRα levels decrease slightly. TRβ levels show a marked increase in this tissue (Figure 5D). xING2 expression gradually increases from low levels at NF stage 58 to maximal levels at NF stage 63, whereas TRα levels are maximal at NF stage 60 and decline by NF stage 63, and TRβ levels reach maximal levels at NF stage 60 and remain high (Figure 5E). The leg does not show any induction of overall xING2 expression, similar to the TRα expression pattern (Figure 5F). TRβ levels peak at NF stage 58 and decrease thereafter. These observed patterns of TR expression concur with those previously reported (33). These data also show that the relative levels of both TRα and xING2 are much lower in the leg compared to the tail during spontaneous metamorphosis. RT-PCR analyses using primers spanning the exon 1/2 splice site show that this presumed splice variant of xING2 increases to maximal levels at NF stage 62 in the leg, in contrast to the overall pattern of xING2 expression (compare xING2(1/2) to xING2(2); Figure 5F). 
At this stage, the relative amount is approximately twice that found at maximal levels in the tail at NF stage 62. In the tail, this presumed xING2 splice variant exhibits a slight delay in increased expression levels (maximal at NF stage 62; xING2(1/2) in Figure 5E) compared to overall xING2 expression, a result that is reminiscent of the T 3 -induction experiments (Figure 5A). Similar results are observed using cycloheximide/anisomycin and H7 inhibitors (data not shown). Specific ING protein levels increase upon T 3 -induced apoptosis of the tail in vivo. Given that the 1.3 kb transcript detected in the tail (Figure 5) is too short to produce the 90-130 or 60 kDa bands, we suspect that these bands are more likely to represent ING1 gene products or unidentified splice variants of xING2. Northern blot analyses using a heterologous ING1 probe show that a 3.9 kb transcript is identified that increases upon T 3 treatment in tail tissue, providing support for this idea (Figure 6D). Together, these data show that accumulation of ING proteins correlates with the ability of the tail to undergo TH-induced apoptosis and provide the first evidence that ING is subject to hormonal control. In this study, we have investigated the expression patterns of ING proteins in a developmental model system in which a single stimulus (TH) can induce both outcomes. We have isolated the first Xenopus laevis ING2 homolog, which has a high degree of identity with its human and murine counterparts, and we demonstrate for the first time that ING expression is hormone responsive. Moreover, xING2 was found to be an early response gene along with TRα and TRβ (38,63), placing it in a potentially important role for the control of cell fate during TH-dependent metamorphosis. The ING2 gene is conserved between frogs and humans, and we provide the first evidence of differential regulation of presumed splice variants of this gene. 
Given the high degree of conservation of gene structure between ING1 and ING2 (4,9,17,18; our work and Nagashima et al, unpublished), and given that several ING1 splice variants have already been identified (4,9,17,18), it is highly probable that alternative splicing contributes to the tissue-specific regulation of ING2. We also present the first systematic analysis of ING protein expression in adult and tadpole tissues using an antibody that is capable of recognizing both ING1 and ING2 proteins. We demonstrate that there is a great deal of similarity in expression pattern between brain and testis tissues and that there are distinct tissue-specific isoforms. We were unable to determine which protein bands correspond to ING1 versus ING2 proteins and are currently producing isoform-specific antibodies to address this question. ING1 plays an important role in apoptosis (4-7,20). We have shown that ING protein expression is elevated in response to TH-induced tail regression and that the timing of this event corresponds to the point of commitment for cells to complete a TH-induced program (38). This study has provided evidence that the 90-130 kDa ING proteins may be important in regulating hormone-induced apoptosis. The tail and brain, two tissues that undergo extensive apoptosis, show a TH-dependent increase in these proteins, whereas the leg, whose main response is proliferation and growth, does not. In addition, induction of these proteins is inhibited by a variety of agents that inhibit tail regression. It is clear that cellular context is an important determinant of the TH-regulated response (33), but the mechanism is still poorly understood. TRs bind predominantly as heterodimers with RXR to accessible stimulatory TREs in the absence of T 3 (66-70). This receptor complex recruits transcriptional co-repressors such as N-CoR, Sin3, and mRPD3 that associate and form a functionally active histone deacetylase (44-50). 
Histones are deacetylated, resulting in repression of transcription. In the presence of T 3 , the co-repressors are released and acetyltransferases (p300/CBP, P/CAF, TAFII250) are recruited (46,51,52). Histone acetylation then permits transcription. TR action is further modulated by interaction with many other proteins, including TR-associated proteins (TRAPs) and the tumour suppressor p53 (53-56,71-73). Since p53, histone acetylases, and deacetylases have been reported to interact with ING1 proteins (23-26), it is reasonable to speculate that ING protein isoforms may modulate TR activity through affecting HAT/HDAC activity to produce tissue-specific outcomes during tadpole metamorphosis. Sequences are shown beginning at aa 111. The exon 1/2 boundaries for human (Nagashima et al, unpublished) and Xenopus ING2 and murine and human ING1 (4,9) are indicated by a question mark and an asterisk, respectively. The alignment was done using Clustal W alignment software (57). The GenBank accession numbers for each of the sequences used are AB012853, NM_02353, AF181849, AF181850, AF078834, NM_011919, AF17775352 and AF149724. or within the putative exon 2 (xING2(2)). The L8 ribosomal protein transcript, known to remain constant between tissues (59), is shown below and was used to normalize the xING2 amplification products. C, Graph comparing the fold differences in normalized xING2(1/2) (hatched bars) and xING2(2) (solid bars) transcript levels relative to the liver. Proteins were transferred to nitrocellulose membrane and probed with antibody specific for the common region of ING. C, Western blot analyses of total proteins isolated from cultured premetamorphic tails. Tails were cultured in serum-free medium using the method described in (74). Tails were treated with 100 nM T 3 in the presence or absence of 2 mM EGTA, which inhibits tail regression (62). 
Similar results were obtained by inhibiting tail regression with cycloheximide/anisomycin (38) and the protein kinase C inhibitor, H7 (62) (data not shown). Total protein homogenates were isolated at the indicated times and Western blotted as above. The relative sizes of specific bands were determined by comparison with comigrating standard protein markers and are indicated in kilodaltons. D, Northern blot analyses of total RNA isolated from the tail of premetamorphic tadpoles immersed in T 3 for the indicated times, probed with a human 300 bp ING1 PCR fragment from exon 2 (upper panel). Only one band of 3.9 kb is detected. Relative RNA loading is indicated by the intensity of the 28S rRNA bands as visualized by ethidium bromide staining of the gel (lower panel).
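The relative quantification used for the RT-PCR data above (amplicon signal normalized to the co-amplified L8 ribosomal protein transcript, then expressed as a fold difference relative to a reference tissue such as liver) reduces to a ratio of ratios. A minimal sketch, with invented band intensities rather than the paper's data:

```python
def l8_normalized_fold(target: float, l8: float, ref_target: float, ref_l8: float) -> float:
    """Normalize a band intensity to the co-amplified L8 loading control,
    then express it as a fold difference relative to a reference tissue."""
    return (target / l8) / (ref_target / ref_l8)

# Invented densitometry values for illustration only:
print(l8_normalized_fold(target=30.0, l8=10.0, ref_target=5.0, ref_l8=10.0))  # 6.0
```

Dividing by the L8 signal corrects for loading differences between lanes, so the final number reflects relative transcript abundance rather than how much cDNA entered each reaction.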
Robotic versus open pancreatic surgery: a propensity score-matched cost-effectiveness analysis Background Robotic pancreatic surgery (RPS) is associated with high intraoperative costs compared to open pancreatic surgery (OPS). However, it remains unclear whether several advantages of RPS, such as reduced surgical trauma and a shorter postoperative recovery time, could lead to a reduction in total costs that outweighs the intraoperative costs. The study aimed to compare patients undergoing OPS and RPS with regard to cost-effectiveness in a propensity score-matched (PSM) analysis. Methods Patients undergoing OPS and RPS between 2017 and 2019 were included in this monocentric, retrospective analysis. The controlling department provided financial data (costs and revenues, net loss/profit). A propensity score-matched analysis was performed for OPS and RPS (matching criteria: age, American Society of Anesthesiologists (ASA) score, gender, body mass index (BMI), and type of pancreatic resection) with a caliper of 0.2. Results In total, 272 eligible OPS cases were identified, of which 252 met all inclusion criteria and were thus included in the further analysis. The RPS group contained 92 patients. The matched cohorts contained 41 patients in each group. Length of hospital stay (LOS) was significantly shorter in the RPS group (12 vs. 19 days, p = 0.003). Major postoperative morbidity (Dindo/Clavien ≥ 3a) and 90-day mortality did not differ significantly between OPS and RPS (p > 0.05). Intraoperative costs were significantly higher in the RPS group than in the OPS group (7334€ vs. 5115€, p < 0.001). This was, however, balanced by other financial categories. The overall cost-effectiveness tended to be better for RPS than for OPS (net profit: RPS 57€ vs. OPS −2894€, p = 0.328). Binary logistic regression analysis revealed that major postoperative complications, longer hospital stay, and ASA scores < 3 were linked to the risk of net loss (i.e., costs > revenue). 
Conclusions Surgical outcomes of RPS were similar to those of OPS. The higher intraoperative costs of RPS are outweighed by advantages in other categories of cost-effectiveness, such as a decreased length of hospital stay. Introduction Laparoscopic surgery has been the established gold standard for most procedures in the field of abdominal surgery for several decades [1]. This also applies to hepatobiliary and pancreatic surgery, which was long considered the domain of open surgery [2][3][4][5][6]. One of the main reasons for the success of minimally invasive hepatobiliary surgery is the reduction of surgical trauma, leading to a shorter hospital stay and lower rates of postoperative complications [7,8]. While laparoscopic surgery has become widely established for the treatment of liver tumors [9,10], pancreaticoduodenectomies are still performed by most centers using conventional open surgery [11]. This dogma is currently undergoing a change, as robotic pancreatic surgery (RPS) is becoming increasingly established and significantly increases the feasibility and precision of distal pancreatectomies as well as pancreaticoduodenectomies [12,13]. Recent reports show safe feasibility with comparable oncological outcomes (R0 rate) and low morbidity and mortality rates at high-volume centers [14,15]. The potential benefits of minimally invasive robotic-assisted pancreatic surgery, with faster patient recovery and potentially lower rates of postoperative complications such as wound dehiscence, pneumonia, and surgical site pain [16], are offset by the high costs of the procedure [17]. For many centers, these costs are the reason why these surgeries are not yet performed on a widespread basis. Nevertheless, it is important to determine whether a shortened postoperative recovery period and the associated cost savings offset the costs incurred by the use of the surgical robot. 
Our group was able to show this, for example, for laparoscopic hemihepatectomies compared to open hemihepatectomies [18]. Reports comparing the outcomes of open, laparoscopic, and robotic pancreatic resections exist. However, most of them only analyze the cost-effectiveness of open, laparoscopic, and robotic distal pancreatectomies [6,16,[19][20][21], whereas there is little evidence on the cost-effectiveness of robotic pancreaticoduodenectomies [22]. Furthermore, there is currently no evidence on the cost-effectiveness of RPS in Germany, where accounting is performed by applying the diagnosis-related groups (DRG) system. Since scheduling patients for either open or robotic surgery includes a relevant selection bias, a one-to-one comparison of both approaches with regard to cost-effectiveness is not possible. Therefore, the present study aims to compare open and robotic-assisted pancreatic surgery with respect to direct and indirect costs using a propensity score-matched analysis and to evaluate the cost-effectiveness of robotic pancreatic surgery. Patients and study design The present study is a retrospective single-center analysis. All patients who underwent open or robotic partial pancreaticoduodenectomy (pylorus-preserving, PPPD, or Whipple's procedure), distal pancreatectomy (DP), or total pancreatectomy (TP) at the Charité – Universitätsmedizin Berlin, Campus Charité-Mitte, and Campus Virchow-Klinikum in Berlin, Germany between 2017 and 2019 were included in the analysis. Of note, data from patients who underwent RPS were obtained and analyzed from a prospective database from the post-marketing CARE-Study (surgical assistance by robotic support; originally Chirurgische Assistenz durch Robotereinsatz, ethical approval code E/A4/084/17 (DRKS00017229)), which had been approved by the local ethics committee. The trial was funded by Intuitive Surgical, Inc. (Sunnyvale, California, United States). 
For further analysis, patients were divided into groups (1) OPS and (2) RPS. The inclusion criteria were RPS or OPS between January 2017 and December 2019 with full financial data and medical history available. The exclusion criteria were procedures other than PPPD/Whipple's procedure/DP/TP, such as draining procedures (e.g., Partington-Rochelle) or enucleations; conversion from RPS to OPS; laparoscopic pancreatic and hybrid (laparoscopic + open) surgery; multivisceral resection (i.e., resections of three or more organs); concomitant colorectal resections; and major hepatectomy. Patients who were operated on in 2019 and were still hospitalized in 2020 were also excluded. Of note, an oral presentation which included parts of the data from the current report with different inclusion criteria was held in 2021 at the Viszeralmedizin congress in Leipzig, Germany [23]. Figure 1 shows the patient selection process. Pre- and postoperative evaluation Hospital admission took place 1 day before surgery, applying the concept of enhanced recovery after surgery (ERAS) according to the latest ERAS guidelines [24] with some modifications: Nasogastric tubes were placed in all cases after pancreaticoduodenectomy (PD) where a pancreaticojejunostomy was performed. The tubes were left until the 5th postoperative day and were removed when the gastrointestinal passage X-ray showed no pathologies. The main ERAS elements including guidelines on preoperative biliary decompression, preoperative fasting, peridural anesthesia, postoperative nausea and vomiting (PONV) prophylaxis, early postoperative mobilization, and early postoperative nutrition were implemented as recommended. Perianastomotic/peripancreatic drains were placed routinely and were usually removed between the 3rd and 5th postoperative day provided lipase/bilirubin levels were not elevated. 
All patients scheduled for surgery underwent routine preoperative workup including physical examination and laboratory testing (including carbohydrate antigen (CA) 19-9 and carcinoembryonic antigen (CEA) if indicated). Preoperative imaging included either CT or MRI scans. Preoperative imaging, as well as intraoperative findings, determined the type of pancreatic resection. Surgical technique Pancreatic head resections (PPPD or Whipple's procedures) were preferably performed as PPPD with standard lymphadenectomy. Standard reconstruction was performed as either pancreaticogastrostomy or pancreaticojejunostomy; biliary reconstruction was performed with a handsewn, retrocolic end-to-side hepaticojejunostomy. DP was indicated in patients with lesions located in the body or tail of the pancreas. In cases of underlying/suspected malignancy, standard lymphadenectomy and splenectomy were performed as well. Patients with benign lesions underwent spleen-preserving DP according to Kimura et al. [29]. OPS was performed by specialized hepatobiliary and pancreatic (HBP) surgeons. RPS was performed by the same team of two experienced pancreatic surgeons using the DaVinci® Xi surgical system (Intuitive Surgical Inc., Sunnyvale, CA, USA). Patients undergoing RPS were carefully selected. Exclusion criteria for RPS were severe chronic obstructive lung disease with contraindication to pneumoperitoneum, suspected excessive intraabdominal adhesions (e.g., after multiple laparotomies or peritonitis), and suspected infiltration of major vessels requiring vascular resection. The oncological principles (lymphadenectomy/splenectomy in patients with underlying malignancies) were the same as in open surgery. The pancreas was dissected by electrocautery (PPPD/Whipple) or a stapling device (60-mm black cartridge, EndoGIA™, Medtronic, Minneapolis, MN, USA; reinforced by a bioabsorbable mesh: SEAMGUARD®, W.L. Gore, Flagstaff, AZ, USA). 
Reconstruction (pancreaticogastrostomy) was performed either via a small midline incision in the upper abdomen or completely minimally invasively. The operative setup, port placement, and description of the surgical technique have recently been published by our group [31]. Statistics IBM SPSS Statistics for Macintosh Version 26.0 (IBM Corp., Armonk, NY, USA) was used for all calculations. Continuous variables are displayed as median and range and were statistically compared using the non-parametric Mann-Whitney U test. Counts/proportions are reported for categorical variables and were statistically compared using the Pearson χ2 test. A binary logistic regression analysis was performed to identify independent risk factors for cost-ineffectiveness; findings are shown as odds ratio (OR) and 95% confidence interval (95% CI). Propensity score matching We performed a propensity score matching (PSM) analysis in order to balance possible confounders between OPS and RPS. We used R Studio Version 1.2.5033 (R Studio, Boston, MA, USA) to generate linear propensity score values (PSV) using the logistic regression method. The PSV were used to create matches with the nearest-neighbor matching method and a 1:1 ratio with replacement and a caliper of 0.2 of the standard deviation of the logit of the propensity score. Matching was started from the cases with the greatest propensity score. For PSM, the following covariates were included in the model: age, American Society of Anesthesiologists (ASA) score (ASA 1-4), gender (male/female), body mass index (BMI), and type of pancreatic resection (PPPD or Whipple/distal pancreatectomy/total pancreatectomy). These baseline variables were selected as covariates (a) because of significant differences between the unmatched OPS/RPS groups and (b) because these variables potentially have a significant impact on important clinical outcome parameters such as morbidity, mortality, and duration of surgery. 
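As a purely illustrative sketch of the matching procedure described above (the study itself used R Studio; the synthetic data, function names, and plain-NumPy logistic fit below are our own assumptions, not the study's code), the logit of the propensity score is estimated from the covariates and treated cases are matched 1:1 to their nearest control, with replacement, within a caliper of 0.2 standard deviations of the logit:

```python
import numpy as np

def propensity_logits(X, treated, lr=0.1, n_iter=4000):
    """Fit a plain gradient-ascent logistic regression of treatment
    assignment (here: RPS vs. OPS) on the covariates and return the
    linear predictor (logit of the propensity score) per patient."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(treated)
    return Xb @ w

def match_with_replacement(logits, treated, caliper_sd=0.2):
    """1:1 nearest-neighbour matching on the logit, with replacement,
    accepting a match only within caliper_sd * SD(logit); matching is
    started from the cases with the greatest propensity score."""
    cal = caliper_sd * logits.std()
    controls = np.where(treated == 0)[0]
    pairs = []
    for i in sorted(np.where(treated == 1)[0], key=lambda k: -logits[k]):
        j = controls[np.argmin(np.abs(logits[controls] - logits[i]))]
        if abs(logits[i] - logits[j]) <= cal:
            pairs.append((i, j))  # with replacement: j stays available
    return pairs

# Synthetic example: 3 covariates (stand-ins for age, BMI, ASA) with a
# biased treatment assignment, mimicking the RPS selection bias.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
treated = (rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
logits = propensity_logits(X, treated)
pairs = match_with_replacement(logits, treated)
```

The matched pairs can then be compared on outcome variables (length of stay, costs, net profit) with the non-parametric tests named in the statistics section.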
The surgical approach (RPS vs. OPS) was used as the dependent variable in the regression model. Patients' characteristics In total, 374 eligible patients were identified during the study period, of which 282 underwent OPS and 92 underwent RPS (Figure 1). Compared to the RPS group, patients in the OPS group tended to be older (p = 0.004) and had more severe comorbidities (ASA 3: 93% vs. 32%, p < 0.001, Table 1). Regarding the type of pancreatic surgery, there was a significant imbalance between the groups (p < 0.001, Table 1): DP was performed significantly more often in the RPS group (45% vs. 19%), whereas there were more TP in the OPS group (21% vs. 3%). After propensity score matching for age, BMI, gender, ASA score, and type of pancreatic resection, no significant differences were found in the respective variables. Both groups contained 41 patients after matching. Table 1 provides an overview of all patients' characteristics including concomitant procedures before and after propensity score matching. Perioperative details RPS procedures were shorter than OPS procedures (262 vs. 313 minutes, p < 0.001); this difference was no longer present after matching (p = 0.164, Table 2). ICU stay was comparable in both groups before and after matching. However, the total hospital stay was shorter in the RPS group both before and after matching (p < 0.001 and p = 0.003, respectively, Table 2). Major complications (Dindo/Clavien > grade II) were more frequent in the RPS group before matching (55% vs. 41%, p = 0.014). Pancreas-specific morbidity (postoperative pancreatic fistula (POPF) and post-pancreatectomy hemorrhage (PPH), both p < 0.05) was also significantly higher. These differences were no longer observed after propensity score matching, except for delayed gastric emptying (DGE, p = 0.048). Table 2 shows the perioperative details both before and after matching. Costs and proceeds after OPS and RPS Regarding the costs for OPS and RPS, there were significant differences in numerous categories. 
ICU costs were significantly lower in the RPS group (907€ vs. 2629€, p < 0.001, Table 3), whereas surgery costs (such as operating room time, staff costs, and materials) were significantly higher when RPS was performed (7092€ vs. 4881€, p < 0.001). Costs for anesthesiology, laboratory tests, therapeutic methods as well as patient admission were lower in the RPS group (all p < 0.05, Table 3). Total costs were comparable in both groups (OPS: 21,933€, RPS: 20,907€, p = 0.305). With regard to proceeds, there were significant differences in the categories ICU, surgery proceeds, endoscopy, radiology, laboratory tests, other diagnostic features, therapeutic methods, and patient admission (all p < 0.001, Table 3). This led to a significantly higher net profit in the OPS group (+151€ vs. −912€, p = 0.039, Table 3). After propensity score matching, costs were found to be higher for OPS in the categories surgical ward, radiology, and laboratory tests (all p < 0.05, Table 4). Surgery-associated costs were higher in the RPS group (7334€ vs. 5115€, p < 0.001, Table 4). Proceeds for cardiology, other diagnostic features, therapeutic methods, and patient admission were all below 1000€ but significantly higher in the RPS group (all p < 0.05, Table 4). After matching, median net profit tended to be higher in the RPS group; however, the difference fell short of statistical significance (Table 4, Figure 2). Figure 2 shows the total costs, total proceeds as well as net profit/loss for both groups. Risk factors for net loss after pancreatic surgery To identify independent risk factors for net loss after pancreatic surgery, a binary logistic regression analysis was performed (Table 5). Of 374 patients, costs exceeded revenues in 194 patients (52%), resulting in a net loss. Regression analysis revealed that major complications (Dindo/Clavien > grade II, p < 0.001), a longer hospital stay (p = 0.015), and an ASA score < 3 (p = 0.040) were independent risk factors for net loss (Table 5). 
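The risk-factor analysis above reports odds ratios with 95% confidence intervals. As a hedged, purely illustrative sketch (synthetic data and plain NumPy; the study itself used SPSS, and the covariate here is a made-up stand-in, not the study's data), such quantities can be obtained from a fitted logistic regression via a Wald interval based on the inverse Fisher information:

```python
import numpy as np

def logistic_or_ci(X, y, lr=0.1, n_iter=8000):
    """Fit a logistic regression by gradient ascent and report, per
    covariate, the odds ratio exp(beta) with a Wald 95% CI computed
    from the inverse of the Fisher information (illustrative only)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # intercept + covariates
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    fisher = Xb.T @ (Xb * (p * (1 - p))[:, None])  # information matrix
    se = np.sqrt(np.diag(np.linalg.inv(fisher)))   # standard errors
    or_ = np.exp(w)
    lo, hi = np.exp(w - 1.96 * se), np.exp(w + 1.96 * se)
    return or_[1:], lo[1:], hi[1:]                 # drop the intercept

# Synthetic illustration: does covariate 0 (a stand-in for, e.g.,
# "major complication") predict net loss (y = 1)?
rng = np.random.default_rng(1)
X = rng.normal(size=(374, 3))
y = (rng.random(374) < 1.0 / (1.0 + np.exp(-0.8 * X[:, 0]))).astype(float)
odds_ratios, ci_lo, ci_hi = logistic_or_ci(X, y)
```

An OR above 1 with a CI excluding 1 would flag the covariate as an independent risk factor for net loss, which is the form in which the results above are reported.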
Discussion Patients who are scheduled for robotic pancreatic surgery are highly selected according to various patient characteristics such as age, BMI, and comorbidities, which makes the comparison of the cost-effectiveness of this approach difficult. This was confirmed in the present study; patients undergoing RPS were significantly younger and had fewer comorbidities than patients scheduled for OPS. We therefore performed propensity score matching, after which there were no differences in patient characteristics. The comparison of costs and proceeds before matching showed clear advantages of OPS over RPS; these were no longer evident after matching. The higher intraoperative costs of RPS were compensated in particular by a reduced length of hospital stay. The perioperative data from our two cohorts presented here, including operative time, postoperative morbidity and mortality, are comparable to those from previous studies [22,[32][33][34]. Pancreas-specific morbidity (PPH, POPF, and DGE) was higher in the unmatched cohorts but tended to be similar in the matched cohorts. We thus conclude that the differences found in the unmatched cohorts are likely due to differences in the study populations that no longer existed after matching. Today, there are various studies examining the cost-effectiveness of RPS [6, 16, 19-22, 32, 34-39]. Nonetheless, direct comparability of these studies is limited, since some of them are from different countries with different health systems and currencies. Furthermore, most of them focus on DP procedures only [6,16,19,20,34,[36][37][38], of which some merely compared robotic and laparoscopic DP [21,34,36,37]. Most authors agree that robotic DP is advantageous with regard to the length of hospital stay as well as perioperative costs [19,20], which is in line with the results of the present study. 
We furthermore found significantly lower costs for postoperative imaging ("radiology") and laboratory tests. Baker et al. compared the perioperative outcomes and costs of open and robotic PD. They found no significant differences in severe morbidity and postoperative mortality between the groups. Intraoperative costs were higher for RPS, but total costs did not differ significantly between RPS and OPS [22]. Kowalsky and colleagues found significantly better cost-effectiveness in patients who underwent robotic PD when the ERAS pathway was implemented. ERAS also led to a significantly shorter hospital stay in patients who underwent RPS. This effect was not present in patients who underwent OPS [39]. We were not able to examine this effect since ERAS was the standard approach for all patients. Nonetheless, it is likely that the positive effect of the ERAS program is also one of the reasons for the good results of the RPS group in the present study. In our analysis, the largest and most significant difference between the costs for OPS and RPS was operative costs. This is in line with the findings from other studies [19,22]. An aspect that is unique when compared to previous studies assessing the cost-effectiveness of RPS and OPS is the fact that we were able to identify factors that were independently associated with cost-ineffectiveness (i.e., net loss). Besides major complications and length of hospital stay, we found that lower ASA scores (1 and 2) were associated with a significantly higher risk of a net loss. This can be explained by the fact that comorbidities are known to trigger higher DRG classes and increase reimbursements by insurance companies [40]. The present study has some limitations, such as its retrospective nature leading to potential bias. Furthermore, group sizes are not equal before matching, which is due to the fact that RPS is not an eligible approach for all patients. 
Nonetheless, this is the first propensity score-matched cohort study comparing costs and profits after OPS and RPS, respectively, in Germany and other countries where reimbursement by health insurers is based on the DRG system. In addition to patient-specific differences such as age and ASA score, which can be overcome by matching, there is another potential selection bias. This bias is due to tumor-specific differences such as locally advanced tumors, which are generally not eligible for RPS. Also, there were differences in operating surgeons between RPS and OPS that might potentially have impacted the outcomes. Another important issue is that, despite propensity score matching, some non-significant differences between the OPS and RPS groups could not be overcome. The slightly higher proportion of pancreatic head resections as well as slightly more malignant tumors in the OPS group may be a bias influencing morbidity and mortality rates [41,42]. One of the main strengths of the present study compared to previous studies is that we compared not only the total costs but also the revenues that were reimbursed by the health insurance companies. This allows us to truly compare the cost-effectiveness of both approaches in the German DRG system. Conclusions The present study shows that RPS does not only lead to comparable surgical outcomes when compared to OPS but also significantly reduces the median hospital stay. This, in turn, offsets the higher intraoperative costs of RPS. Declarations Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent Due to the retrospective nature of the study, informed consent could not be obtained from participants included in the study. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The material consumptive: domesticating the tuberculosis patient in Edwardian England The proliferation of general and specialist hospitals, lunatic asylums, and workhouse infirmaries in the nineteenth century challenged the popular perception of the home as a suitable site of health care. Amidst the emergence of yet another type of institution, the tuberculosis sanatorium, tuberculosis control in the Edwardian period was re-sited and re-scaled to accommodate what might be termed a ‘preventive therapy’ of domestic space. Three interlinked perspectives demonstrate why and how this happened. First, I explore the role of the national and local state in legitimating domestic space as a scale and a site for the regulation of tuberculosis patients and prevention of the disease. Second, I investigate how tuberculosis self-help manuals promoted a technology of the self that was founded largely on the principles of sanatorium therapy but was necessarily reconfigured to reflect the social relations of domestic space. Third, I assess the marketing of consumer goods to the domiciled tuberculosis sufferer through the pages of the British Journal of Tuberculosis. It is suggested that a common tubercular ‘language’ of material consumption was fashioned in order to normalise the accumulation of possessions for use in the home. These arguments are situated in relation to recent historical research on material culture and identity at the turn of the twentieth century, which has stressed the cultivation of individuality and that the right sort of possessions appropriately arranged in domestic space signified well-regulated morality. The nineteenth century witnessed an unprecedented transformation in the locus of care for the sick in Britain. General and specialist hospitals, insane asylums and workhouse infirmaries warehoused ever greater numbers of ill people. 
The perception that the home was the natural space in which to tend to the unwell was destabilised by this shift towards the institutionalisation of health care. Another type of institution was the tuberculosis sanatorium. Sanatoria emerged in Europe and the United States towards the end of the nineteenth century as part of an approach to controlling the disease that also included the dissemination of behavioural advice through health education and the regulation of meat and dairy products. These latter interventions were amongst the first public health policies to address the problem of tuberculosis directly and were related to the idea of tuberculosis as a 'social disease' that dominated policies in Britain and its colonies. 1 Whilst we know a great deal about the cultural and social significance of sanatoria in the Edwardian period, it nevertheless remained the case that the vast majority of tuberculosis patients were not institutionalised. The dawning realisation that it was unrealistic to hospitalise the mass of tuberculosis sufferers prompted a reformulation of regulation that focused efforts on the home as a viable site of intervention. The domestication of tuberculosis was achieved by moulding elements of existing public health policy with components of the sanatorium regimen into a kind of domesticated 'preventive therapy' for tuberculosis. 2 This meant deployment of tuberculosis surveillance and disinfection, but without the threat of mandatory hospitalisation. The sanatorium formula of rest, exercise, and diet was adapted to the domestic environment and promoted a 'technology of the self' through the social relations of domestic space. 3 Equally significant was the fashioning of a tubercular 'language' of material consumption that promoted the accumulation of possessions for use in the home. 
The British Journal of Tuberculosis (BJTB) in particular endorsed and promoted a vast array of everyday objects and appliances (baths, reclining chairs, bed rests, reading stands and many, many more) that bridged the therapeutic divide between institutional and domestic space. 4 By encouraging a culture of possession, tubercular patients were drawn into the realm of mass consumerism that normalised their identity. As a result of these multiple strategies, tuberculosis control was re-scaled and re-sited from the sanatorium to the home. The national and local state came to share the regulatory site of the home with alternate, but complementary, bureaucracies of power. Some experts outside formal government were armed with medical knowledge and hygienic ideas; others deployed marketing strategies, advertising skills and the language of selling. 5 The specifics of this sort of re-scaling and re-siting have largely been overlooked by historians, though the form and idea of 'home' and 'domesticity' have been much debated, particularly for the Victorian period. It is important, then, to resist naturalising scale as a preexisting component of spatiality that historical actors operate within or pass between and through. For the purposes of this paper, sensitivity to the production of scale (for example, national, local, urban, rural, domestic) and site (such as sanatorium, home) helps disclose the underlying scope of moral regulation inherent in the tactics of tuberculosis control at the end of the nineteenth century and the beginning of the twentieth. For example, the cultivation of individuality and the right sort of possessions appropriately arranged in domestic space signified a well-regulated morality. 6 In the case of tuberculosis, a multitude of consumer goods were marketed to the domiciled sufferer as the material expression of a morality that conditioned the behaviour of patients. 
They nurtured domesticity, homely pastimes and healthy pursuits that were as much an antidote to morally corrupt activities as they were crucial to therapeutic success. Moral regulation here concerns how 'morality' (that is, normative judgements about what is wrong or bad conduct across a broad range of behaviours 7) was channelled through disease and health, at a time when wider notions of subjectivity were undergoing change. The paper addresses these issues in four main sections. The first two parts outline the ways in which public health administrations re-scaled and re-sited tuberculosis strategies as 'domestic' and how these strategies can be understood in terms of subjectivity. In this respect, the discourses and practices around what was one of the most pressing health issues of the early Edwardian period are used to build on the recent work in historical geography by Sallie Marston, Alison Blunt and others on the political context of everyday and mundane activities. It is stressed that the production of 'domestic' as a geographical scale and the management of 'home' as a site of risk and opportunity reveals much about the structure and exercise of power. 8 The third and fourth sections examine the shifting terrain of subjectivity in the early twentieth century (from 'character' to 'personality') through tuberculosis self-help manuals that were written for domiciled patients, and the materialisation of domestic preventive therapy via analysis of a section in the BJTB which promoted products for tuberculosis patients. One of the key points to emerge is the extent to which the market for consumer goods operated as a crucial arena for moral regulation through health. Subjectivity, site and scale in British public health The Edwardian period witnessed a shift in subjectivity from one rooted in 'character' to one based on 'personality'. 
The former pointed to conformity with a set of public virtues that 'comprised a citizen's moral constitution', while the latter was based around the formation of the unique self, associated with the tantalising quest for individuality. 9 Although this sort of narrative about subjectivity is undoubtedly over-generalised, its broad contours can be meshed with an account of the politics of public health in the nineteenth century in order to understand the process of how and why the 'domestic' came to be reconstituted as a necessary scale of tuberculosis therapy. 10 Scalar issues were significant in public health in the early Victorian period, when much activity centred on ambitious infrastructural projects that sought to remediate the detrimental environmental aspects of urban industrial growth. 11 Victorians struggled with the conceptual and geometric scale of these sanitary schemes. 12 The political tussle over sanitary measures was essentially a contest made through hierarchical scale: the national centre and the local province. 13 (Footnote 2: today the term 'preventive therapy' is used in multiple medical contexts, though it is most commonly associated with tuberculosis.) Yet Victorian sanitary reform was replete with tacit moral assumptions about domestic space, issues, and relations. 14 Connections between the intimate space of the home and the public space of the environment were several and layered. 15 Sanitarians argued that the degradation of urban places fostered immoral habits and vice. An environmentally-based public health offered the chance to create the appropriate conditions under which individuals (or more precisely, voting men) might achieve domestic propriety. 16 National debate about the electoral franchise between the 1830s and the 1860s constantly circled this issue and qualification for the vote in the 1867 Reform Act solidified the ideal of the domestic man with 'a house, a wife, children, furniture and the habit of obeying the law'. 
17 Historians Patrick Joyce and Chris Otter have argued that drains, sewers and water mains were material expressions of liberalism that mediated the freedom of the governed to act rationally in public and private space. 18 From the 1870s, however, management of the physical environment in order to help create favourable domestic circumstances was complemented by a set of public health interventions that explicitly interfered with the social relations of individuals inside the home itself. 19 The notification of infectious diseases (under which private family doctors were paid by the local health authority to report infectious cases to its medical officer), the isolation of the infected, contact tracing, and the destruction or neutralisation of biological threats with disinfection, comprised a set of surveillance practices that disrupted the intimate channels of disease transmission between humans, even if those channels were barely understood. 20 Such techniques (which excluded tuberculosis and focused mainly on childhood infections such as scarlet fever and diphtheria) were adopted incrementally in the late nineteenth century, though they were most fully realised in towns and cities that had the political drive and financial wherewithal to implement them. These interventions preceded the emergence and acceptance of germ theory, though they were retrospectively endorsed and refined by the more precise knowledge of disease transmission furnished by bacteriological research. 21 Infectious disease surveillance characterised a more individualised approach to public health at the turn of the twentieth century that mirrored and was constitutive of the broader shift to subjective individuality. Establishing and preserving a biologically risk-free home was a barometer of hygienic citizenship. But the administrative structures, policies and techniques of infectious disease surveillance in Britain were poly-scalar and poly-sited. 
Infectious patients were medically managed in multiple sites, such as isolation hospitals or separate wards in general hospitals and poor law infirmaries. A lack of available beds restricted hospital admissions in many places, but the Sanitary Act of 1866 declared that patients would be hospitalised if their home did not have 'proper lodging or accommodation, or lodged in a room occupied by more than one family'. The definition of 'proper' was left to the discretion of local health authorities and interpretation of the law implicitly preserved the privacy of middle-class households: poor families were more likely to lodge together and rarely had enough space at home to demarcate a sickroom. Meanwhile, national government trod a cautious scalar path: public health policy debates were driven by the conflicting demands of international/national, local/national, urban/rural, and individual/community constituencies. As a consequence, much national public health legislation was permissive rather than compulsory. For example, while some districts adopted infectious disease notification through local acts of parliament, it took more than two decades for central government to impose it on all local authorities. Discourse on infectious disease surveillance was strongly influenced by a scalar argument: it was pointless to have city-wide policies to regulate urban residents yet not subject the inhabitants of the surrounding rural hinterlands to the same sorts of restrictions. 22 The solution to this uneven and biologically dangerous geography was to mandate infectious disease notification across the whole nation, which finally happened in 1899. The contested status of tuberculosis as a communicable disease provided an interesting dilemma for advocates of infectious disease surveillance. The position of tuberculosis as the biggest (though declining) killer provoked multiple explanations as to its proximate causes. 
As Michael Worboys has shown, between 1880 and 1930, sanitary conditions (including housing), person-to-person contact, behaviour and lifestyle, inherited susceptibility, and herd immunity were all put forward as possible reasons for why tuberculosis was declining. 23 To varying degrees, components of the first three influenced the way in which domestic space was articulated as a site and scale of intervention to regulate patients and prevent the disease.

Disinfection, tuberculosis, and mobilisation of the domestic

The characterisation of a risk-abundant domestic space was fairly commonplace by the 1860s. 24 The mid-1870s and early-1880s were replete with warnings about the evils of dust. Particular ire was reserved for the dust-retaining properties of carpets, curtains and ornate hangings. 25 In his address on 'Domestic Health' delivered to the annual meeting of the Sanitary Institute in Brighton, the well-known public health activist Alfred Carpenter said in 1881 that, 'there is scarcely a house in the kingdom in which excreta are not to some extent retained'. 'Excreta' here means not just drain flush, but all forms of bodily seepage that never made it beyond the confines of the home: The most civilised and luxurious home is, in some cases, carefully prepared for the cultivation of disease-germs or factors, if they come into our midst: carpets, curtains, and comforts of all kinds retain the débris from our skins and our pulmonary membranes; the excreta from our sweat-glands are allowed to settle upon our uncleaned windows, out-of-the-way cornices, useless ledges, and so-called architectural or upholstering ornaments. 26 By shifting the geography of risk ineluctably inwards, public health forced itself into the space of the mundane, the everyday and the domestic. 27 There was, of course, a massive popular literature on design and decoration of the domestic interior and this was directed predominantly at women as the moral guardians of the home.
28 After the 1866 Sanitary Act, householders' own hygienic efforts were supplemented by municipal disinfecting activities that entered domestic space. From the late 1870s in urban Britain, bacteriological experiments and technological know-how were harnessed to create highly mobile, city-wide systems of disinfection that deployed pressurised steam and a cocktail of chemicals to rid people's homes, clothes and belongings of dangerous microbes. Though municipal disinfection of homes was geographically variable, incomplete, and deployed both 'old' (fumigation) and 'new' (disinfecting) methods, public health officers were extremely confident by the early 1900s that they could eliminate most biological threats from the domestic environment. 29 Much of this confidence can be credited to bacteriological research in the 1890s on tubercular dust (that is, the dried sputum of the tuberculosis patient). This research was readily grafted onto existing anxieties about the accumulation of dust and dirt in the over-adorned Victorian domestic space. 30 It was a common trope of hygienists that dust was a 'carrier' or 'common conveyer' of tuberculosis and that the tuberculosis bacillus 'lingers long in the dust of rooms, inhabited by careless tuberculous subjects'. 31 Eradicating this 'matter out of place' became a hygienic duty and conditioning the behaviour of these reckless individuals became a central plank of effective anti-tuberculosis campaigns. 32 Despite clarification from the early-1880s that tuberculosis was indeed a communicable disease, a prolonged period of institutional sequestration was not a viable option for the vast majority of sufferers due to the potential loss of earnings and social functioning. Domestic space had to be the main therapeutic site. If home-based interventions were to have any impact, as many patients as possible needed to be reached and as many risks as possible nullified.
Under this rationale, the notification of tuberculosis was the only realistic way forward and a number of voluntary schemes were developed. 33 The first, in Brighton in 1899, was quickly copied elsewhere and by 1904 eleven other towns encouraged GPs to voluntarily inform the sanitary authority of tuberculosis cases coming under their care. 34 Public health officials and tuberculosis campaigners complained that voluntary notification only revealed a limited number of sufferers; it was thought that Sheffield's scheme routinely missed between 40% and 50% of all cases in the city. 35 Health administrators from Sheffield worked hard behind the scenes to alter the LGB's position. 36 In May 1903 negotiations took place with the LGB's medical officers (one of whom, Dr Theodore Thomson, just happened to be a former Medical Officer of Health for Sheffield 37 ) to draft a law that was acceptable to central government.

27 Annmarie Adams discusses the ways in which doctors discredited architects' contribution to the healthiness of homes. See her Architecture in the Family Way: Doctors, Houses, and Women 1870–1900, Montreal and Kingston, 1996, 36–72. Architects such as Robert Edis and E.W. Godwin, Adams argues, were relegated to providing advice on decoration and furniture, precisely the aspects of middle-class homes that Carpenter was worried about. Also on Edis, see Neiswander, The Cosmopolitan Interior (note 25), chapter 3.
28 An American perspective can be found in G. Wright, Moralism and the Model Home: Domestic Architecture and Cultural Conflict in Chicago 1873–1913, Chicago, 1980, especially chapter 4.
29 Municipal governments did carry out cleansing operations, particularly during epidemics, but these usually focused on public spaces such as dirty streets, alleys and
38 Sheffield's representatives argued that the public themselves did not see notification as problematic and witnesses attested that 'the dread of consumption amongst the poor' was 'very much greater than the dread of notification'. 39 Furthermore, interference with the future livelihood of the tuberculous patient was not an issue, because Sheffield's proposed law stated that 'no provisions contained in any general or local Act of Parliament relating to infectious disease shall apply to tuberculosis of the lung'. 40 This meant that patients could not be removed to hospital after notification (at least not without their consent). Sheffield did not operate a sanatorium of its own at this point. 41 Another aspect of Sheffield's law concerned the domesticated nature of the actions taken after notification. There were two elements to this: domiciliary education and disinfection. The city made it apparent that it possessed the wherewithal to carry out disinfection measures. The LGB's Theodore Thomson argued the public, as consumers of health services, had every right to benefit from disinfection if their property rates were being used to pay physicians to notify their illnesses. Thomson used the language of citizenship to maintain that compulsory tuberculosis notification should not be granted 'unless you have [disinfection] staff and appliances sufficient to deal with the problem': There is, in a case like this, as in very many other sanitary provisions, some interference with the liberty and possibly the comfort of the subject. I should not care to see a power given which conveyed that danger unless some adequate amount of benefit was to be obtained in return. 42 Sheffield had modern disinfecting apparatus, adequate staff and an efficient administration to deal with the potential workload that compulsory notification would generate. 
43 The city's high-pressure steam disinfector had begun operation in 1888 and the annual number of houses disinfected leapt from around 30 in the mid-1880s to an average of more than one thousand by the early 1890s. 44 As well as disinfecting homes and possessions, the second prong of Sheffield's policy of domestication was to give patients intensive education in the home. The position of Sheffield's Medical Officer of Health John Robertson was that public lectures and blanket leafleting of the community about the dangers of tuberculosis did not, and would not, have any lasting impact. 45 In Sheffield and other cities with voluntary tuberculosis notification, patients and their carers already received verbal and written advice, in the domestic setting, about isolation, ventilation, disinfection and the disposal of sputum. A compulsory scheme meant that interventions could be targeted directly at people with active disease. The House of Commons put compulsory notification of tuberculosis on trial in Sheffield for seven years. In 1910, the clauses were renewed for a further 10 years, as were those that had been given in the interim to the Lancashire towns of Bolton, Burnley and Oldham. 46 These local acts were granted on the understanding that patients would not be compulsorily sequestered in sanatoria, that people would be educated about prevention, and that adequate facilities existed to disinfect patients' homes. All the local laws were superseded by a national act in 1912. In this discourse about tuberculosis at the turn of the twentieth century, the spatiality of 'home' and 'domestic' was expressed as both site and scale. The home was naturalised as a location for intervention. Three points can be made about this. First, it is intriguing that this deliberative re-siting had to happen at all. 
Hospitals, workhouse infirmaries, lunatic asylums and, pertinently in this case, tuberculosis sanatoria, were increasingly seen as the most appropriate sites of health care for many kinds of patients. Of course, a lot of sick people were still cared for at home, but institutionalisation destabilised this tradition, necessitating a restatement about the value of domiciliary care. 47 Second, the domestication of state-sponsored disinfection activities provides a balance to the dominant view that responsibility for cleanliness in the home devolved solely onto housewives and mothers as popular knowledge about germs spread. 48 Whilst not disputing this parallel trend, it is also plausible to argue that at the same time the state actively sought heightened obligations in the domestic arena. Furthermore, it was recognised that certain aspects of domestic architecture, materiality, organisation and social relations presented a risk to the health of tuberculosis patients and their relatives. The potential return on the mitigation of these risks was very high, which meant that the home had to be made into a viable site of effective regulation. Domestic space needed to be thought of and reproduced as a scale through which interventions could be imagined and implemented.

Self-help and the diffusion of therapeutic space

The reluctance of Parliament to sanction the notification of tuberculosis partly helps to explain why, despite the mushrooming of private, public and voluntary sanatoria from the late nineteenth century, these institutions treated only a minority of people who had the disease in the early twentieth century. Worboys estimates there were 1500 sanatorium in-patient stays in 1900. By 1913 this had risen eightfold to 12,000 stays. This represented provision for less than 5% of all tuberculosis sufferers nationwide; or about 20% of patients with the active form of the disease.
49 The 'sanatorium benefit' included in the National Insurance Act of 1911 stimulated more provision. 50 From this date to 1920, the number of beds available for tuberculosis patients (excluding the Poor Law system) increased by about a factor of five, to almost 16,000. 51 The term 'sanatorium benefit' was somewhat misleading, because it also provided funds for care beyond the walls of sanatoria, including home treatment. Most notably, local authorities were required to provide a tuberculosis dispensary that served as a diagnostic out-patient centre with welfare functions. The dispensary's tuberculosis officer and a team of nurses worked on a case-by-case basis in conjunction with the patient's GP to determine the most suitable treatment. 52 Dispensary treatment rocketed after the National Insurance Act: there were 64 tuberculosis dispensaries in 1911; by 1920 there were 398. 53 Debate about the effectiveness of home treatment seems to have intensified in the aftermath of the 1911 act. Some commentators, such as Oscar Holden, a tuberculosis officer with experience in Birmingham and Southampton, warned that the continuing risk of infection by advanced cases living at home was a serious danger to the wider community. 54 Others were sceptical about the lack of moral support at home: anxious parents, relatives and friends made for bad advisors and it was impossible for them to be alert to all sorts of imprudent acts (kissing, visiting the pub, singing around the piano) that might endanger the patient and their family. 55 On the other hand, some were convinced that many cases of tuberculosis could be successfully managed and cured at home, particularly with the additional level of monitoring offered by the dispensary system. 56 Despite quite vehement support for both sides, there was no clear and convincing evidence about the benefits of either approach.
An uneasy consensus emerged that sanatorium treatment was most suitable for early cases that were actively infectious, whereas home treatment was more appropriate for cases that were not currently infective or infective advanced cases that were beyond salvation. 57 While precise details varied from place to place, a sanatorium 'cure' commonly included a combination of wholesome diet, exercise, graduated labour and plentiful fresh air. 58 The involvement of the adult patient in deciding the parameters of their sanatorium treatment was important. Sanatoria staff tailored treatments to provide an individualised regime that the patient could manage him- or herself. Despite continual dispute about the regimen's therapeutic effectiveness, contemporaries nevertheless believed that sanatoria and the open-air movement, which transferred most activities including sleeping, eating, schooling and rest from indoors to out, 59 had the potential to 'revolutionise social, municipal and national life'. 60 Michael Worboys has gone so far as to suggest that the sanatoria-based 'attempt at "cultural control" was of equal, if not greater, significance than the better-known Edwardian campaigns about motherhood and physical degeneration; indeed it may be artificial to separate them'. 61 Historian Flurin Condrau's view is that the sanatorium has been mistakenly written about as a 'façade behind which discipline, order and militaristic concepts of hygiene were used to control working-class sufferers', and that more subtle historical interpretations should include the integration of patient agency into the therapeutic narrative. 62 Whilst I broadly concur with Condrau (at the very least, the notion of 'control' might be more appropriately thought of as 'regulation'), the predominant historical concern with the discrete site of the sanatorium potentially underestimates the wide-ranging influence of its therapeutic reach.
A complementary focus on domestic preventive therapy (the design of which was drawn up in the sanatorium) clarifies how unsatisfactory the crude concept of 'control' really is. Furthermore, in the context of moral regulation, a rather complex set of practices of self-formation take shape in the domestic sphere that defy simple categorisation into projects articulated by the state and those performed by individuals on themselves. As Alan Hunt argues: Projects of moral regulation and ethical self-formation frequently come together in the complex and varied forms of interaction between governing others and governing the self ... a significant dimension of moral regulation projects is that they are projects directed at governing others while at the same time they result in self-governing effects ... We need have no fear that the term moral regulation refers only to projects to impose external moral codes, even though history is replete with just such endeavours; moral regulation is often directed at inducing projects of self-formation, manifest in ubiquitous incitements to 'self-control'. 63 Self-control through tuberculosis education was popularised from the 1880s. The inculcation of both preventive and curative behaviours in the domestic sphere was adopted as a credible tactic because so few non-pauper tuberculosis sufferers were ever likely to be admitted to local institutions. Publicly- and privately-funded leaflet campaigns for tuberculosis control emerged in Britain in the mid-1880s, initially in Lancashire (notably Oldham and Manchester). 64 These educational drives were conducted at the urban scale. Rhetorically, they identified transgressions in domestic and public space and codified appropriate forms of self-regulation: keep your home clean; ventilate it; disinfect furniture; maintain an unblocked chimney; destroy your sputum; practise open-air living; and stay away from close and crowded rooms, especially concert halls, theatres, and pubs.
This is familiar historical ground. Yet, as we have seen in the case of Sheffield, some public health administrators grumbled about this dissipated technique of didactic intervention. 65 Lifestyles and behaviours were simply too slow to change, if they did at all. While leaflets and pamphlets had drawbacks, advice targeted directly at domiciled tuberculosis patients was also limited in that it was impossible for health visitors to monitor patients closely or reinforce the lessons on anything like a regular basis. 66 Nor did the various local prevention schemes give detailed advice on how to approach the disease therapeutically. Some enterprising doctors identified this gap in the market and produced self-help manuals for the home-based tuberculosis patient. Medical self-help books have a long tradition. 67 Those for consumptives that appeared at the beginning of the twentieth century were little different to many others that had gone before them in that they provided a fall-back mechanism for poor patients who were either unable or unwilling to avail themselves of medical attendance. 68 However, tuberculosis self-help manuals were also directed at patients who had undergone a period of sanatorium treatment and, it was presumed, wanted it maintained in a domestic setting. Even then, a sanatorium stay was not necessarily a guarantee that the patient had been given the information needed to survive. In 1909, Sheffield's Medical Officer of Health, Henry Scurfield, bemoaned that 'in many sanatoria, when the patient leaves he has not been educated', even in some basic measures such as the disposal of sputum. 69 These manuals were the product of a particular historical moment when sanatorium treatment was gaining credence as the most appropriate mode of therapy; at the same time, facilities did not exist to provide that treatment for all patients, and the chronic nature of the disease made the argument for long-term hospitalisation a tough one to make politically. 
Essentially, the regime of home treatment replicated as far as possible that of the sanatorium. The key to success was a patient's self-knowledge of, and control over, his or her own body and its domestic surroundings. Bodily regulation was achieved by the constancy and repetition of physical conditioning: exercise, rest and diet were scrutinised as the patient undertook vigilant surveillance of her or his own temperature and weight. Patients were expected to eat, sleep and rest at prescribed times of the day and even to breathe in precise ways. 70 Body temperature synchronised the eat-sleep-rest-exercise rhythms of home treatment; body weight was used to gauge the patient's progress. Normal temperature and weight were achieved through strict control over dietary intake (the type and amount of food to be consumed), exercise (suitable forms of recreation were suggested), and the timing and extent of rest periods. Experts expressed differences of opinion about these aspects of the regime, differences that depended partly on the stage of the disease and partly on the individual patient; no two patients were the same. 71 More important was the implementation of regularity and strict adherence to the set programme. Henry Hyslop Thomson wrote in his manual that: 'The more closely one day conforms to another, and the more strictly the patient adheres to the routine of treatment, from day to day, and from week to week, the more beneficial and effective will the result be'. 72 As such, disagreements over detail were overridden by the commonalities and the shared acknowledgement that some activities had a deeper moral purpose.

62 Condrau, Urban tuberculosis patients and sanatorium treatment in the early twentieth century (note 58), 204–205. See also S. Craddock, Engendered/endangered:
Self-responsibility and strength of character were the key to the successful conclusion of a course of home treatment, be that complete cure or, more realistically, a prolonged period of capacity for work and the enjoyment of life. Noel Bardswell's guidance stressed that: Character or temperament, as in all other things, is a very large factor in success. The irresponsible, the undisciplined, and the despondent have nothing like the same chance of recovery as the cautious, the level-headed optimist, and the man of purpose. It is well for the patient to recognise frankly, from the first, that the fight for life and health is to be a hard one, with the odds against him ... the odds can be levelled up by learning the principles by which consumption may be cured, and resolutely adhering to them ... the happy-go-lucky consumptive, though perhaps the shorter-lived, is a happier man than the discontented hypochondriac. He is certainly the more contented companion. The mid-course between these two extremes should be aimed at. 73 These interlinking points about individual responsibility and determination constantly reiterated familiar tropes of self-help rhetoric. Thomson spoke of 'personal effort', 'intelligent effort', 'constant effort', 'unswerving allegiance to the rules' and 'personal endeavour'. 'In many cases', wrote Thomson, 'the consumptive holds in his hands the power of treating and curing himself ... [he] ... must intelligently order and supervise his whole method of living'. 74 Bardswell was sure that there would be 'little setbacks' and the regime involved 'much troublesome care and self sacrifice'. 75 But the most successful patients, argued Henry Warren Crowe, will be 'those who can form a resolution and steadfastly carry it out, and who are sufficiently master of their own surroundings'. 76 Nurturing such characteristics was intrinsically linked to the prevention of immoral behaviours.
Bardswell did not just emphasise the benefits of exercise when he advocated taking a walk around the local park each evening; he also believed that it staved off the temptation to stray into the pub and fritter money away on alcohol. 77 According to Thomson, intemperance was the main predisposing cause of tuberculosis not because it corroded the liver of the habitual drinker, but because of 'the conditions of life to which it gives rise. The children of the drunkard are badly clothed, indifferently fed and poorly housed, and readily fall prey to the ravages of tuberculosis'. 78 As well as developing moral rectitude through bodily discipline, the patient had to comprehend the domestic conditions that mediated the risk of infection and predisposed their 'deviation from the normal healthy standard'. 79 Echoing the belief of Sheffield's public health officials, Thomson argued that the campaigns against poor ventilation, uncleanliness and inappropriate furnishings had made but a small dent on popular consciousness: there was still much 'ignorance and mistaken views as to what constitutes a healthy home'. 80 Crowe urged consumptives to study the direction of draughts through the house so that they might know where to position their bodies. 81 No detail was 'too trivial' if the patient was to become 'familiar with everything relating to himself and his surroundings'. 82 The many constituents of domestic space (beds, furniture, windows, floor coverings and people) invariably were construed as problematic and risk-laden. Awareness of the minutiae of domestic life was therefore exalted and this also had an explicit moral dimension. Intimate interactions with other family members represented an obstacle to the patient's progress. Of course, one of the main benefits of sanatorium treatment in the eyes of its promoters was isolation from meddlesome family and friends, which proved extremely difficult to replicate in the crowded domestic sphere.
83 As with other infectious diseases, the patient was expected to take sole occupancy of a separate room in the house whenever possible. 84 Even so, complete segregation was practically impossible, particularly for working men and women and housewives who moved in and around the house out of necessity. 85 This feature of home treatment worried the writers of self-help manuals not only because the patient might continue to infect other family members, but also because the family could exert a corrupt influence on the stringent therapeutic regime. 86 Consequently, the routine of the patient was aimed at restricting the likelihood of contact. Consumptives with active disease 'should never be kissed on the lips'. 87 Retiring early to bed not only delivered much-needed rest, but it also 'removed the temptation to join the family circle in sitting up till a late hour'. 88 Clearly, these texts were not just about health but also about morality. The emphasis in the tuberculosis self-help manuals on discipline, self-control, responsibility and the suppression of intimacy reflects the important role of character and morality in the daily life of the domiciled consumptive patient. These ideals had their high-water mark in the late Victorian period and have been associated with a type of 'governmental self-formation' through which authorities and experts explicitly seek to shape individuals' conduct. 89 But the moment at which this tubercular self-help genre picks up was also the moment when the quest for 'character' shifted more towards subjectivity through self-discovery and the crafting of unique identity. Written from the standpoint of medical authority and expertise, yet emphasising individuality, self-quantification and bodily knowledge, these tuberculosis self-help manuals typify this transition.
Indeed, it does not strike me as egregious to transplant to the early twentieth century Alan Hunt's argument that moral regulation today 'is more likely to be found in the guise of self-help texts or the discourses of "addiction" and "recovery". Yet such projects remain attempts at moral regulation in that they are concerned to effect changes in the conduct and ethical subjectivity of individuals'. 90

A material culture of consumption

In her book Household Gods, Deborah Cohen describes the complex, shifting relationship between domestic space and possessions, and what the two together signified about notions of the self at the turn of the twentieth century: Possessions offered a lifeline for coming to terms with one's own identity in a society so much in flux. From its origins in the 1890s, the idea of 'personality' was fundamentally intertwined with the domestic interior. Character, an older conception of the self, connoted a moral state. Personality was about earned distinctiveness, performance, and display. No place was more of a stage for the turn-of-the-century British than their homes, even if no one else was watching. 91 More of the intricacies and subtleties of the transition from 'character' to 'personality' can be teased out by looking at the consumer culture surrounding tuberculosis. 92 It is worth stressing at this point that it was not as if the nurturing of moral character disappeared as a fundamental aspect of tuberculosis treatment. Well into the twentieth century, subsequent editions of the self-help manuals considered above (the third and, it transpired, final edition of Thomson's book was issued in 1928) were no less insistent on the importance of character than were the earlier versions. 93 It continued to be common to think of consumptives as having character flaws that made them vulnerable to behavioural lapses which the ownership of, or contact with, certain things might help prevent.
Nonetheless, it is difficult to resist the general thrust of Cohen's argument where the materiality of tuberculosis is concerned. One conduit through which household commodities were marketed and sold to tuberculosis sufferers was a section of the BJTB entitled 'Preparations and Appliances'. Medical and public health journals of the time carried direct advertising to supplement subscriptions and sustain print runs and circulation. Indeed, at the end of the nineteenth century, advertising constituted the largest revenue stream for prestigious medical journals. 94 As with other publications, however, the BJTB also provided editorial information about commercial products that it thought might be of interest to its readers. There is no direct evidence to substantiate this claim, but it seems likely that the 'Preparations and Appliances' section of the journal involved a form of indirect advertising known as 'puffing' (now commonly known as 'advertorials'); that is, the inclusion of an editorial item promoting the virtues of a product for which a payment was made by the advertiser. In the newspaper trade, a puff-piece commanded a higher rate than a regular advertisement because it carried the imprimatur of the newspaper itself. By this date, the layout of direct advertising in most newspapers and professional journals gave the impression of a multitude of commodities jostling for space and attention. Advertisements were individually bordered and designed in an attempt to distinguish themselves from all the others on a congested page. In contrast to these sorts of pages at the front and back of the BJTB, however, 'Preparations and Appliances' was set in the journal's preferred typeface and column format. It looked no different to the main articles in the journal. In the BJTB's running order, it frequently appeared between the book reviews and 'Notes', which was a segment that made readers aware of conferences they might attend, research they should keep up with, health resorts that could be visited, and tuberculosis institutions that needed patients (these topics help indicate the specialist readership of the journal, for which I can trace no firm evidence). 95 In other words, 'Preparations and Appliances' was a sequential component in the BJTB's procession of tuberculosis commodification; an example of the way in which the power of private capital became institutionally entrenched. Products appearing in 'Preparations and Appliances' were given a free promotional pass. Assertions claiming that a product 'only needs to be known to be extensively used', 96 or that it 'should gain an entrance to every home', 97 were just as common as the journal's assurances that it had tried and tested commodities extensively before recommending them.

85 According to Newsholme, this was especially true for poor patients: A. Newsholme, Four and a half years' experience of the voluntary notification of pulmonary tuberculosis, Journal of the Sanitary Institute 24 (1903) 253–260, 254–255.
86 Crowe, Consumption (note 70), 4.
87 Thomson, Consumption, its Prevention and Home Treatment (note 72), 23.
88 Thomson, Consumption, its Prevention and Home Treatment (note 72), 55.
89 Hunt, Governing Morals (note 5), 16–18, 157.
I am aware that this emphasis on auto-regulation and self-government downplays the fact that the main caregivers in the home were in most circumstances women. In fact, these self-help texts rarely mentioned the role of a caregiver and the tuberculosis sufferer was either masculinised or referred to in gender-neutral terms as the 'patient' or the 'consumptive'. It is perhaps instructive here (if not directly comparable) that much recent research on the geography of caregiving has highlighted the neoliberal devaluation and depoliticisation of female care work in the privacy of the home.
98 Though the journal claimed that it vetted these products, it is not known what filtering criteria (if any) were used to include or exclude particular commodities. The author(s) praising these products were not identified; such anonymity applied a veneer of objectivity. Yet manufacturers' artwork was reproduced faithfully and the promotional language was never critical. Readers were directed either to retail outlets where goods could be purchased, or provided with the name and address of the manufacturer to buy direct. The journal's British audience of tuberculosis activists (sanatorium administrators, nurses, dispensary staff, and public health personnel), family doctors and patients was undoubtedly familiar with some form of purchasing-at-distance, either in response to advertisements in newspapers and magazines or through mail-order catalogues that were touted by agents. This type of consumer behaviour was not confined to the middle class. Networks of mail-order agents were also used by working-class people, partly because their origins were in the culture of working-class savings clubs. 99 Mail-order shopping was boosted by the institution of the Royal Mail's parcel post in 1883. Just two years earlier the introduction of postal orders had given people without access to a cheque book a secure means of sending money. By World War One, £57 million worth of postal orders were issued and the General Post Office delivered more than 130 million packages each year. Much of this traffic was driven by the mail-order business. 100 Journal writers at the BJTB and elsewhere who generated copy about commodities inserted themselves into this burgeoning mail-order trade, more or less acting like those catalogue agents who were working steadily in communities across the country.
101 Though plying their powers of persuasion from a distance and in print, these publicists compiled and showcased inventories of potentially useful products and cajoled readers into contacting a manufacturer or retailer for further details, if not enticing them into an impulse purchase. The motors of this consumer culture were rising incomes, the expansion of the middle classes (who, by 1901, represented 25% of the national population) and the ready availability of products through mass manufacture. 102 The purchasing power of both the working and middle classes increased, but the latter spent a smaller proportion of their incomes on housing and necessities such as food and heat. When the middle class's disposable income was expended on filling homes with possessions, the explosion of acquisitive behaviour abraded Victorian notions of religious restraint, particularly given the indelible influence of domestic space over individual character. One solution to this dilemma, argue Cohen and other historians of domestic material culture, was to bestow possessions with moral qualities. Household goods conferred domestic propriety and decency. Appropriate, well-designed furnishings raised the moral tone, whereas inappropriate fixtures and fittings indicated deceit or ugliness that should not be tolerated: 'By redefining consumption as a moral act, and the home as a foretaste of the heaven to come, the British middle classes sought to square material abundance with spiritual good'. 103 The right sort of mass-produced possessions and their appropriate arrangement in domestic space denoted morality, spirituality, distinctiveness and personality. I do not claim that the BJTB's puff pieces tell us very much, if anything at all, about the motives and actions of tuberculous consumers.
Rather, I want to suggest that 'Preparations and Appliances' was an interstice where a common tubercular 'language' of material consumption was fashioned, particularly for the middle-class patient who was more able to afford most of the goods on offer. The journal's readers were educated on how to visualise the domiciled tuberculosis patient. The things on display helped solidify the idea of the home-based patient in their minds. This language could be understood by sanatorium officials, public health activists, family doctors, patients and patient carers alike; not because it was specific or unique to the world of tuberculosis activism, but because it was ubiquitous. 104 The pages of BJTB revelled in a technology of possession that sought to integrate tuberculosis patients into a contemporary consumer culture that everyone could recognise (if not partake of equally). 105 A bewildering multitude of commodities were determined as 'requisites' of both the sanatoria and the 'hygienic' home. Some of these objects were explicitly medical, but their portability served to unify the regimen of these discrete spaces. The thermometer is a good example. The 'Presto Thermometer' purportedly overcame the deficiencies of most other thermometers, which could be slow acting, indistinctly marked and difficult to adjust, if not defective altogether. The Presto model had a scale marked for a range of 12 degrees (94° to 106°) instead of the customary 16 or 20 degrees; these more generously-spaced fractional divisions were easier for the patient to read. Markings above the 'normal' temperature were indicated in red. 106 Some items were specifically related to the practice of open-air living.

[Displaced footnote] 95 T. Nevett, Advertising and editorial integrity in the nineteenth century, in: M. Harris, A.J. Lee (Eds), The Press in English Society from the Seventeenth to Nineteenth Centuries,
Window tents made it possible to sleep close to an open window but protected the patient from the elements and the rest of the house from the bracing cold. Steel-framed awnings made from canvas were manufactured with the intention that the patient, lying in an extendable cot-like bed, would sleep with his or her head and shoulders completely outside the window frame. 107 Significantly, bed screens and shelters were shown at the first Ideal Home Exhibition in 1908. A screen protected the patient from the wind and direct sunlight, as well as providing privacy. These devices sought to simplify the mechanics of sanatorium therapy for the domesticated patient: they were easily legible, readily manoeuvrable, and had manageable dimensions. 108 One revealing aspect of the promotional images for many rest-related products is that patients tended to be reading (Fig. 1). Portrayal of an intellectually-stimulating behaviour shaped the common understanding of how a domiciled tuberculosis patient should look and what they should be doing. In this case, one possible message of this visual chain was that possession of a screen had a moral purpose because it allowed control of the immediate environment, which in turn facilitated character-building activities like reading. 109 Keeping the home free from dirt was hard when practicing the open-air life. Anything that helped prevent mud from crossing the threshold was a boon. The 'Major' boot cleaner used revolving cocoa-fibre brush mats, steel wire and an underfoot scraper to remove dirt. A hot air boot dryer delivered a regulated current of warmth up a tree-shaped funnel (Fig. 2). This was particularly helpful in the rainy British climate. Shoes dried more quickly (but not so quickly as to scorch and shrivel them), thus enabling frequent walks without the uncomfortable sogginess of damp leather.
110 Given the iconic status tuberculosis activists granted to dust, the BJTB observed with some relief that because of contraptions such as these, 'at last it would seem that homes may be kept free from dust all the year round'. 111 Other types of equipment explicitly inculcated hygienic norms. The 'Beb' Bath, for example, could be used in the bedroom or on the sanatorium ward (Fig. 3). Moreover, it was marketed as an affordable way of instilling bodily cleanliness, since it was on sale 'at the specially low price of 13s 6d for the working classes'. 112 How affordable these sorts of devices were is open to question. Thirteen shillings represented more than half of the average labourer's weekly earnings and a couple of shillings more than the average weekly wage for a working woman. 113 Portable disinfection spray pumps and cleaning equipment that were relatively cheap also delivered the 'gospel of germs' into the home. 114 Lightweight, with directional nozzles and long handles, these appliances could reach every surface and penetrate the darkest nooks and crannies. As mentioned earlier, rest was a vital (if contested) feature of the sanatoria regime. Many commodities invited the home-based tuberculosis patient to become involved in middle-class leisure pursuits that complemented long periods of respite. 115 Bed rests, foot rests and telescopic reading stands such as the ReferReader (Fig. 4) were presented as an entryway into the market of goods (some luxurious, others less so) that multiplied the contentment of modern life. 116 The Kumfee, which doubled up as a leg-rest and fire-screen, was 'an ingenious addition to the comforts of life which will be appreciated by invalids, luxury-lovers, and weary workers ... appeals to both healthy and sick, and it is certainly an economiser of energy and an increaser of comfort' (Fig. 5). 117 The DumbNurse provided 'the greatest comfort and convenience to patients, invalids, and, perhaps we might add, healthy luxury-lovers'.
118 For the bed-ridden, the DumbNurse was a table, a backrest, and a reading stand (Fig. 6). Similarly, the 'Axis' portable bed table could be used for personal hygiene rituals such as washing and shaving, the serving of meals and as a table for writing, work or playing cards. 119 This kind of multi-functionality was equated with sophistication which, in turn, indicated luxury. Designers of these products were responding to ideas about social relations as well as potential profitability in the market. Multiple designs of the same product were aimed at consumers who were stratified by sex, age, social class, and purchasing power. Product variation also served other aims. Not only did it anticipate an increase in sales, but it also spread the risk for a producer who was unsure about the market for a particular item. Variety promoted fashion by leaving older products behind. Such novelty was commensurate with the notion of consumer choice. The proliferation of options for what were essentially similar products was an important consideration for consumers who were keen to express their individuality through possessions. 120 The novelty of design also applied to product naming. Manufacturers used portmanteau words (the blending of two words to create a new one, such as ReferReader and DumbNurse) and new spellings of old words (such as 'Kumfee') to differentiate their merchandise in a crowded market and to create product excitement. Each new version of a product represented yet another breakthrough or step forward. At the same time, the constant repetition of novelty and variety in media advertisements granted such developments the trajectory of predictability; rendering progress inevitable was a vital mechanism of modernity. 121 So it was that every few months, readers of the BJTB could expect to marvel at yet another tranche of innovative products that would improve the lives of domesticated tuberculosis patients. 
The examples highlighted here are merely representative. Space precludes consideration of further products such as clothes, heating appliances, telephone mouthpieces, even plasticine ... the list is a very long one. Suffice to say, the BJTB lost no opportunity to enlighten its readers about the chance of aligning themselves with the materiality of modern living. Taken together, tapping the mass consumer market and succumbing to fetishised novelty heralded the expression of individuality that characterised the so-called 'quest for personality'. 122 The BJTB noted that these 'numerous new inventions greatly facilitate the rational management of the sick and assist in the protection of the sound'. 123 This was another way of saying that the putative presence of these commodities in domestic space had the capacity to soothe the sickly body, normalise tubercular life, and produce a watchful self-carer. Just like the self-help manuals written by Crowe, Thomson and Bardswell, the selection, arrangement and interaction with these objects conditioned subjective identity. 124 They offered nothing less than a therapeutic toolbox for a materialised technology of the self that was deeply inflected with notions about the inherent morality of things.

Conclusion

Towards the end of his Advice to Consumptives, Noel Bardswell encapsulated his recommendations with this little pearl of wisdom: 'The best way, in short, of escaping consumption, is to live as if trying to cure it'. 125 In this sense one can interpret the rescaling and re-siting of the consumptive patient as essential ingredients in the emergence of a 'preventive therapy' for tuberculosis. Appropriation of domestic space transformed the sanatorium's 'rules of health' into 'rules for living'. The very title of Crowe's manual, Consumption: Treatment at Home and Rules for Living, was unambiguous on this point.
126 The term 'preventive therapy' also captures, but somewhat masks, what these experts sought to convey in terms of moral behaviour and to whom they were directing their efforts. The suppression of intimacy and avoidance of public houses made practical sense because it reduced exposure to infection. 127 But hygienists consistently reiterated the deeper moral purpose of these several forms of abstinence that were the minimum preconditions for creating a domestic environment in which tuberculosis could not flourish. This message was buttressed by an openly acquisitive approach that denoted all sorts of worthiness; not least, many things promoted homely pastimes that distracted the patient from potentially immoral behaviours and towards wholesome interests. It is impossible to deny the contradictions of the message and the medium. In the best traditions of nineteenth-century public health reform, an obvious intended target of these policy interventions, language, and images was the reckless moral behaviour of those working-class sufferers who constituted the main body of domiciled patients. Yet some of the risks (over-furnished homes for example) and some of the opportunities for recovery (such as the accumulation of appropriate possessions) were articulated through Victorian middle-class mores. They were the product of middle-class ideas about the typical patient's family and the presumed centrality of domesticity. Furthermore, the readership of the texts is not entirely clear. On the one hand, the knowledge promulgated in a specialist serial such as the BJTB was more likely to be passed on by doctors and activists verbally than fall into the hands of patients and carers directly. On the other hand, Crowe's Consumption was affordably priced at 1s for the working-class patient. He wanted GPs to buy the book and then sell it on to their consumptive patients, thereby delivering the sanatorium regime into the home.
128 Of course, the purchase and reading of a tuberculosis self-help manual was itself a moral form of consumerism, no matter the class of the patient. Who actually read the books, and to what extent the advice contained in them was acted on, are elusive questions at the moment. Perhaps these paradoxes and ambiguities are best read as the inevitable result of moral regulation exercised in a market of consumer goods that was not itself regulated very much. Through the implementation of compulsory notification, disinfection, educational schemes, dispensary management and the home-based surveillance of patients and contacts, national and local government negotiated the legitimation and naturalisation of the home as both a site and a scale of intervention. In this respect, these strategies can be seen as classics of their type: government administrations seeking ways to influence the self-conduct of individuals at a distance. It is important to emphasise that the formal power structures of the state prepared the ground on which the domiciliary self-regulation of tuberculosis patients gained traction. This paper has shown that the production and reproduction of the domestic scale of moral regulation constituted a complex set of processes that were connected to wider social transformations in public health, consumer liberalism, subjectivity and governance of the self. 129 State activities emerged from existing infectious disease policies; medical experts and activists worked inside and outside of formal government in ways that happened to complement the aims of the state; and, not least, the market was a crucial arena in which moral persuasion and regulation were played out.
Using real-time online preprocessed mouse tracking for lower storage and transmission costs

Pageview is the most popular webpage analytic metric in all sectors including blogs, business, e-commerce, education, entertainment, research, social media, and technology. To perform deeper analysis, additional methods are required such as mouse tracking, which can help researchers understand online user behavior on a single webpage. However, the geometrical data generated by mouse tracking are extremely large, and qualify as big data. A single swipe on a webpage from left to right can generate a megabyte (MB) of data. Fortunately, the geometrical data of each x and y point of the mouse trail are not always needed. Sometimes, analysts only need the heat map of a certain area or perhaps just a summary of the number of activities that occurred on a webpage. Therefore, recording all geometrical data is sometimes unnecessary. This work introduces preprocessing during real-time and online mouse tracking sessions. The preprocessing that is introduced converts the geometrical data from each x and y point into a region-of-interest concentration; in other words, only the heat-map areas that the analyst is interested in. Ultimately, the approach used here is able to greatly reduce the storage and transmission cost of real-time online mouse tracking.

ecommerce, web design and evaluation, etc. Most websites today use traditional web analytics such as page views, hits, and top exit pages [2]. However, for interactional analysis, further metrics are required. For example, traditional web metrics cannot tell where a user is directing his or her attention or how much interaction has occurred on a web page. In other words, pageview can determine what, when, who, and (to a limited extent) where a user is viewing, but it cannot determine which part of the webpage (the more detailed "where") and how a user is viewing it [3].
The best method available today for measuring user attention is eye tracking [4]. This method tracks eye ball movements to determine where the user is gazing. The most fundamental aspects of eye movements are fixation and saccade, where fixation is the process of fixing the gaze to a certain point of interest (POI), and saccade is the process of moving the gaze to another POI [5]. This method has been implemented in many fields, mainly in computer science, engineering, education, medicine, and psychology. However, today, eye tracking requires expensive and specialized hardware that is not suitable for wide implementation [6]. Although eye tracking remains a tool for the laboratory, an alternative method has been invented: mouse tracking [7]. Instead of eye movements, mouse tracking tracks mouse movements and other helpful events. The fundamental strategy of mouse tracking is the recording of mouse clicks, mouse movements, and scrolls. Eye and mouse tracking have been implemented in the fields of education [8,9], reading patterns [10], search engines [11], visual navigation [12], and web evaluation and usability [13,14]. Mouse tracking can be treated either independently [15] or as a correlate of eye tracking [16]; in other words, as a replacement. The biggest problem with default mouse tracking (as well as eye tracking) is the huge volume of data generated, which can be categorized as big data [17,18]. This high volume is due to the use of geometrical data, where each event that occurs on each point of the webpage is recorded. If the distance between left and right is 1000 pixels, then a swipe from left to right will generate 1000 rows in a table. However, analysts may not need all of the mouse tracking data that is generated. Therefore, this paper proposes preprocessing the data based on the analyst's needs. The preprocessing in this case determines the region of interest; in other words, which area the tracking should capture, rather than capturing each point of interest.
Furthermore, the preprocessing is conducted not only online, but also during real-time mouse tracking sessions.

Implementations of eye and mouse tracking

The use of eye [19] and mouse [20] tracking began in the early 20th century, and since then, there have already been many laboratory experiments conducted using these technologies. Today, there are many attempted implementations of eye and mouse tracking, but it is unclear how widespread and long-running they are. For eye tracking, there is no chance of implementation outside the laboratory unless one of two requirements is met: (1) affordable and mainstream hardware [6] or (2) optimal usage of web cameras [21,22] on laptops and/or cameras on smartphones. By contrast, widespread implementation of mouse tracking is already possible because the required hardware is available by default in all computers and smartphones, but the problem is the generation of big data (the same is true of eye tracking as well). The following are selected attempts at implementing eye tracking:

• Adaptive E-Learning via the Eye Tracking (AdELE) framework, adaptive, integrated, and real-time eye tracking during e-learning processes [8,23].
• Eye tracking based emphatic software agent (ESA), eye tracking software that captures the state of awareness of the learners and responds accordingly [24].
• Enhanced exploitation of eyes for effective e-learning (e5Learning) [25].
• Eye tracking based adaptive and personalized e-Learning Systems (AeLS) [26].
• Eye tracking based programming tutoring system (Protus) [27].

The following are selected attempts at implementing mouse tracking:

• A mouse tracking web application developed by Zushi et al. [9] for their own specific learning management system (LMS).
• Moodle LMS mouse tracking plugin [28][29][30].
• Mouse tracking web browser plugin and client side programming script [31,32].
Some commercial and open source software programs are as follows:

• Open Gaze and Mouse Analyzer (OGAMA), an open-source software package designed to analyze eye and mouse movements in slideshow study designs [33]. [35].

Mouse tracking in web development

The core of mouse tracking in web development is the document object model (DOM), which is an application programming interface (API) for Hyper Text Markup Language (HTML) and Extensible Markup Language (XML). It defines the logical structure of documents and the way a document is accessed and manipulated. Given a simple HTML page with the code in Table 1, the DOM structure can be represented as in Fig. 1. With the Document Object Model, programmers can build documents, navigate their structure, and add, modify, or delete elements and content. Anything found in an HTML or XML document can be accessed, changed, deleted, or added using the Document Object Model, with a few exceptions. The DOM is designed to be used with any programming language. Currently, it provides language bindings for Java and ECMAScript (an industry-standard scripting language based on JavaScript and JScript) [36]. The implementation of mouse tracking is based on DOM events, specifically mouse, touch, and user interface (UI) events, which are actions that occur as a result of the user's mouse actions or as a result of state changes of the user interface or elements of a DOM tree [37]. Our previous work [31] uses jQuery to access the DOM API and receives information related to mouse, touch, and UI events. This information can be stored in default dynamic variables or in an ArrayBuffer for enhanced performance.
The list of events is as follows:

• Mousedown: when one of the mouse buttons is pressed (usually the left, middle, or right button)
• Mouseup: when a pressed mouse button is released
• Mousemove: when the mouse cursor moves
• Mouseleave: when the mouse leaves an element (we only indicate when temporarily leaving a webpage)
• Mouseenter: when the mouse enters an element (we only indicate when temporarily entering a webpage)
• Beforeunload: when the webpage is about to close
• Scroll: when the webpage scrolls
• Touchstart: when the screen of a device is touched
• Touchend: when a touch from touchstart is removed
• Touchmove: when a touch is moving
• Touchcancel: when a touch is interrupted
• Resize: when the webpage is zoomed in or out

[Displaced caption] Table 1. The html tag is the parent with head, body, and footer tags as the children. Head has a child tag title, body has a child tag p, and footer has a child tag p.

The information is then processed by adding important labels such as the date of the received information and the duration, calculated as the difference between the current and previous received events. Finally, the information is either stored locally or sent to a server using the hypertext transfer protocol (HTTP) POST method. Traditionally, the information is transmitted all at once at the end of the session, but in our study [31], we found that it is better to transmit it in real time without delay. The difference between offline, regular online, and real-time online mouse tracking is shown in Fig. 2.
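The labeling and transmission steps described above can be sketched as a small pure function. This is only a minimal illustration, not the authors' actual code: the field names, the helper names, and the `/track.php` endpoint mentioned in the comments are assumptions.

```javascript
// Sketch of the labeling step: each captured DOM event becomes a record
// carrying the event name, coordinates, a date label, and the duration
// since the previous event. All names and fields are illustrative.
function makeLabeler() {
  let previousTime = null;
  return function labelEvent(name, x, y, nowMs) {
    const duration = previousTime === null ? 0 : nowMs - previousTime;
    previousTime = nowMs;
    return {
      event: name,
      x: x,
      y: y,
      date: new Date(nowMs).toISOString(),
      duration: duration
    };
  };
}

// In real-time transmission each record would be posted immediately rather
// than batched at the end of the session, e.g. with jQuery:
//   $.post('/track.php', JSON.stringify(record));   // hypothetical endpoint
const label = makeLabeler();
const a = label('mousedown', 10, 20, 1000);
const b = label('mouseup', 10, 20, 1250);
console.log(a.duration, b.duration); // 0 250
```

Keeping the previous timestamp in a closure means the duration label can be computed on the client with no extra storage, in line with the paper's goal of shifting work off the server.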
University network and server administrators are hesitant to implement tracking technologies because not only do they generate massive amounts of data, but eye and mouse tracking are also not replacements for existing systems, rather additions to them. Huang et al. [11] performed a mouse tracking experiment on Bing's search engine and immediately reduced the sampling rate because the data were too large. Leiva and Huang [38] believed that a swipe could generate a megabyte of data, and the authors further investigated and proved that rumor to be true. While a half-year of Moodle log data with approximately 40 students is only approximately 300 kilobytes (kB) [39], mouse tracking data and other event data generated by approximately 22 students reach approximately 100 megabytes (MB) in only 2 h [31], and that figure will double if eye tracking is included. Imagine how much data would be produced by a university with a large number of students if mouse tracking were running on its website for years. According to an article by Adekitan et al. [40], Nigerian university Internet traffic can reach terabytes (TB) in a week and is regarded as big data. The authors' previous mouse tracking session [31] also reaches the same level of Internet traffic if over 100 students are present. Other than volume, mouse tracking meets the other criteria of the 5Vs of big data [17]: velocity, the amount of data generated, especially in real time, which is explained in further sections; veracity, meaning that data loss may often occur due to limited connectivity, which can lead to inconsistent data; variety, which is discussed in further sections and previous work [31]; and value, which is discussed in the next paragraph.

[Displaced caption] Fig. 2 Flow chart of traditional and current mouse tracking method [31]. The left flowchart is offline mouse tracking, the middle flowchart is regular online mouse tracking, and the right flowchart is real-time online mouse tracking.
It would be wise to start investing in eye and mouse tracking just as big companies today are investing in big data [41], as the data generated by eye and mouse tracking are valuable. By analyzing big data, interesting information can be derived that gives us the knowledge needed to make optimal decisions [42]. Just as companies study customers' data to find opportunities to increase their revenues [43], traders analyze historical trading data and current sentiments to find optimal positions [44], researchers study optimal prevention, diagnosis, and treatments in Medicare [45], and planners monitor smart cities [46], researchers can use eye and mouse tracking to identify online viewers' attention, behavior, their evaluation of web contents, etc.

Reducing eye and mouse tracking data

During high-intensity activities, a user may generate an average of 70 events per second [47], meaning that 70 rows per second will be generated in a table. The traditional way of reducing the size of these tracking data is by reducing the sampling rate [11]. Furthermore, the sampling rate should be adaptive and not static. In other words, snapshots should only be taken when an event occurs, such as when the mouse cursor moves or a click occurs, and snapshots should not be taken during idle sessions. Performing transmission in real time helps distribute the transmission burden across time, avoiding bottlenecks. In other words, the tracking data are immediately transmitted to the server at each event occurrence rather than transmitting the mouse tracking data all at once at the end of the session. Compression methods can also be utilized, as demonstrated in Leiva and Huang's work [38], but their transmission method is still likely not real-time and is suspected to transmit the compressed tracking data all at once at the end of each session. On the other hand, the preprocessing technique presented in this paper is designed to work in real time.
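The adaptive, event-driven sampling described above can be sketched as follows: a sample is recorded only when an event actually fires (so idle sessions generate nothing), and bursts are thinned to at most one sample per interval. The function and parameter names are illustrative assumptions, not the authors' implementation.

```javascript
// Sketch of adaptive sampling: keep a sample only when an event occurs,
// and at most one sample per minIntervalMs during bursts. Idle periods
// produce no rows at all. Names are illustrative.
function makeSampler(minIntervalMs) {
  let lastKept = -Infinity;
  const kept = [];
  return {
    offer: function (x, y, nowMs) {
      if (nowMs - lastKept >= minIntervalMs) {
        lastKept = nowMs;
        kept.push({ x: x, y: y, t: nowMs });
        return true;   // kept (would be transmitted in real time)
      }
      return false;    // dropped: too close to the previous sample
    },
    samples: kept
  };
}

const s = makeSampler(50);
s.offer(1, 1, 0);    // kept
s.offer(2, 2, 10);   // dropped (only 10 ms after the last kept sample)
s.offer(3, 3, 60);   // kept
console.log(s.samples.length); // 2
```

Unlike a fixed sampling clock, this scheme never records during idle time, which is exactly the adaptivity the text argues for.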
Not only does it reduce the data cost, but it also distributes the transmission burden across the time domain. The cause of the enormous data generation is the geometrical data, or the tracking of each location where the events occur, in other words the x and y coordinates. Tracking these coordinates provides rich data, but sometimes not all of those data are needed. For example, Rodrigues et al. [28] only analyzed the amount of key up, key down, mouse down, mouse up, mouse wheel, and mouse movement to measure students' stress, and Li et al. [48] only needed the time spent on each page. In such cases, the geometrical data can be omitted. At other times, geometrical data are needed; however, it is not each precise x and y point that is needed but rather each area of the page (multiple points) [49]. Preprocessing is common in any data analysis to derive useful data prior to transmission and storage of the collected data. However, the preprocessing presented in this work is performed on the client before transmission and storage to reduce the server's burden. Unlike typical preprocessing, which is performed to filter redundant data, the preprocessing in this work is specifically based on the demands of the administrators or analyzers; in this case, preprocessing omits the geometrical data or groups them to represent certain areas. This study is the completed version of one of the authors' previous works [50].

System overview

The overall system is the same as in the authors' previous work [31], with the concept discussed in the "Mouse tracking in web development" section. In this section, the implementation of the concept in a system is discussed. As shown in Fig. 3, mouse and other event tracking are performed on the client. The tracking codes can be injected internally, for example, as a browser plugin, or externally, for example, where the codes are retrieved alongside the webpage content [HTML and cascading style sheets (CSS)].
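The grouping of geometrical points into areas described above can be sketched as a grid-based heat-map accumulator: instead of storing every (x, y) point, each event only increments a counter for the region it falls into. The grid binning and all names here are an assumed illustration, not the paper's exact scheme.

```javascript
// Sketch of region-of-interest preprocessing: one counter per grid cell
// (heat-map area) instead of one row per (x, y) point. Illustrative only.
function makeHeatmap(pageWidth, pageHeight, cols, rows) {
  const counts = new Array(cols * rows).fill(0);
  return {
    record: function (x, y) {
      const c = Math.min(cols - 1, Math.floor((x / pageWidth) * cols));
      const r = Math.min(rows - 1, Math.floor((y / pageHeight) * rows));
      counts[r * cols + c] += 1;   // one integer per region, not per point
    },
    counts: counts
  };
}

// A 1000-pixel swipe collapses from ~1000 rows to a handful of counters.
const hm = makeHeatmap(1000, 800, 4, 2);
for (let x = 0; x < 1000; x++) hm.record(x, 100);
console.log(hm.counts); // [ 250, 250, 250, 250, 0, 0, 0, 0 ]
```

Only the fixed-size `counts` array needs to be transmitted and stored, which is the storage and transmission saving the paper is after.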
Then, the client sends the tracking data to the server to be stored.

Fig. 3 System overview of the mouse tracking data transaction [31]. The framework is divided into two sides: the client and the server, connected via the Internet. The webpage resides on the server and consists of HTML, CSS, and JavaScript. The mouse tracking code, which handles events and posts their data to the server, is inserted in the JavaScript section. When the client accesses the webpage, it views the content consisting of HTML and CSS, while the mouse tracking code within the JavaScript runs in the background. The mouse tracking data are sent to the server and processed using a server-side programming language, in this case PHP. Finally, the data are written to the database, in this case in SQL.

The code itself for this work is written in jQuery, a simpler coding format of JavaScript for DOM manipulation. The external code can be written as a plugin if desired; for example, the authors wrote a Moodle plugin. The server side can be written in any programming language, but in this work PHP was used, and the database was MySQL. The code is available on GitHub [32]. Each web framework may develop its own bindings to access the DOM API. However, the most fundamental implementation is still to inject the mouse tracking code into the script section, no matter which web framework is used; this is the default option if the web framework has not developed its own bindings. Below is a list of a few web frameworks:

• NodeJS is browser JavaScript turned into a server-side programming language. NodeJS was written in response to criticism in 2009 of how the Apache HTTP server handled huge numbers of concurrent users, sequential programming, and blocking functions [51]; NodeJS is asynchronous and is designed for building scalable network applications.
Additionally, its runtime is built on Chrome's V8 engine, which implements C++ features such as hidden classes and inline caching to make JavaScript run much faster [52]. The most popular web framework for NodeJS is Express, "a fast, unopinionated, minimalist web framework for Node.js" [53]. Mouse tracking code can be implemented either by using a module available on the Node Package Manager (NPM), by using TypeScript, or by the default option. TypeScript is a typed superset of JavaScript developed by Microsoft that compiles to plain JavaScript. Its advantages for developers are the definition of interfaces between software components, and interactive static checking and code refactoring during development [54]. The default option is to call the scripts in the webpage layout, which is usually written in Jade or Pug.

• Django is a web framework written in Python that uses the model-view-template (MVT) pattern [55]. Just as almost every module is available for Python, Django prides itself on being a batteries-included framework, meaning that it comes with many modules; unlike with other frameworks, it is not necessary for a developer to write a module from scratch. Although it is powerful for building huge web applications, the same complexity remains when building small applications. For mouse tracking, Python modules could be used, but it is not yet known whether they can interact with the DOM elements in the webpage; most documentation suggests using vanilla JavaScript in Django.

• Rails is a model-view-controller (MVC) web framework written in Ruby. Its philosophies are "convention over configuration" and "don't repeat yourself". In the 2000s it introduced seamless database table creation, migrations, and scaffolding of views to enable rapid application development; other web frameworks have even taken ideas from Rails.
There are a few options other than plain JavaScript for implementing mouse tracking code, namely CoffeeScript (a simpler JavaScript coding style) and jQuery, which can be installed from Ruby's package manager, gem. Rails was one of the first frameworks to introduce unobtrusive JavaScript, where scripts should not be mixed into the HTML file [56].

• Laravel is also an MVC web framework, but written in PHP and based on Symfony. Laravel values elegance, simplicity, and readability. The mouse tracking code can be written in JavaScript and placed in the asset directory; Laravel Mix is the tool for compiling those assets, but the default method is also available [57].

• Spring is an application framework and inversion-of-control container for the Java platform. The framework's core features can be used by any Java application, but there are extensions for building web applications on top of the Java Enterprise Edition platform. Java is one of the earliest programming languages used to build applications and is still popular today. Java has its own bindings to connect to the DOM API.

• ReactJS is a JavaScript library for building user interfaces, maintained by Facebook and a community. Unlike the previous back-end web frameworks, ReactJS is a front-end library. ReactJS has its own mouse-event system, which is attached to each UI component [58].

• Angular is a complete rewrite in TypeScript, by the same team that built AngularJS. It is a web framework maintained mainly by Google along with a community of individuals and corporations, addressing many of the challenges encountered in developing single-page applications. It is one of the most popular frameworks for building web applications on mobile. Mouse events can be added to components or templates [59].

Three techniques of mouse tracking

For convenience, the techniques of mouse tracking are divided into three types, as shown in Table 2: default mouse tracking, whole page tracking, and ROI tracking.
Default mouse tracking precisely records the geometrical data of each event occurrence, i.e., the horizontal x and vertical y coordinates of left clicks, right clicks, middle clicks, mouse movements, scrolls, zooms, and, if desired, keyboard presses. The duration between events is also measured. Whole page tracking omits the geometrical data and summarizes the numbers of left clicks, right clicks, middle clicks, mouse movements, scrolls, zooms, and, if desired, keyboard presses that occurred on the webpage. In other words, the amount of activity is measured but not where or when it occurs, and only the total time that the user spends on a webpage is recorded. The most complicated technique is ROI tracking, which is a gray area between default mouse tracking and whole page tracking. ROI tracking defines the areas of a webpage to be tracked, recording, for example, how many left clicks, right clicks, middle clicks, mouse movements, scrolls, zooms, and keyboard presses occurred, and for how long, on the header, menu, content, footer, etc. This method is ideal because it meets the analyst's requirements while cutting unnecessary resource costs, but its drawback is the heavy labor required to manually define the areas of each webpage. Automatic area definition is possible to a certain degree. One way is to attach a "mouseenter" DOM event listener to every element and use the element's "offset" DOM property to obtain its position. The offset returns the element's left and top distances from the outermost corner of the webpage, and by using the "width" and "height" DOM properties to calculate the element's size, the bottom and right boundaries can be found as well. The limitation, however, is that this cannot perform smart labelling: it can only extract the attributes, texts, and values of the element. An illustration comparing the three types of mouse tracking is shown in Fig. 4. The flowchart for each implementation of real-time and online mouse tracking is shown in Fig. 5.
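The client-side preprocessing behind the three techniques, including the offset-based area resolution just described, can be sketched as follows. The area rectangles and the `resolveArea` helper are illustrative; in the browser, the rectangles would come from the elements' offset, width, and height DOM properties:

```javascript
// Sketch of per-mode preprocessing. 'default' keeps the precise x, y;
// 'wholePage' strips the geometrical data; 'roi' replaces the coordinates
// with an area label resolved from element rectangles (left/top from the
// DOM "offset", right/bottom derived from "width" and "height").
// Illustrative only, not the paper's exact code.
function resolveArea(rects, x, y) {
  for (const r of rects) {
    if (x >= r.left && x < r.left + r.width &&
        y >= r.top  && y < r.top  + r.height) return r.label;
  }
  return 'unknown';
}

function preprocess(event, mode, rects) {
  if (mode === 'default') return event;        // precise geometrical data
  const { x, y, ...rest } = event;             // omit x and y
  if (mode === 'wholePage') return rest;       // activity counts only
  return { ...rest, area: resolveArea(rects, x, y) }; // 'roi'
}

// Example with two hypothetical areas of a page:
const rects = [
  { label: 'header',  left: 0, top: 0,   width: 800, height: 100 },
  { label: 'content', left: 0, top: 100, width: 800, height: 500 }
];
const click = { type: 'click', x: 40, y: 150 };
// preprocess(click, 'roi', rects).area === 'content'
// preprocess(click, 'wholePage', rects) carries no x or y fields
```

The same raw event thus yields three progressively cheaper representations, matching the analyst's demand for geometry, areas, or mere activity counts.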
For default mouse tracking, the information on the event is transmitted to the database each time an event occurs. For example, when a click occurs, the client immediately transmits the information on where and when it occurred. For whole page tracking, the information is summarized as the number of events that occurred, and it is transmitted to the server when the client closes the webpage; the size of the transmitted data is only slightly larger than that of a single click's data in default mouse tracking. Last, for ROI tracking, the information on a webpage area is summarized and transmitted after the mouse cursor leaves the area, and the process repeats on each movement between areas until the webpage is closed.

Fig. 4 Whole page vs. region of interest vs. default mouse tracking. The left scroll illustrates whole page tracking, which summarizes the number of events occurring on the whole page; the middle scroll illustrates ROI tracking, which summarizes the number of events occurring in defined areas; and the right scroll illustrates default mouse tracking, which records every event and the precise point where it occurs, forming a trajectory.

Simulation

The three mouse tracking methods were tested on the client and the server. Since the authors lacked subjects for a live implementation, a simulation based on previously collected mouse tracking data was conducted on the server. The data contain mouse tracking records from two quiz sessions in Moodle, conducted on 3 January 2019 between approximately 12:00 and 14:30 Japan Standard Time. Each of the two sessions lasted approximately an hour and included 22 students (44 students in total) from the School of Engineering and Applied Sciences, National University of Mongolia, accessing the Moodle server at the Human Interface and Cyber Communication Laboratory, Kumamoto University.
The data were preprocessed to exclude non-students and webpages other than the quiz pages. In other words, the simulation is purely a mouse tracking data transmission, excluding most of the other processes such as accessing the server and navigating the whole Moodle site; this approach therefore shows lower resource consumption than the previous work [31]. The setup can be seen in Fig. 6, where a laptop functioning as a client is connected peer-to-peer to a personal computer functioning as a server. The mouse tracking data were converted into page tracking and ROI tracking data based on Table 2. Three sessions were conducted: in the first, the mouse tracking data were sent to the server; in the second, the page tracking data; and in the third, the ROI tracking data. Since the mouse tracking data contain the time intervals between the sending of each event, it is possible to replay the scenario almost exactly. During these sessions, the data rate was observed, and the central processing unit (CPU) and random access memory (RAM) usage were measured on the server. As Fig. 6 shows, one laptop serves as a client sending the data to the server, a personal computer. The client is an MSI laptop with an i7-7820HK 2.9 gigahertz (GHz) x8 CPU and 32 gigabytes (GB) of RAM, while the server is a personal computer with an i7-6850HK 3.6 GHz x12 CPU and 32 GB of RAM; the peer-to-peer connection is a 10 megabyte per second (MBps) network. For the client testing, the authors replayed the quiz session recorded by the mouse-recording software GhostMouse in order to reproduce the exact mouse events for the three mouse tracking types and for different browsers. The testing times were short, around a minute, due to the limited profiling time of the browsers.
The performance, measured only as the total JavaScript running time, was compared across four different browsers.

Default mouse tracking

It is well known that the advantage of default mouse tracking is the detailed and precise data it generates. An example is shown in Table 3. The exact x and y points of the locations of event occurrences, such as left clicks, right clicks, middle clicks, mouse movements, scrolls, zooms, and keyboard presses, are recorded, including when and for how long each event occurs. Those geometrical data (x, y) make it possible to reproduce the trajectory shown in Fig. 7, and adding the time information enables the trajectory's replay.

Fig. 6 Peer-to-peer simulation setup. The client is a laptop connected via a direct channel to the server. The mouse tracking data are sent in timely order from client to server, based on the real session.

Table 3 Example default mouse tracking data as stored in the database table. Columns 1-3 are labels added using JavaScript, column 4 is the duration calculated from the difference between dates, and the remaining columns are data retrieved from the DOM API.

The rumored disadvantage is the huge transmission and storage cost, and this seems to be true judging from Figs. 8, 9 and 10. For the 22 students in each session, the transmission resource cost statistics are shown in Table 4. The average data rate was 28 kilobytes per second (kBps) and peaked at 228 kBps. For the two sessions totalling 44 students, the data size was approximately 100 MB, and Table 3 has 286511 rows. The CPU usage was highly consumptive as well, while the RAM usage was less so. Even worse, mouse tracking is not a replacement for the existing logging method but rather an addition; in other words, it is expected to add a further burden to the existing system if implemented.
These data were generated from a 2.5-h mouse tracking session; imagine, then, how many resources mouse tracking would consume if it were run on a university scale with thousands of students 24 h daily. On the client side, this method also shows the highest total JavaScript running time among the three methods (Fig. 11), suspected to be due to the large number of HTTP POST requests to the server.

Webpage summarized mouse tracking

By omitting the geometrical data (x, y) and summarizing the numbers of events that occurred, the data become as small as possible, as shown in Figs. 8, 9 and 10 (although they could be reduced slightly further by compression and removal of unnecessary characters and variables). The table is reduced to one row per webpage visit; in this case, Table 3 with 286511 rows was reduced to 26 rows, as shown in Table 5. As shown in Table 4, the data size was reduced from 100 MB to 16 kB, and the average data rate from 28 kBps to 10 Bps. Although there is still some RAM usage, the CPU usage is barely visible. Among the three mouse tracking methods discussed in this work, this technique is the most advantageous in terms of resource cost. On the client side, it also shows the lowest total JavaScript running time among the three methods (Fig. 11), suspected to be due to the small number of HTTP POST requests to the server. The disadvantage, however, is that it provides the poorest information of the three techniques, making it impossible to create any visualization such as that in Fig. 7. The information tells only how many events (left clicks, right clicks, middle clicks, mouse movements, scrolls, zooms, and keyboard presses) occurred and how long the user spent on the webpage. Nevertheless, this is still richer information than traditional logs provide, as shown in Table 5.
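The reduction from one row per event to one row per webpage visit can be sketched as a simple fold over the raw event rows. Field names are illustrative, not the exact schema of Table 5:

```javascript
// Sketch of whole page summarization: many raw event rows collapse into a
// single row of per-type counts plus the total time spent on the page.
// Field names are illustrative, not the exact schema of Table 5.
function summarizePage(events) {
  const summary = { clicks: 0, moves: 0, scrolls: 0, keys: 0, duration: 0 };
  for (const e of events) {
    if (e.type === 'click') summary.clicks++;
    else if (e.type === 'mousemove') summary.moves++;
    else if (e.type === 'scroll') summary.scrolls++;
    else if (e.type === 'keydown') summary.keys++;
    summary.duration += e.duration; // accumulate total time on the page
  }
  return summary; // one row, transmitted when the user leaves the page
}

// Example: four raw rows become one summary row.
const raw = [
  { type: 'mousemove', duration: 120 },
  { type: 'mousemove', duration: 80 },
  { type: 'click',     duration: 300 },
  { type: 'scroll',    duration: 50 }
];
// summarizePage(raw) → { clicks: 1, moves: 2, scrolls: 1, keys: 0, duration: 550 }
```

This is the mechanism behind the reduction reported above: however many events a visit generates, the transmitted and stored result stays one fixed-size row per page.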
Region of interest mouse tracking

This technique is the best compromise of the three, as the captured information follows the analyst's preferences while the resource costs are lower than those of default mouse tracking (Figs. 8, 11). The analysts choose the areas to be analyzed; in this case, the authors defined the following areas for the quiz session: header, title, menu, footer, and each question section. As Fig. 7 shows, it is possible to create heatmaps of high-activity areas; precise mouse trajectories as in default mouse tracking are not possible, but the amount of movement between areas can still be captured. Durations are likewise recorded per area. The data size is 5.4 MB with 19062 rows, as shown in Table 6. As shown in Table 4, the average data rate is 2.28 kBps, and the average CPU and RAM usages are 0.87% and 1.85 MB, lower than in default mouse tracking. By the nature of this method's algorithm, the resource cost should scale with the number of defined areas: the more areas, the larger the cost (note that default mouse tracking costs the most because the webpage is effectively divided into the smallest possible areas, the individual x and y points). The current disadvantage is the lack of smart area definition and labelling: area definition is restricted to parent elements, and labelling is limited to the information available in the elements' attributes, texts, and values. To perform custom area definition and labelling, the analysts must define them manually, which requires considerable time and labor.

Table 6 Example ROI tracking data as stored in the database table. Columns 1-3 are labels added using JavaScript, column 4 is the duration calculated from the difference between dates, column 5 is the area manually labelled by the administrator, and the remaining columns are totals of data retrieved from the DOM API.

Conclusion and future work

Preprocessing mouse tracking data during real-time and online sessions helps reduce the storage and transmission costs and, unexpectedly, the total JavaScript running time on the client's browser as well. The techniques presented in this work are whole page tracking and ROI tracking. Although the amount of data reduction is highly case-dependent, some rules are fixed: whole page tracking reduces the mouse tracking data to one table row per webpage visit, and ROI tracking reduces them to one table row per area visit. Selecting the right technique can reduce the storage and transmission costs while still capturing the necessary data. Although the concept works well, there are still problems in execution. Whole page tracking transmits the data only when the user leaves the page, and the problem lies with the browser: there is currently no way to make the user wait until the transmission process finishes, so there will certainly be cases where data are not fully transmitted. The problem for ROI tracking is that it cannot perform smart area definition and labelling, which are normally done by humans; one future solution is to develop an artificial intelligence for this task.
Ag/Ag2O as a Co-Catalyst in TiO2 Photocatalysis: Effect of the Co-Catalyst/Photocatalyst Mass Ratio

Mixtures and composites of Ag/Ag2O and TiO2 (P25) with varying mass ratios of Ag/Ag2O were prepared, employing two methods. Mechanical mixtures (TM) were obtained by the sonication of a suspension containing TiO2 and Ag/Ag2O. Composites (TC) were prepared by a precipitation method employing TiO2 and AgNO3. Powder X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) confirmed the presence of Ag(0) and Ag2O. The activity of the materials was determined employing methylene blue (MB) as the probe compound. Bleaching of MB was observed in the presence of all materials. The bleaching rate was found to increase with increasing amounts of TiO2 under UV/vis light. In contrast, the MB bleaching rate decreased with increasing TiO2 content upon visible light illumination. XRD and XPS data indicate that Ag2O acts as an electron acceptor in the light-induced reaction of MB and is transformed by reduction of Ag+, yielding Ag(0). As a second light-induced reaction, the evolution of molecular hydrogen from aqueous methanol was investigated. Significant H2 evolution rates were only determined in the presence of materials containing more than 50 mass% of TiO2. The experimental results suggest that Ag/Ag2O is not stable under the experimental conditions. Therefore, addressing Ag/Ag2O as a (photo)catalytically active material does not seem appropriate.

Introduction

Environmental problems related to water and air contamination, due to the increasing world population and the resulting tremendous growth of industry and fuel combustion, have become a major concern of advanced science. In order to deal with this important problem, photocatalytic processes employing semiconductors are the most conventional approaches for water and air purification, along with alternative energy storage (e.g., H2) [1-4].
To date, different semiconductor nanoparticles such as TiO2, ZnO, Fe2O3, niobates, tantalates, and metal sulfides, and their underlying working mechanisms, have been investigated with the aim of increasing their photocatalytic activity. It is well known that, besides the ability to decontaminate polluted air and water, a photocatalyst should meet certain requirements such as cost efficiency, stability, non-toxicity, and broad-range response towards incident light. TiO2 is reported as the most durable photocatalyst, meeting all the above-mentioned requirements apart from a broad-range response to incident solar light, due to its wide band gap energy (3.2 eV for anatase, 3.0 eV for rutile); the corresponding UV light accounts for no more than 5% of the entire solar spectrum [1]. This lack of photocatalytic activity under visible light illumination allows the use of TiO2 as a UV blocker in sunscreens [5]. Tremendous interest in the modification of titanium dioxide with different metals and oxides, to enable the absorption of lower-energy light and increase stability, has been rising over the last 20 years. Nonetheless, the range of visible-light photocatalysts is still restricted. Thus, it is essential to discover new and efficient photocatalytic materials that are sensitive to visible light. Ag2O nanoparticles have been broadly utilized in various manufacturing areas as stabilizers, cleaning agents, electrode supplies, dyes, antioxidants, and catalysts for alkane and olefin activation [6,7]. Several papers have been published reporting the photocatalytic activity of Ag2O, Ag/Ag2O, Ag2O/semiconductor, and Ag/Ag2O/semiconductor composites, and some reviews are available. Ag2O is reported to be a visible light active photocatalyst. However, due to its photosensitive and labile properties under incident light illumination, Ag2O is more often employed as a co-catalyst than alone as a main photocatalyst [8]. Wang et al.
investigated the photocatalytic performance of Ag2O in the decolorization of methyl orange, rhodamine B, and phenol solution under fluorescent light irradiation, and concluded that the stability and high photocatalytic activity of Ag2O are maintained by the partial formation of metallic Ag on its surface during the photodecomposition of organic compounds [9]. Jiang et al. also reported the decomposition of methyl orange under visible light, ultraviolet light, near-infrared (NIR) light, and sunlight irradiation, using silver oxide nanoparticle aggregates. The superb photo-oxidation performance of Ag2O remains almost constant after repeated exposure to light, due to its narrow band gap, high surface area, and the numerous crystal boundaries supplied by Ag2O quantum dots [13]. Several authors have claimed that an Ag/Ag2O structure exhibits 'self-stability' [9,10] during a photocatalytic run, due to rapid electron transfer from the excited Ag2O to Ag(0) [12,20]. Visible light active nanocomposites of Ag/Ag2O/TiO2 have been synthesized using different methods, such as a microwave-assisted method [28], a low-temperature hydrothermal method [32], a one-step solution reduction process in the presence of potassium borohydride [22], a simple pH-mediated precipitation [23], and a sol-gel method [27]. Moreover, Su et al. developed a novel multilayer photocatalytic membrane, consisting of an Ag2O/TiO2 layer stacked on a chitosan sub-layer immobilized onto polypropylene [31]. Light-induced hydrogen production via the photoreforming of aqueous glycerol has been scrutinized, employing Ag2O/TiO2 catalysts prepared by a sol-gel method with varying contents of Ag2O (0.72-6.75 wt%) [30]. Hao et al. have reported that TiO2/Ag2O nanowire arrays forming a p-n heterojunction are applicable for enhanced photo-electrochemical water splitting [33]. Hu et al.
reported the photocatalytic degradation of tetracycline under UV, visible, NIR, and simulated solar light irradiation with a Z-scheme between visible/NIR light activated Ag2O and UV light activated TiO2, using reduced graphene oxide as the electron mediator. They also investigated the stability of Ag2O, Ag2O/TiO2, and Ag2O/TiO2 in combination with reduced graphene oxide as an electron mediator. A large amount of Ag(0) had formed in Ag2O and Ag2O/TiO2 after four cycles of tetracycline photodegradation under UV, visible, and NIR illumination [23]. Ren et al. also observed the light-induced reduction of Ag2O during dye degradation in Ag2O/TiO2 suspensions. The authors suggested that the formation of Ag(0) contributed to the high stability of their photocatalyst [29]. The stabilization of Ag2O/TiO2 photocatalysts by Ag(0) formed at an initial stage of an experimental run had already been proposed earlier [11]. The photocatalytic stability of Ag-bridged Ag2O nanowire networks/TiO2 nanotubes, which were fabricated by a simple electrochemical method, revealed only an insignificant loss in performance with respect to the photocatalytic degradation of the dye acid orange 7 under simulated solar light [15]. On the other hand, Kaur et al. reported a decrease of the degradation efficiency from 81% to 54% after the third experimental run employing Ag2O/TiO2 as the photocatalyst and the drug levofloxacin as the probe compound [24]. Very recently, Mandari et al.
synthesized plasmonic Ag2O/TiO2 photocatalysts, which could absorb visible light through the resonant oscillation of the conduction band electrons under visible light illumination. With this method, they were able to improve the efficiency of TiO2 as a photocatalyst for hydrogen production by H2O splitting under natural solar light. The authors observed the formation of Ag(0) by light-induced reduction of Ag2O [26]. Light-induced reduction of Ag(I) to Ag(0) has also been reported for an Ag(0)/Ag(I) co-doped TiO2 photocatalyst [34]. The preceding discussion of published experimental results provoked doubt about the stability of Ag2O-containing photocatalysts under UV/vis illumination. Therefore, visible light harvesting Ag/Ag2O ⁄⁄ TiO2 photocatalysts for water treatment and photocatalytic hydrogen generation were synthesized. To the best of our knowledge, physical Ag/Ag2O ⁄⁄ TiO2 mixtures prepared by the sonication of a suspension containing TiO2 (P25) and self-prepared Ag/Ag2O were investigated for the first time. Ag/Ag2O ⁄⁄ TiO2 composites, prepared in situ by a simple precipitation method employing TiO2 and AgNO3, were also prepared in order to evaluate the effect of the synthesis method on the photocatalytic activity. Additionally, the effect of the mass ratio of Ag/Ag2O was studied. The as-prepared mixtures and composites showed improved visible light activity for methylene blue (MB) bleaching compared to blank TiO2, and high photocatalytic H2 production from a methanol-water mixture under artificial solar light illumination.
The TiO 2 containing mixtures (TM) and composites (TC) exhibit diffraction peaks at 25 , which are attributed to the tetragonal phase of anatase TiO 2 , whereas one peak at 27.8 • corresponds to the tetragonal phase of rutile TiO 2 .Figure 1a presents the patterns of the TM mixtures, where two phases of titania were present.The two strongest peaks of Ag 2 O become more prominent, with the Ag 2 O mass ratio increasing from TM 14 to TM 41.The small diffraction peaks situated at 44.4 • , 64.2 • , and 77.5 • are indexed to the (200), (200), and (311) plane of metallic Ag(0) (JCPDS 04-0783) [20].The strongest peak of Ag(111) might likely be masked by the TiO 2 peak at 2θ = 38 • .The diffraction peaks in the TM mixture patterns correspond to the cubic structure of Ag 2 O and the cubic structure of Ag [35,36].Figure 1b illustrates the XRD patterns of the TC composites.As the figure shows, no significant difference between the two preparations methods was observed, except that in TiO 2 -rich composites TC 11 and TC 14 no Ag 2 O diffraction peaks were observed, suggesting a complete reduction of Ag 2 O to metallic silver Ag(0) during the preparation of these composites.The XRD pattern of TiO 2 is presented for comparison.The diffractogram clearly indicates the presence of two TiO 2 phases with predominance of the anatase phase (JCPDS 21-1272).In order to investigate the oxidation states of the silver species present on the materials, X-ray photoelectron spectroscopy (XPS) was performed.The results of the XPS analysis for all samples are shown in Figure S3.The deconvolution of the high-resolution spectra for Ag 3d reveals that silver was present in more than one oxidation state in all samples.The binding energies of Ag 3d at 367.5 and 373.5 eV are assigned to the Ag 3d5/2 and Ag 3d3/2 photoelectrons respectively, indicating the presence of silver in the +1 oxidation state.The other two peaks of Ag 3d5/2 and Ag 3d3/2, at 368.3 and 374.3 eV respectively, confirm the existence 
of silver in the Ag(0) state.These binding energies are in good agreement with the values reported for Ag(I) in Ag2O and Ag(0) [16,37,38].The peaks for O 1s, located in the ranges of 528.9-530.1 eV and 530.5-531.2eV, are ascribed to O 2− in Ag2O and TiO2 respectively (Figure S3).From the Ti 2p core-level spectrum, two peaks at about 464.3 and 458.7 eV can be assigned to the Ti 2p1/2 and Ti 2p3/2 spin-orbital components respectively, which correspond to the characteristic peaks of Ti 4+ . The SEM images of blank TiO2, Ag/Ag2O, TM mixtures, and TC composites are presented in Figure 2. Ag/Ag2O showed well-defined particles with particle sizes ranging from 100 nm to 500 nm (Figure 2a).The small particles that contrast as white spots correspond to the metallic silver Ag(0) distributed on the surface of silver oxide, which is in agreement with the XRD results.The EDX reveals that the sample contained Ag and O without any other impurities (Figure S1). Figure 2b-d shows SEM images of the physical mixtures of Ag/Ag2O with TiO2.It becomes obvious from these images that Ag/Ag2O changed its shape during preparation of the mixtures by sonification of aqueous suspensions of the oxides.The increasing loading of the Ag2O platelets with TiO2 is also clearly recognizable in these figures.In the Ag/Ag2O ⁄⁄ TiO2 mixture with the highest mass fraction of TiO2 (TM 14), the appearance was apparently determined by the titanium dioxide distributed over the underlying surface of the Ag2O platelets (Figure 2d).This was also reflected in the specific surface area (SSA) of the materials.The TiO2 (P25) used in this work is known to have an average diameter and specific surface area of 21 nm and about 50 m 2 g −1 , respectively [39].The specific surface area of the Ag/Ag2O synthesized in this work was determined to be 2.7 m 2 g −1 .As expected, the specific surface area of the Ag/Ag2O ⁄⁄ TiO2 mixtures was found to increase with increasing TiO2 content (Table 1), resulting in a SSA of 38.5 m 2 g 
−1 for TM 14.In order to investigate the oxidation states of the silver species present on the materials, X-ray photoelectron spectroscopy (XPS) was performed.The results of the XPS analysis for all samples are shown in Figure S3.The deconvolution of the high-resolution spectra for Ag 3d reveals that silver was present in more than one oxidation state in all samples.The binding energies of Ag 3d at 367.5 and 373.5 eV are assigned to the Ag 3d 5/2 and Ag 3d 3/2 photoelectrons respectively, indicating the presence of silver in the +1 oxidation state.The other two peaks of Ag 3d 5/2 and Ag 3d 3/2 , at 368.3 and 374.3 eV respectively, confirm the existence of silver in the Ag(0) state.These binding energies are in good agreement with the values reported for Ag(I) in Ag 2 O and Ag(0) [16,37,38].The peaks for O 1s, located in the ranges of 528.9-530.1 eV and 530.5-531.2eV, are ascribed to O 2− in Ag 2 O and TiO 2 respectively (Figure S3).From the Ti 2p core-level spectrum, two peaks at about 464.3 and 458.7 eV can be assigned to the Ti 2p 1/2 and Ti 2p 3/2 spin-orbital components respectively, which correspond to the characteristic peaks of Ti 4+ . The SEM images of blank TiO 2 , Ag/Ag 2 O, TM mixtures, and TC composites are presented in Figure 2. Ag/Ag 2 O showed well-defined particles with particle sizes ranging from 100 nm to 500 nm (Figure 2a).The small particles that contrast as white spots correspond to the metallic silver Ag(0) distributed on the surface of silver oxide, which is in agreement with the XRD results.The EDX reveals that the sample contained Ag and O without any other impurities (Figure S1). 
Figure 2b-d shows SEM images of the physical mixtures of Ag/Ag 2 O with TiO 2 . It becomes obvious from these images that Ag/Ag 2 O changed its shape during preparation of the mixtures by sonication of aqueous suspensions of the oxides. The increasing loading of the Ag 2 O platelets with TiO 2 is also clearly recognizable in these figures. In the Ag/Ag 2 O ⁄⁄ TiO 2 mixture with the highest mass fraction of TiO 2 (TM 14), the appearance was apparently determined by the titanium dioxide distributed over the underlying surface of the Ag 2 O platelets (Figure 2d). This was also reflected in the specific surface area (SSA) of the materials. The TiO 2 (P25) used in this work is known to have an average diameter and specific surface area of 21 nm and about 50 m 2 g −1 , respectively [39]. The specific surface area of the Ag/Ag 2 O synthesized in this work was determined to be 2.7 m 2 g −1 . As expected, the specific surface area of the Ag/Ag 2 O ⁄⁄ TiO 2 mixtures was found to increase with increasing TiO 2 content (Table 1), resulting in a SSA of 38.5 m 2 g −1 for TM 14.

SEM images of the TC composites are presented in Figure 2f-h. The image of the TiO 2 -poor composite TC 41 clearly shows the large Ag/Ag 2 O particles covered with TiO 2 (Figure 2f). The specific surface area of this composite was determined to be 8.4 m 2 g −1 , thus being equal within the limits of the experimental error to the surface area of the corresponding physical mixture TM 41 (SSA = 9.7 m 2 g −1 ). The images of the composites richer in TiO 2 (TC 11 and TC 14) seemed to be dominated by aggregates or agglomerates of small TiO 2 particles.
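The SSA trend described above can be cross-checked with a simple additive, mass-weighted estimate. This is only a rough sketch: the 1:4 Ag/Ag 2 O:TiO 2 mass ratio assumed for TM 14 is inferred from the sample label, and the function name is ours, not from this work:

```python
# Mass-weighted estimate of the specific surface area (SSA) of a physical
# mixture, assuming the two components contribute additively by mass fraction.
# Component SSA values from the text: TiO2 (P25) ~50 m^2/g [39], Ag/Ag2O 2.7 m^2/g.

SSA_TIO2 = 50.0   # m^2 g^-1, P25
SSA_AG2O = 2.7    # m^2 g^-1, Ag/Ag2O synthesized in this work


def mixture_ssa(w_tio2):
    """SSA of an Ag/Ag2O // TiO2 mixture with TiO2 mass fraction w_tio2."""
    return w_tio2 * SSA_TIO2 + (1.0 - w_tio2) * SSA_AG2O


# TM 14: assumed 1 part Ag/Ag2O to 4 parts TiO2 by mass -> w_tio2 = 0.8
estimate = mixture_ssa(0.8)
print(f"TM 14 estimated SSA: {estimate:.1f} m^2/g (measured: 38.5 m^2/g)")
```

Under this assumption the estimate (about 40.5 m 2 g −1 ) is close to the measured 38.5 m 2 g −1 for TM 14, consistent with the trend in Table 1.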
The optical properties of TiO 2 and the as-prepared Ag-containing mixtures and composites were investigated by UV/vis diffuse reflectance spectroscopy (Figure 3). Ag/Ag 2 O, as well as the TM and TC materials, had a dark brown to black color. They displayed strong absorption over the whole UV and visible range (200 nm-800 nm). TiO 2 showed only the absorption band below 405 nm, which matches the band gap energy of 3.06 eV calculated from the formula λ = 1239.8/E bg due to the charge transfer from O (valence band) to Ti (conduction band). Ag/Ag 2 O exhibited a band gap energy < 1.5 eV, which is in agreement with the reported value of 1.3 ± 0.3 eV [40]. The scattering of the reported values might be due to different particle diameters, as shown for TiO 2 [41]. Electrochemical measurements in suspensions yielded flat band potentials of −0.4 V and +0.3 V vs.
NHE for TiO 2 and Ag 2 O, respectively. The value measured here for the flat band potential of Ag 2 O is also in reasonably good agreement with published values [42,43].

Photocatalytic Performance of the Materials

The photocatalytic activity of all materials described above was investigated, employing methylene blue (MB) as the probe compound. The materials in aqueous suspensions were excited by the full output of a xenon lamp (UV/vis illumination), and by Xe light after passing a UV cut-off filter (≥410 nm, vis illumination). Figure 4 illustrates the bleaching of an aqueous solution of MB and the MB-containing suspensions. Photolysis of MB (initiated by the direct excitation of the probe compound) was observed under both UV/vis and visible light illumination. The bleaching of MB was significantly accelerated by the presence of Ag/Ag 2 O. Under UV/vis illumination, Ag/Ag 2 O was found to be nearly as active as TiO 2 (P25), which is well known to be a very efficient photocatalyst suitable to degrade MB [44] (Figure 4a). In the presence of Ag/Ag 2 O, MB was bleached very rapidly even when exposed to visible light. As expected, TiO 2 , having a bandgap energy of 3.1 eV, was found to be inactive under vis illumination (Figure 4c).
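That TiO 2 stays inactive behind the ≥410 nm cut-off filter follows directly from the wavelength/band-gap relation λ = 1239.8/E bg quoted in the previous section; a quick numerical check (the helper name is ours):

```python
# Convert a band-gap energy to the corresponding absorption-edge wavelength
# via lambda = 1239.8 / E_bg (lambda in nm, E_bg in eV), the relation used
# in the text for the diffuse reflectance data.

def absorption_edge_nm(e_bg_ev):
    return 1239.8 / e_bg_ev


edge = absorption_edge_nm(3.06)  # TiO2 band gap from the reflectance spectra
print(f"TiO2 absorption edge: {edge:.0f} nm")
assert edge < 410  # the >=410 nm filter passes no photons TiO2 can absorb
```

With an absorption edge near 405 nm, no photons transmitted by the 410 nm cut-off filter can excite TiO 2 , so no photocatalysis by TiO 2 alone is expected under vis illumination.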
In the presence of mixtures of Ag/Ag 2 O with TiO 2 , MB was bleached under UV/vis illumination only in the presence of the TiO 2 -rich TM 14, with a significantly increased rate compared to the rate of MB photolysis. In suspensions containing TM 41 and TM 11, the rate of bleaching was almost the same as the rate of photolysis (Figure 4a). Exposure to visible light in the presence of the Ag/Ag 2 O-rich TM 41 resulted in bleaching of MB with a slightly increased rate. In contrast, the TiO 2 -rich mixtures TM 11 and TM 14 were virtually inactive under this illumination condition (Figure 4c).

In the presence of the composites TC, MB was bleached with significantly faster reaction rates than the rate of photolysis when exposed to UV/vis and visible light. The rates were, however, lower than the rate of bleaching in the presence of the bare TiO 2 (Figure 4b,d). Interestingly, while increasing the amount of TiO 2 in the TC composites, the visible light activity of the materials seemed to decrease, thus confirming the essential influence of Ag/Ag 2 O on MB bleaching under illumination with wavelengths ≥ 410 nm.

As a second test reaction for the activity of the materials, the UV/vis light-induced evolution of molecular hydrogen by reforming of aqueous methanol was used. Figure 5 shows the amount of H 2 vs. illumination time in the presence of TiO 2 , Ag/Ag 2 O, and the prepared mixtures and composites. No H 2 evolution was observed in the presence of Ag/Ag 2 O and the Ag/Ag 2 O-rich TM 41. In the presence of all other materials, the evolution of H 2 was detected. However, large amounts of H 2 were only evolved with the materials TM 14 (104 µmol/6 h) and TC 11 (174 µmol/6 h).

Many authors have reported that the kinetic behavior of photocatalytic reactions can be described by a Langmuir-Hinshelwood rate law, with the two limiting cases of zero-order and first-order kinetics [45,46]. To calculate the initial rates r 0 of the bleaching of methylene blue, first-order kinetics have been assumed (r 0 = kC 0 ). To determine the rate constant k, the data given in Figure 4 have therefore been fitted with C = C 0 exp(−kt). The initial rates are given in Table 1.

Table 1. Brunauer-Emmet-Teller (BET) surface area, initial rates of methylene blue (MB) bleaching and H 2 generation in the presence of Ag/Ag 2 O, TiO 2 , the TM mixtures and the TC composites.
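The first-order analysis used for Table 1 (fit C = C 0 exp(−kt), then r 0 = kC 0 ) can be sketched as follows. The concentration values below are synthetic illustration data, not the measured values of this work, and a log-linear least-squares fit stands in for whatever fitting routine was actually used:

```python
import numpy as np

# First-order limit of the Langmuir-Hinshelwood rate law: C(t) = C0*exp(-k*t),
# so ln C is linear in t and the initial rate is r0 = k*C0.
# Illustrative (synthetic) MB bleaching data -- not measurements from this work.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])  # illumination time / min
C = np.array([10.0, 7.4, 5.5, 4.1, 3.0, 1.7])      # MB concentration / umol L^-1

slope, intercept = np.polyfit(t, np.log(C), 1)  # linear fit of ln C vs. t
k = -slope                 # rate constant / min^-1
C0 = np.exp(intercept)     # fitted initial concentration
r0 = k * C0                # initial bleaching rate
print(f"k = {k:.4f} min^-1, r0 = {r0:.3f} umol L^-1 min^-1")

# The H2 amounts quoted in the text convert to average rates the same way,
# e.g. for TM 14: 104 umol evolved over 6 h of illumination.
rate_tm14 = 104 / 6
print(f"TM 14 average H2 evolution rate: {rate_tm14:.1f} umol/h")
```

For well-behaved first-order data the fitted C 0 should reproduce the initial concentration; large deviations would signal that the zero-order limit of the Langmuir-Hinshelwood law is the better description.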
The Photocatalytic Activity of Ag/Ag 2 O

It is well known that methylene blue is photocatalytically oxidized in the presence of TiO 2 under illumination with photons having an energy equal to or larger than the bandgap energy of the semiconductor. The photocatalytic degradation of methylene blue in the presence of molecular oxygen is reported to follow Equation (1) [44].

The energetic positions of the valence and conduction bands of TiO 2 and Ag 2 O, and the reduction potentials of some species (possibly) present in the surrounding electrolyte, are shown in Figure 6. As becomes obvious from this Figure, the conduction band electrons generated by UV illumination of TiO 2 are able to reduce O 2 adsorbed at the semiconductor surface. From a thermodynamic point of view, valence band holes at the TiO 2 surface have an energy suitable to oxidize H 2 O/OH − , yielding OH radicals. These OH radicals are generally assumed to be the oxidizing species in photocatalytic MB degradation.
With the assumption that the flat band potential of Ag 2 O, which has been determined to be +0.3 V vs. NHE at pH 7, was equal to the conduction band edge of this semiconductor, and a bandgap energy E g = 1.5 eV, the valence band position was calculated to be +2.0 V vs. NHE. Xu and Schoonen reported a value of +0.2 V vs. NHE for the energy of the Ag 2 O conduction band [49]. As becomes obvious from Figure 6, excited Ag 2 O was neither able to reduce O 2 nor to oxidize H 2 O/OH − . Consequently, the mechanism of MB bleaching observed in the presence of Ag/Ag 2 O (Figure 4 and Table 1) was different from the MB degradation mechanism in the presence of TiO 2 . A possible explanation for the decolorization of MB in the presence of Ag/Ag 2 O is that MB is excited by light of suitable wavelength (Equation (2), MB* = MB S and/or MB T ), which is subsequently followed by electron injection into the conduction band of Ag 2 O (Equation (3)):

MB + hν → MB* (2)

MB* + Ag 2 O → MB •+ + Ag 2 O(e − ) (3)

As an alternative to these reactions, the direct oxidation of MB by valence band holes according to

Ag 2 O + hν → Ag 2 O{h + + e − } (4)

MB + h + → MB •+ (5)

has to be considered. Both mechanisms require an electron transfer between Ag 2 O and MB. Despite the low surface area available for this reaction, the electron transfer between the solid and the probe compound appears to be very efficient.
It is well known that Ag 2 O is sensitive to light and decomposes under illumination. However, it has been suggested that Ag(0) being present in Ag/Ag 2 O acts as an electron sink and accepts the conduction band electron of Ag 2 O, thus inhibiting the reduction of Ag + and stabilizing the Ag 2 O [9,10,12,20]. However, the possibility cannot be excluded that Ag + is reduced during the processes given in the Equations (2)-(5), yielding Ag(0), since no other suitable electron acceptor is available. Regardless of whether the electrons reduce Ag + or become stored in Ag(0), Ag/Ag 2 O is not acting as a photocatalyst, because the material changes irreversibly during the reaction.

The potential of the Ag 2 O conduction band electron is more positive than the reduction potential of the H + /H 2 couple (Figure 6). Consequently, light-induced proton reduction yielding H 2 is thermodynamically impossible in suspensions containing only Ag/Ag 2 O. This is in accordance with the experimental results reported in Section 2.2.
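The thermodynamic reasoning drawn from Figure 6 reduces to comparing potentials on the NHE scale. A minimal sketch, assuming the band-edge values quoted in the text and taking E(H + /H 2 ) ≈ 0 V vs. NHE for the proton couple (the exact value used in Figure 6 is not stated; the helper name is ours):

```python
# A light-driven electron transfer is thermodynamically allowed only if the
# donor level lies at a more negative potential (vs. NHE) than the acceptor.
# Conduction band (flat band) edges as quoted in the text, in V vs. NHE:
CB = {"TiO2": -0.4, "Ag2O": +0.3}

E_H2 = 0.0  # assumed H+/H2 reduction potential vs. NHE (standard conditions)


def can_reduce_protons(material):
    """True if the material's conduction-band electron can reduce H+ to H2."""
    return CB[material] < E_H2


for m in CB:
    verdict = "can" if can_reduce_protons(m) else "cannot"
    print(f"{m} conduction-band electrons {verdict} reduce H+ to H2")
```

Under these assumed values the comparison reproduces the statements above: TiO 2 conduction-band electrons lie above the proton couple and can yield H 2 , while those of Ag 2 O lie below it and cannot.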
Bleaching of Methylene Blue

When irradiated with light at wavelengths ≥ 410 nm, methylene blue was found to be bleached in the presence of Ag/Ag 2 O, and mixtures of this material with TiO 2 . The rate of MB bleaching decreased with increasing amounts of TiO 2 . Of course, TiO 2 itself was found to be photocatalytically inactive, since it was not excited under this illumination condition (Figure 4c and Table 1). The electron transfer reaction resulting in the observed bleaching of the MB solution occurred at the surface of the Ag 2 O, as discussed in Section 3.1. According to the SEM images (Figure 2a-d), the surface of the Ag 2 O was increasingly covered by TiO 2 as the content of this oxide in the mixture increased. The interfacial electron transfer was inhibited by this TiO 2 layer (Figure 7). The reaction rates suggest that this inhibition increased with increasing amounts of TiO 2 on the Ag/Ag 2 O surface. Consequently, the TiO 2 -rich mixtures TM 11 and TM 14 exhibited rates of bleaching almost the same as the rate of photolysis in homogeneous solution (Table 1). Interfacial electron transfer from excited MB to TiO 2 (which is thermodynamically possible; cf. Figure 6) obviously did not contribute significantly, since no MB bleaching was observed under visible light illumination of suspensions containing only this photocatalyst.
The situation was different when the TM mixtures were illuminated with UV/vis light. The rate of MB bleaching in the presence of the Ag/Ag 2 O ⁄⁄ TiO 2 mixtures was found to increase with increasing TiO 2 content. However, the rates were always lower than the rates determined for suspensions containing only Ag/Ag 2 O or bare TiO 2 (Figure 4a and Table 1). These rates cannot be explained solely by the optical properties of the suspensions. Of course, as the Ag/Ag 2 O content increases, more UV photons are absorbed by Ag 2 O. They are thus no longer available for the excitation of the TiO 2 , which results in decreasing amounts of charge carriers in the TiO 2 and, consequently, decreasing rates of MB degradation. However, the MB bleaching rate calculated for the TiO 2 -rich TM 14 mixture suggests that not all photogenerated charge carriers were used in the desired MB bleaching reaction, but some were lost by reactions between excited TiO 2 and Ag/Ag 2 O, resulting in the reduction of Ag + .
XRD measurements revealed the reduction of Ag + during the light-induced bleaching of MB under UV/vis illumination. The ratios of the peak intensities corresponding to Ag 2 O and TiO 2 of the mixture TM 41 and the composite TC 41 were significantly lower after two experimental runs than before illumination (Figure 8). On the other hand, the ratios of the peak intensities attributed to metallic Ag and TiO 2 obviously increased. In the case of the Ag/Ag 2 O ⁄⁄ TiO 2 mixture TM 11, apart from the TiO 2 peaks, the only visible XRD peaks could be assigned to AgCl and Ag(0) after illumination of a suspension containing MB (Figure S2). The new peaks in the diffractogram, which are indexed to AgCl, were possibly formed by a reaction between Ag + and Cl − known to be present at the surface of TiO 2 P25 [39]. This reaction certainly explains the decrease of the Ag 2 O peaks in the diffractogram. However, this explanation does not exclude that Ag 2 O is also transformed by a light-induced reduction reaction, yielding Ag(0).

The conclusion from the XRD data, that Ag(I) was reduced yielding Ag(0) during the light-induced bleaching of MB in the presence of the mixture TM 41, is supported by the results of the analysis of XPS data taken before and after two experimental runs (Figure 9a,b and Figure S3). It becomes obvious from Figure 9a that the Ag 3d 5/2 and Ag 3d 3/2 peaks of Ag 2 O in the mixture TM 41 decreased in intensity and broadened, while the Ag(0) 3d 5/2 and Ag(0) 3d 3/2 peaks increased in intensity after two photocatalytic reactions. Furthermore, the deconvolution of the O 1s peaks denotes that the peak corresponding to the Ag-O bond had a lower intensity compared to the same peak observed before the reaction, indicating significant changes occurred during the light-induced MB bleaching reaction (Figure 9b). These changes were mainly due to the light-induced reduction of Ag + yielding Ag(0). Again, the condition of stability of a catalyst was not satisfied.
Light-Induced Hydrogen Evolution

From a thermodynamic point of view, excited TiO 2 is able to transfer a conduction band electron to a proton present at the photocatalyst surface (Figure 6). This electron transfer is, however, known to be a kinetically inhibited process. Therefore, it is necessary to deposit an electrocatalyst at the TiO 2 surface, which accelerates the interfacial electron transfer. Ag(0) is known to be a suitable, though relatively inactive, electrocatalyst [50,51]. In this work as well, pure TiO 2 showed only a very low photocatalytic activity with regard to H 2 evolution from aqueous methanol. When using the TM materials, a significant increase in the amount of H 2 evolved (consequently corresponding with an increase in the reaction rate) during six hours of illumination of the mixture was observed with increasing TiO 2 content (Figure 5a and Table 1). On the one hand, this can be explained by the fact that a significant portion of the UV photons was absorbed by Ag 2 O
being inactive under this illumination condition, and thus was not available for the desired H 2 evolution reaction. However, this portion decreased with increasing TiO 2 amount of the mixture. On the other hand, some of the TiO 2 conduction band electrons were transferred to the Ag 2 O, where they were consumed to reduce Ag + to Ag(0). These electrons were therefore also not available for the desired reaction. Obviously, these undesired electron losses are lower the higher the mass fraction of TiO 2 in the physical mixture, resulting in increasing H 2 evolution rates with increasing mass fraction of TiO 2 .

Bleaching of Methylene Blue

When irradiated with light at wavelengths ≥ 410 nm, methylene blue was found to be bleached in the presence of the three TC composites (Figure 4d and Table 1). All TC composites exhibited a higher activity than the corresponding TM mixtures. As in the case of the TM materials, the rate of MB bleaching decreased with increasing amounts of TiO 2 . The increased reaction rates for MB bleaching in the presence of Ag 2 O containing solids, compared to the rate of photolysis under visible light illumination, were explained in Section 3.2.1 with an interfacial electron transfer from (excited) MB to Ag 2 O (cf. Figure 7). However, the experimental result is surprising when it is considered that the surfaces of the composites were smaller than the surfaces of the corresponding TM mixtures. A possible explanation may be due to the preparation method. For the TC materials, the Ag/Ag 2 O was prepared in a TiO 2 suspension. Therefore, the Ag/Ag 2 O was attached on the surface of the TiO 2 particles. In contrast, in the TM mixtures large Ag/Ag 2 O particles were covered by TiO 2 , hindering the electron transfer from excited MB to the Ag 2 O, as discussed in Section 3.2.2.
The rate of MB bleaching in the presence of the TC composites was significantly higher under UV/vis than under visible light illumination. As observed for the TM materials, the bleaching rates were lower in suspensions containing the composites than in suspensions containing only Ag/Ag 2 O or TiO 2 (Figure 4b and Table 1).

XRD and XPS data indicate that Ag(I) was reduced, yielding Ag(0), during the light-induced bleaching reaction of MB in the presence of the composite TC 41. A stabilization of Ag 2 O by metallic silver, as claimed by several authors [9-12,20,29], was not observed. No XRD peaks that can be attributed to Ag 2 O were observed after two experimental runs of the composite. However, the ratios of the peak intensities due to metallic Ag and TiO 2 obviously increased (Figure 8b). No Ag 3d 5/2 and Ag 3d 3/2 peaks, which can be attributed to Ag(I), were present either in the deconvoluted XPS spectra obtained after two experimental runs (Figure 9c and Figure S3). The XPS peak, which was attributed to the presence of Ag-O, also disappeared during the light-induced reaction (Figure 9d and Figure S3).

These observations support the statement made above that Ag/Ag 2 O cannot be called a photocatalyst. The XRD pattern shown in Figure 8b as well as the XPS data presented in Figure 9c,d clearly evince that the Ag:Ag 2 O ratio changed during the light-induced bleaching of MB. Thus, the condition for a catalyst to exit a chemical reaction unchanged is not satisfied.
Light-Induced Hydrogen Evolution

The three TC composites were found to be able to promote light-induced H2 evolution from aqueous methanol. The calculated reaction rates were significantly larger than those of the corresponding TM mixtures. The highest H2 evolution rate was observed in the presence of TC 11 (Figure 5b and Table 1), which was also characterized by a high MB bleaching rate under UV/vis illumination. A possible mechanistic explanation for the high activity of the TC 11 composite is based on the assumption of synergistic effects due to the presence of both Ag(0) and Ag2O at the TiO2 surface (Figure 10). TiO2 is excited by UV photons. The photogenerated conduction band electrons migrate to the Ag(0) attached to the TiO2 surface. In a subsequent step, interfacial electron transfer from Ag(0) to protons present in the surrounding electrolyte occurs, thus yielding molecular hydrogen. The valence band hole inside the TiO2 particle is filled by an electron from an attached Ag2O particle. Methanol is oxidized by this hole in the valence band of the Ag2O. According to this mechanism, Ag(0) acts as an electron sink, thus decreasing the electron-hole recombination, and as an electrocatalyst for the hydrogen evolution reaction, while Ag2O is an electrocatalyst for the oxidation reaction of methanol yielding methanal. The supposition made here, that the methanol oxidation occurs at the Ag2O surface via electron transfer to the valence band of the excited TiO2, has already been proposed earlier [16,19,23,26]. It should be emphasized again that the energy of an electron in the conduction band of the Ag2O employed in this study is insufficient to reduce a proton (Figure 6). Consequently, excitation of TiO2 is a prerequisite for photocatalytic reforming of methanol. TiO2 is known to be a relatively inactive material for the photocatalytic reduction of protons. High evolution rates of molecular hydrogen are observed only in the presence of a co-catalyst. Ag2O was found here to be an unsuitable co-catalyst for the hydrogen evolution reaction, since electron transfer from the excited TiO2 can only occur into the conduction band of this material. The photocatalytic activities of the composites and mixtures discussed here are thus determined to a considerable extent by the competition between interfacial electron transfer to protons in the surrounding electrolyte and to silver ions in Ag2O. The mechanism of the photocatalytic hydrogen evolution by reforming of organic compounds in the presence of the mixtures and composites employed in this study does not contradict the mechanism discussed for Ag/Ag2O ⁄⁄ TiO2 samples, which contain Ag2O with a significantly more negative conduction band energy than TiO2 [17,24,26,33]. Changes in the respective mass fractions of TiO2, Ag, and Ag2O at constant total mass of the solid in suspension may have several impacts on the rate of hydrogen evolution. Increasing mass fractions of UV-absorbing and -scattering Ag and Ag2O reduce the number of photons to be absorbed by the TiO2, thus reducing the H2 evolution rate. A reduction of the mass fraction of metallic Ag may possibly slow down the interfacial electron transfer to the proton, while a reduction of the mass fraction of Ag2O might negatively affect the oxidation reaction. It should also be noted that Ag2O can act as a sink for a TiO2 conduction band electron (cf. Figure 6). These partially opposing effects may be responsible for the observed differences in the H2 evolution rates in the presence of the various TC composites (and TM mixtures).

Preparation of Ag/Ag2O

An amount of AgNO3 was dissolved in 50 mL of distilled water. The obtained solution was stirred for 30 min. Subsequently, 50 mL of NaOH (0.2 M) was added dropwise. The resulting suspension was stirred for another 30 min to promote hydrolysis, centrifuged, washed with distilled water three times, and dried at 70 °C for 24 h.
Preparation of TM Mixtures

The samples were obtained by mixing the self-prepared Ag2O with TiO2 at mass ratios of 4:1 (20 mass% TiO2), 1:1 (50 mass% TiO2), and 1:4 (80 mass% TiO2) with water. The suspensions were sonicated for 1.5 h and dried at 70 °C for 24 h. The Ag/Ag2O ⁄⁄ TiO2 mixtures with 20%, 50%, and 80% of TiO2 were denoted as TM 41, TM 11, and TM 14, respectively. For the purpose of comparison, a TiO2 sample was prepared by the same procedure without the addition of Ag/Ag2O.
Preparation of TC Composites

The TC composites were prepared by a published precipitation method [25,29]. A measured amount of TiO2 was suspended in 50 mL of distilled water, and the calculated amount of AgNO3 corresponding to the desired mass ratio of Ag2O was added to the suspension. The obtained suspension was stirred for 30 min. A volume of 50 mL of 0.2 M NaOH was added dropwise. The resulting suspension was stirred for another 30 min to promote hydrolysis, centrifuged, washed with distilled water three times, and dried at 70 °C for 24 h. The Ag/Ag2O ⁄⁄ TiO2 composites with 20 mass%, 50 mass%, and 80 mass% of TiO2 were denoted as TC 41, TC 11, and TC 14, respectively.

Characterization of the Materials

The crystalline structure of the catalysts was measured by powder X-ray diffraction (XRD; D8 Advance system, Bruker, Billerica, MA, USA), using a Cu Kα radiation source with a wavelength of λ = 1.54178 Å over a 2θ range from 20° to 100°, with a 0.011° step width. The morphology of the prepared materials was determined using a scanning electron microscope (SEM), employing a JEOL JSM-6700F field emission instrument (Tokyo, Japan) with a resolution of 100 nm and 1 µm, using an EDXS detector. Measurements of X-ray photoelectron spectra were carried out using a Leybold Heraeus instrument (Cologne, Germany) with a non-monochromatic X-ray source (Mg and Al anodes) and a hemispherical analyzer of 100 mm radius. Data analysis was performed using the XPSPEAK 4.1 software (Hong Kong, China). The energy of the C 1s line was set to 284.8 eV and used as reference for the data correction. Diffuse reflectance UV-Vis spectroscopy was performed using a spectrophotometer (Varian Cary-100 Bio, Agilent Technologies, Santa Clara, CA, USA) at room temperature. Barium sulfate was used as a standard for 100% reflection. The specific surface area (SSA) of the samples was calculated from N2 adsorption-desorption measurements, employing the Brunauer-Emmett-Teller (BET) method using a FlowSorb II 2300 apparatus from
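The d-spacings probed by such a 2θ scan follow directly from Bragg's law. A minimal sketch; the example reflection (anatase TiO2 (101), a textbook value near 2θ = 25.3°) is illustrative and not taken from the measured patterns of this study:

```python
import math

CU_KALPHA = 1.54178  # Cu K-alpha wavelength in angstroms

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# The anatase TiO2 (101) reflection lies near 2-theta = 25.3 deg:
print(round(d_spacing(25.3), 2))  # -> 3.52 (angstroms)
```

Larger diffraction angles correspond to smaller lattice spacings, which is why the scan up to 2θ = 100° resolves the closely spaced high-index reflections of Ag and Ag2O.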
Micromeritics Instrument Company (Norcross, GA, USA). Prior to these measurements, the samples were evacuated at 180 °C for 1 h. Measurements of photocurrents and flat band potentials were performed with an electrochemical analyzer in a three-electrode configuration, employing an Iviumstat potentiostat (Ivium Technologies bv, Eindhoven, The Netherlands). Films of the samples were used as the working electrode, after being coated on cleaned fluorine-doped tin oxide (FTO) coated glass using the doctor blade method and calcined at 400 °C for 2 h. These working electrodes were prepared by grinding 100 mg of the photocatalyst and 50 mg of polyethylene glycol with one drop of Triton, followed by addition of 200 µL of deionized water and a sufficient amount of ethanol. An Ag/AgCl electrode (3 M NaCl, +209 mV vs. NHE) and a platinum coil were used as the reference electrode and the counter electrode, respectively. Aqueous potassium nitrate solution (0.1 M) was used as the electrolyte. The impedance spectra were recorded over a potential range from −1 V to +1 V vs. Ag/AgCl at frequencies of 10, 100, and 1000 Hz with 20 mV amplitude. The capacitance C was plotted against the potential V across the space charge layer (i.e., a plot of C−2 vs. V), and the flat band potential was calculated from the intercept of the plot.
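The flat-band determination described above can be sketched numerically: fit C⁻² against V and extrapolate to C⁻² = 0. The numbers below are synthetic, not measured values from this study, and the small kT/e correction of the full Mott-Schottky relation is neglected:

```python
import numpy as np

# Synthetic Mott-Schottky data: C^-2 = slope * (V - E_fb)
E_fb_true = -0.45          # assumed flat-band potential vs. Ag/AgCl (V)
slope_true = 2.0e14        # assumed slope (F^-2 V^-1)
V = np.linspace(-0.2, 1.0, 25)          # applied potentials (V)
C_inv2 = slope_true * (V - E_fb_true)   # ideal C^-2 values

# Linear fit of C^-2 vs. V; the V-axis intercept (where C^-2 = 0)
# is taken as the flat-band potential.
slope, intercept = np.polyfit(V, C_inv2, 1)
E_fb = -intercept / slope
print(round(E_fb, 3))  # -> -0.45
```

With real impedance data, C is first obtained from the imaginary part of the impedance at each frequency before building the C⁻² vs. V plot.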
Methylene Blue Degradation

The apparatus used for carrying out the photocatalytic degradation reactions consisted of a double-jacket cylindrical reactor with a volume of 230 mL, through which cold water was circulated to maintain ambient reaction temperature. A volume of 200 mL of an aqueous solution of methylene blue (MB, 10 mg L−1) and 200 mg of photocatalyst were used for each reaction experiment. A 300 W Xenon arc lamp (Müller Electronik-Optik, Moosinning, Germany) was used both as the UV/vis light source and, by placing a UV cut-off filter (≥410 nm) in the light path, as the vis light source. The lamp was started 30 min before the degradation experiments to ensure maximum emission. Aliquots (1.5 mL) of the suspensions were collected at given time intervals (0, 2, 4, 6, 8, 10, 15, and 30 min), centrifuged to remove the solid, and analyzed immediately with the UV-Vis spectrophotometer.

Photocatalytic Hydrogen Formation

The photocatalytic H2 generation experiments were conducted in quartz vials (capacity of 10 mL) under illumination with a 1000 W Xenon lamp (Hönle UV Technology, Gräfelfing, Germany; Sol 1200 solar). An amount of 6 mg of the photocatalyst was suspended in 6 mL of aqueous methanol (10 vol% methanol). The suspension was purged with argon for 20 min to remove the air, and the quartz vial was sealed with a specially made, degassed rubber septum for sampling. The amount of H2 gas evolved during the photocatalytic reaction was quantified every two hours using a gas chromatograph (Shimadzu GC-8A, Shimadzu Deutschland GmbH, Duisburg, Germany) equipped with a thermal conductivity detector (TCD) and a 60/80 molecular sieve 5 Å column.
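Initial bleaching rates such as those reported in Table 1 are typically obtained from a linear fit over the early concentration-time points. A minimal sketch; the sampling times match the protocol above, but the concentrations are made up for illustration:

```python
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10], dtype=float)   # sampling times (min)
c = np.array([10.0, 9.1, 8.3, 7.4, 6.6, 5.8])    # MB conc. (mg/L), made up

# Initial rate = negative slope of a linear fit over the early samples
rate = -np.polyfit(t, c, 1)[0]
print(round(rate, 2))  # -> 0.42 (mg L^-1 min^-1)
```

Restricting the fit to the first few minutes avoids the curvature that appears once a substantial fraction of the dye has been consumed.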
Conclusions

Ag/Ag2O was found to enhance the rate of light-induced bleaching of aqueous methylene blue under both UV/vis and vis illumination, in comparison to the bleaching in homogeneous solution. Even in suspensions containing mixtures and composites of Ag/Ag2O with TiO2 (P25), with varying mass ratios of Ag/Ag2O (20%, 50%, and 80%), the reaction rate was slightly increased under these illumination conditions. However, the bleaching rate of methylene blue was lower in the presence of the composites and mixtures than the rate measured for bare Ag/Ag2O. It is therefore suggested that the bleaching of methylene blue is initiated by an interfacial electron transfer from the excited organic probe compound to Ag2O. TiO2 layers covering the Ag2O seem to inhibit this electron transfer. Since Ag2O can transfer an electron neither to dissolved molecular oxygen nor to a proton for thermodynamic reasons, it is assumed that Ag+ is reduced to Ag(0) in the processes investigated here. Results of XRD and XPS measurements support this assumption and indicate that Ag/Ag2O is not stable under the experimental conditions employed in this study. A stabilization of Ag2O by metallic silver, as occasionally claimed, was not observed. Therefore, to address Ag/Ag2O as a (photo)catalytically active material does not seem appropriate.

Catalysts 2018, 8, x FOR PEER REVIEW

The Ag2O-rich TM 41 resulted in bleaching of MB with a slightly increased rate. In contrast, the TiO2-rich mixtures TM 11 and TM 14 were virtually inactive under this illumination condition (Figure 4c).

Figure 4. Bleaching of MB in the presence of Ag/Ag2O, TiO2, the TM mixtures and the TC composites under UV/vis (a,b) and under vis light only (c,d).

Figure 5 shows the amount of H2 vs.
illumination time in the presence of TiO2, Ag/Ag2O, and the prepared mixtures and composites. No H2 evolution was observed in the presence of Ag/Ag2O and the Ag/Ag2O-rich TM 41. In the presence of all other materials, the evolution of H2 was detected. However, large amounts of H2 were only evolved with the materials TM 14 (104 µmol/6 h) and TC 11 (174 µmol/6 h).

Figure 5. The amount of H2 evolved from aqueous methanol under UV/vis illumination of Ag/Ag2O, TiO2, (a) TM mixtures and (b) TC composites vs. illumination time.

Figure 6. The electrochemical potentials (vs. NHE) of the valence and conduction bands of TiO2 and Ag2O, and the reduction potentials of some species (possibly) present in the surrounding electrolyte. MB, MB•−, MB•+, MB T, and MB S denote the MB ground state, the semi-reduced MB, the oxidized MB, the excited triplet state, and the excited singlet state of MB, respectively. The one-electron reduction potentials have been calculated with data given in References [44,47,48].

Figure 7. Possible mechanism of MB bleaching by Ag2O and Ag2O-containing mixtures and composites under visible light illumination.
Figure 8. XRD patterns of (a) TM 41 and (b) TC 41 after two cycles of MB bleaching employing UV/vis light.

Figure 9. High-resolution XPS spectra of the Ag 3d and O 1s signals of TM 41 (a,b) and TC 41 (c,d) before and after two experimental runs.

Figure 10. Mechanism of hydrogen evolution from aqueous methanol under UV/vis illumination.

Table 1. Brunauer-Emmett-Teller (BET) surface area, initial rates of methylene blue (MB) bleaching and H2 generation in the presence of Ag/Ag2O, TiO2, the TM mixtures and the TC composites.
Extraction of Neodymium (III) from Neodymium Concentrate Using Synergistic Solvents D2EHPA, TOPO and TBP

Solvent extraction was performed on Nd(III) concentrate using the synergistic solvents D2EHPA, TOPO and TBP. The aim of this research is to determine the effect of synergistic solvents on the extraction result. The extraction process of the Nd(III) concentrate was carried out at a feed pH of 1.0. The first extraction was carried out using a mixture of the D2EHPA and TOPO extractants, while the second extraction used a mixture of D2EHPA and TBP, at various concentration ratios (10:0, 8:2, 6:4, 4:6, 2:8 and 0:10) and various feed-to-solvent volume ratios (0.5:1.0; 0.75:1.0; 1.0:1.0; 1.0:0.5; 1.0:0.75) at a constant stirring speed of 250 rpm and an extraction time of 30 minutes. The results showed that the concentration and volume of the synergistic solvents greatly influenced the distribution coefficient (Kd), extraction efficiency (E) and separation factor (α). Of all the parameters studied, the highest separation factors were obtained using the synergistic D2EHPA-TOPO ratio of 4:6 and the feed-to-solvent volume ratio of 1:0.5, i.e., α(Nd-Pr) = 2.55 and α(Nd-Sm) = 1.47, but the distribution coefficients and efficiencies of the three elements are low under these conditions, so they were not selected. The recommended condition is to use the 4:6 D2EHPA-TOPO synergistic solvent and a feed-to-solvent ratio of 0.5:1.0. Under this condition, Kd(Nd) = 0.46, Kd(Pr) = 0.27, Kd(Sm) = 0.72, E(Nd) = 31.5%, E(Pr) = 21.3%, E(Sm) = 41.8%, with the separation factors α(Nd-Pr) = 1.7 and α(Nd-Sm) = 0.84.

Introduction

Rare-earth elements (REEs) are a group of 17 elements, consisting of the 15 lanthanides plus scandium and yttrium. REEs are used in a wide range of products, such as fluorescent lamps, magnets, superconductors, lasers, ceramics, semiconductors, catalysts, and thermal neutron absorbents. REEs occur together in nature in some minerals, e.g. bastnasite, monazite and xenotime [1].
The rare earth metals (REMs) are progressively establishing themselves as decisive industrial materials, with inimitable applications in numerous fields, such as permanent magnets, electronics, superconductors, hydrogen storage, and medical and nuclear technologies [2][3]. Different rare earths are needed to supply the required functionality in these applications. In some cases, a single rare earth element may be required, such as La for nickel-metal hydride batteries, but other applications require a mixture of rare earths, for example Nd and Pr for rare earth magnets and Eu (or Tb) and Y for rare earth phosphors [4][5]. A mixture of neodymium, iron, and boron metal is used in the manufacture of permanent magnets. This magnet is part of vehicle components, and is also used in loudspeakers and for data storage in computers. Neodymium is an important technological metal due to its widespread use in neodymium-iron-boron permanent magnets (NdFeB magnets or neomagnets). The ever-increasing use of NdFeB magnets and the possible supply risk of neodymium make recycling of neodymium from end-of-life NdFeB magnets an important economic issue [6][7]. Since these metals are found in resources as a mixture, they have to be separated from each other. This separation is difficult, especially between neighbouring elements in the periodic table, because they have similar chemical features due to the same electronic configuration of their outermost shells. Solvent extraction is presently one of the commercialized techniques for this separation [8]. The development of sustainable recycling schemes for neodymium is a technological challenge. An important technique for the recovery of neodymium and other rare earths is solvent extraction (SX), because this technique allows the separation of rare earths from other metals as well as the separation of mixtures of rare earths into the individual elements [4][8].
The separation of Nd from the Nd(OH)3 concentrate of the processed monazite sand product needs to be conducted considering the usefulness and the high price of Nd. The separation of Nd from the Nd(OH)3 concentrate was conducted using a solvent extraction process. Separation by extraction was chosen because of the greater advantages of this process, among which are that it is time-saving/efficient and that the equipment used is simpler. Based on this description, this investigation is expected to establish the optimum conditions of the extraction process to separate neodymium (Nd) from the Nd(OH)3 concentrate [3]. Solvent extraction involves the distribution of a solute between two immiscible liquid phases. Extraction techniques are useful for rapid and "clean" separations of both organic and inorganic substances. This approach can be used for macro and micro analysis. Through the extraction process, the metal ion in the aqueous phase is pulled out by an organic solvent (organic phase). In general, extraction is the process of withdrawing a solute from its solution in water by another solvent which is immiscible with the water phase [3][4]. M. Setyadji et al. previously conducted solvent extraction of neodymium concentrates from processed monazite sand using the extractants trioctylamine (TOA), tributyl phosphate (TBP), trioctylphosphine oxide (TOPO) and di-(2-ethylhexyl) phosphoric acid (D2EHPA) in kerosene [4]. Konaem Wasamon et al. studied the modeling of neodymium ion extraction by a mixture of D2EHPA and TOPO using a hollow fiber supported liquid membrane [6]. J. Kraikaew et al. studied the application of isomolar mixtures of tributyl phosphate (TBP) and di-(2-ethylhexyl) phosphoric acid (D2EHPA) in kerosene at room temperature for the extractive separation of individual rare earths from a mixed rare earth nitrate feed solution, compared with 50% TBP in kerosene [9]. M.O.
Andropov et al. carried out extraction by shaking the organic and aqueous phases in separating funnels with subsequent phase separation. REEs were extracted in the presence of salting-out agents by extraction with 100% tributyl phosphate [10]. According to Nernst's distribution law, when a solute that is soluble in both solvents is introduced into two immiscible solvents, it divides between them according to its solubilities. The two solvents are generally an organic solvent and water. In practice the solute distributes itself between the two solvents after being shaken and left to separate. The ratio of the solute concentrations in the two solvents remains constant at a fixed temperature. This constant is called the distribution constant or distribution coefficient. The distribution coefficient is expressed by the following formula [4][5][9]:

Kd = C1/C2 = Co/Ca

where Kd = distribution coefficient and C1, C2, Co, and Ca respectively are the solute concentrations in solvents 1, 2, organic, and water. In accordance with convention, the solute concentration in the organic solvent is written above and the solute concentration in the aqueous solvent is written below. From the formula, if the value of Kd is large, the solute will quantitatively tend to be distributed more into the organic solvent, and vice versa. The distribution coefficient can also be expressed as the distribution ratio. The distribution ratio (D) is the most important parameter involved in solvent extraction, along with the separation factor (α). These are determined using

D = [A]org/[A]aq, α = D1/D2

where [A]org and [A]aq are the equilibrium concentrations of the metal of interest, in all its existing species, in the organic and aqueous phases, respectively. Consequently, the separation factor is the ratio between the distribution ratios of the metals 1 and 2 that are of interest. The separation factor represents the selectivity between these two metals in the extraction.
The extractability of neodymium ions in this research can be calculated as the percentage of extraction [12]:

%E = 100 × ([C]f,in − [C]f,out) / [C]f,in

where [C]f,in denotes the initial concentration in the feed phase and [C]f,out denotes the outlet concentration in the feed.

Materials and methods

Materials used were: monazite sand derived from side products of tin industrial processing on Bangka-Belitung Island; the technical chemicals H2SO4, NaOH, ammonia, H2C2O4 and HNO3. The solvents used were D2EHPA, TBP, TOPO and kerosene, with aquadest and pure REE material for standard analysis. All chemicals were obtained from the Process Technology Laboratory, PSTA-BATAN Yogyakarta. Instruments used were: glass beaker, ball mill, sieve, analytical scales, glassware, porcelain crucible, muffle furnace, magnetic stirrer and heater, various size flasks, spray bottles, volumetric pipette, propipette, small 10 mL vial bottles, spex film, pH meter, and an X-ray fluorescence (XRF, Ortec 7010) spectrometer.

Experimental procedure

Preparation of Nd concentrate

REOH concentrate of 100 grams plus 50 g of NaOH and 200 mL of water was heated at 200 °C for 2 hours. After washing with hot water until the pH was neutral, it was filtered, dried and weighed. 60 g of the resulting REOH concentrate was dissolved in 80 mL of HNO3 plus 2 g of KBrO3, and heated to boiling while stirring for 30 minutes. The solution was then precipitated by adding ammonia to pH 2.4. The precipitate formed is the Ce concentrate (Ce hydroxide), which was subsequently filtered off, leaving the RE nitrate filtrate (low Ce content). The Ce concentrate was dried, weighed and analyzed using XRF. The RE nitrate filtrate was precipitated by adding 15% ammonia to raise the pH to 6.5. The precipitate formed was filtered, dried, weighed and analyzed using XRF. The filtrate produced from the pH 6.5 precipitation was treated with 15% ammonia to raise the pH to 8. The precipitate formed was filtered, dried, weighed and analyzed using XRF. This precipitate is the Nd concentrate.
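The quantities defined above can be tied together in a short numerical check; the Kd values are the ones quoted for the recommended condition, and everything else follows from the definitions (the 100 → 68.5 g/L feed drop is a made-up illustration of a 31.5% extraction, not a measured pair):

```python
def separation_factor(kd_1, kd_2):
    """alpha = Kd(metal 1) / Kd(metal 2)."""
    return kd_1 / kd_2

def extraction_percent(c_feed_in, c_feed_out):
    """%E = 100 * (C_in - C_out) / C_in."""
    return 100.0 * (c_feed_in - c_feed_out) / c_feed_in

kd_nd, kd_pr = 0.46, 0.27  # reported distribution coefficients
print(round(separation_factor(kd_nd, kd_pr), 1))  # -> 1.7, as reported

# A feed dropping from 100 g/L to 68.5 g/L corresponds to E = 31.5%
print(extraction_percent(100.0, 68.5))  # -> 31.5
```

A separation factor close to 1 (as here) means many counter-current stages would be needed for a clean Nd/Pr split.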
Synergistic solvent extraction

For solvent extraction of the Nd concentrate using the synergistic solvents D2EHPA and TOPO, a solution of Nd(OH)3 with an Nd concentration of 100 g/L was placed in an Erlenmeyer flask, and 5 mL of D2EHPA (concentration varied over 0.8, 0.6, 0.4 and 0.2 M) and 5 mL of TOPO (concentration varied over 0.2, 0.4, 0.6, and 0.8 M) were added. The extraction was carried out at a stirring speed of 250 rpm and a reaction time of 30 min. Solvent extraction of the Nd concentrate using the synergistic solvents D2EHPA and TBP was performed in the same way. After equilibration, the mixture was transferred to a separation funnel where the phases were allowed to separate. The concentrations of metal ions in the aqueous solutions were determined by XRF, and the concentrations of metal ions in the organic phase were calculated by a mass balance. The final volumes of the aqueous and organic solutions after the complete separation of phases were measured, and no changes in volumes were detected during the extraction. All the experiments were performed at room temperature.

Results and discussions

The overall research investigated the effect of the % ratio and volume ratio of the synergistic solvents D2EHPA-TOPO and D2EHPA-TBP on the distribution coefficients, extraction efficiencies and separation factors of Nd, Pr and Sm. The variables investigated were the % ratio of the synergistic solvent (10:0, 8:2, 6:4, 4:6, 2:8 and 0:10) and the synergistic solvent volume ratio (0.5:1, 0.75:1, 1:1, 1:0.5, 1:0.75). Graphs of the correlation of the distribution coefficient (Kd), the extraction efficiency (E) and the separation factor (α) with the % ratio of the synergistic solvent, as well as graphs of the correlation of Kd, %E and α with the volume ratio of the synergistic solvent, are shown in Fig. 1 to Fig. 12. In Figure 1 and Figure 2 it can be seen that the distribution coefficients and extraction efficiencies of Nd and Sm are greatest when using the synergistic solvent D2EHPA-TOPO at 0:10.
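The organic-phase concentration obtained "by a mass balance" amounts to: whatever metal left the aqueous phase must be in the organic phase. A minimal sketch with hypothetical numbers (none of them are measured values from this study):

```python
def organic_phase_conc(c_feed, v_feed, c_aq, v_aq, v_org):
    """Metal concentration in the organic phase from a mass balance:
    (metal initially in the feed - metal left in the aqueous phase)
    divided by the organic-phase volume."""
    return (c_feed * v_feed - c_aq * v_aq) / v_org

# Hypothetical run: 100 g/L Nd feed, equal 10 mL phase volumes,
# 68.5 g/L remaining in the aqueous phase after equilibration.
print(organic_phase_conc(100.0, 10.0, 68.5, 10.0, 10.0))  # -> 31.5 (g/L)
```

This only holds because, as stated above, no volume changes were detected during the extraction; otherwise the measured final volumes must be used.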
With the synergistic solvent D2EHPA-TOPO at 0:10, Nd has Kd = 0.6 and an extraction efficiency of 36%, and Sm has Kd = 2.5 and an extraction efficiency of 70%. The distribution coefficient and extraction efficiency of Pr are greatest when using the synergistic solvent D2EHPA-TOPO at 4:6, with Kd = 0.9 and an extraction efficiency of 50%. Figure 3 shows a tendency of the separation factor to increase in proportion to the increase of the D2EHPA-TOPO % ratio. The maximum separation factor of Nd-Pr […]. The authors of [13] conducted an extraction of Nd ions using a toluene extractant at various process conditions; the highest extraction efficiency obtained was 33.56%. Another researcher, Gergoric, M. et al. [14], carried out Nd extraction using a 30% D2EHPA extractant in hexane; the highest efficiency obtained was 50%. Solvent extraction research on an REE solution feed using 100% tributyl phosphate solvent, carried out by M.O. Andropov et al., gave extraction results with REE distribution coefficients between 1 and 3 [10]. The results of this study are also better when compared with previous research by the same researchers. In the previous study, Setyadji, M. et al. [4] optimized the neodymium concentrate extraction process resulting from the processing of monazite sand using various solvents; it can be concluded that Nd extraction can be done using TBP or TOA solvent. The optimum condition of Nd extraction using TOA solvent is in HNO3 of 2 M concentration; under this condition the distribution coefficient of Nd is 0.65, the Nd extraction efficiency is 37.10%, and the contents are Nd2(C2O4)3 67.14%, Ce2(C2O4)3 1.79%, La2(C2O4)3 1.37% and Y2(C2O4)3 24.70%. For Nd extraction using TBP, at the optimum condition of 1 M HNO3, Kd(Nd) = 0.20 and the Nd extraction efficiency = 17%.
Conclusion

Neodymium extraction of the neodymium concentrate using the two synergistic solvents D2EHPA-TOPO and D2EHPA-TBP did not provide much improved extraction results, whether in the distribution coefficient, the extraction efficiency or the separation factors. The results showed that the concentration and volume of the synergistic solvents greatly influenced the distribution coefficient, extraction efficiency and separation factors. Of all the parameters studied, the highest separation factors were obtained using the synergistic D2EHPA-TOPO ratio of 4:6 and the feed-to-solvent volume ratio of 1:0.5, i.e., α(Nd-Pr) = 2.55 and α(Nd-Sm) = 1.47, but the distribution coefficients and efficiencies of the three elements are low under these conditions, so they were not selected. The recommended condition is to use the 4:6 D2EHPA-TOPO synergistic solvent and a feed-to-solvent ratio of 0.5:1.0. Under this condition, Kd(Nd) = 0.46, Kd(Pr) = 0.27, Kd(Sm) = 0.72, E(Nd) = 31.5%, E(Pr) = 21.3%, E(Sm) = 41.8%, with the separation factor of Nd-Pr = 1.7 and the separation factor of Nd-Sm = 0.84.
Fault interpretation in seismic reflection data: an experiment analysing the impact of conceptual model anchoring and vertical exaggeration

Abstract. The use of conceptual models is essential in the interpretation of reflection seismic data. It allows interpreters to make geological sense of seismic data, which carries inherent uncertainty. However, conceptual models can create powerful anchors that prevent interpreters from reassessing and adapting their interpretations as part of the interpretation process, which can subsequently lead to flawed or erroneous outcomes. It is therefore critical to understand how conceptual models are generated and applied to reduce unwanted effects in interpretation results. Here we have tested how interpretation of vertically exaggerated seismic data influenced the creation and adoption of the conceptual models of 161 participants in a paper-based interpretation experiment. Participants were asked to interpret a series of faults and a horizon, offset by those faults, in a seismic section. The seismic section was randomly presented to the participants with different horizontal-vertical exaggeration (1 : 4 or 1 : 2). Statistical analysis of the results indicates that early anchoring to specific conceptual models had the most impact on interpretation outcome, with the degree of vertical exaggeration having a subdued influence. Three different conceptual models were adopted by participants, constrained by initial observations of the seismic data. Interpreted fault dip angles show no evidence of other constraints (e.g. from the application of accepted fault dip models). Our results provide evidence of biases in interpretation of uncertain geological and geophysical data, including the use of heuristics to form initial conceptual models and anchoring to these models, confirming the need for increased understanding and mitigation of these biases to improve interpretation outcomes.
Geoscientists employ mental models (or "conceptual models") that integrate their observations and that conform to their understanding of the world (Shipley and Tikoff, 2016).
When confronted with geological data, interpreters need to apply different conceptual models, acquired during their training and past experience (through learning), together with robust interpretation methodologies, in order to produce interpretations that honour the data, particularly in areas of great uncertainty (Bond et al., 2007). Interpreters need to be able to identify the key elements (e.g. growth geometries, regional level) and employ different validation techniques (e.g. balancing or restoration) that allow differentiation between (a priori similar) conceptual models (Bond, 2015). The conceptual models therefore incorporate all the elements that shape the geologist's knowledge of a certain aspect of the geology; for example, the conceptual model of a turbidite system will include characteristics of its origin and evolution, common stratigraphic sequences, lithological composition, and associated stratigraphic structures. These conceptual models are dynamically modified or renewed with the arrival of new observations (input information) and are used to produce predictions (inferences) that can help to answer questions about the world (Shipley and Tikoff, 2017). Conceptual models are therefore the basis of interpretation, as they provide the necessary criteria to make sense of the data (Frodeman, 1995). To deal with uncertainty, interpreters employ heuristics (or "rules of thumb") in the process of generating their conceptual models, and that makes them subject to a broad range of cognitive biases (Kahneman et al., 1982). One of these biases relates to the capability of interpreters to adjust their interpretations away from their initial ideas or conceptual models. This type of bias, called anchoring, has been identified in many decision-making processes since it was first described by Tversky and Kahneman (1974), and it takes place in the seismic interpretation process.
Rankey and Mitchell (2003) investigated the effect of anchoring in an interpretation experiment by asking interpreters to reassess their seismic interpretations after being provided with additional well data. Their work shows that most interpreters did not feel that their interpretations needed to change substantially, in spite of data showing changes in porosity and net-to-gross predictions that did not fit with their initial interpretations. Their results suggest that interpreters were anchored to their initial conceptual models and that they were reluctant to change their mind in light of new data. In a different experiment, Bond et al. (2007) observed that participants asked for the geographical location of the section and suggested that interpreters could use this information to build their conceptual models, by using geographically specific knowledge of, for example, the relevant tectonic setting to anchor their interpretation. Hence, an interpreter knowing a seismic section was from the North Sea may assume a conceptual model based on an extensional tectonic regime and will consciously and unconsciously look for normal faults in the seismic data. However, if the conceptual model is wrong (e.g. there is significant inversion in the seismic section), the interpretation could be compromised. So although conceptual models can be dynamically modified or renewed with the arrival of new observations, as described by Shipley and Tikoff (2017) and others, anchoring bias often results in limited adjustment from initial models. Thus, although conceptual models are needed to develop geologically sound interpretations, they can also create anchors to potentially erroneous outcomes. The use of tectono-sedimentary conceptual models in seismic interpretation has been extensively documented in the literature (e.g. Strecker et al., 1999; Nielsen et al., 2008; Alcalde et al., 2014).
Understanding what elements influence conceptual model development and application in seismic interpretations is useful to better grasp how interpretations are made. Applying the appropriate conceptual models requires assessment, by the interpreter, of objective uncertainty, such as considering errors in data processing or acquisition, and of subjective elements, such as the potential biases they bring to the interpretation from their background and experience (Bond, 2015). Alcalde et al. (2017c) argue that image presentation also has a subdued effect on the way seismic image data are perceived and interpreted. Here, we develop this theme by investigating how the presentation of vertically exaggerated seismic image data influences conceptual model choice and application, and the subsequent interpretation outcome. Modern computer-based methods provide important advantages for the interpretation of seismic data, such as the generation of 3-D models, attribute analysis or easy access to multiple display options (e.g. changes in scale, colour palettes). However, the use of computers generally results in the on-screen interpretation of a vertically exaggerated seismic image, due to the conflicting ratios of a 1 : 1 seismic image with screen dimensions (Bond, 2015). Furthermore, most 2-D seismic cross sections published in the literature are displayed vertically exaggerated (Stewart, 2011), although it is likely that multiple displays were employed during the interpretation stage. Vertical exaggeration of seismic image data creates images with apparent reflection continuity and exaggerates the dips of structures and horizons. Conscious application of seismic image stretching is used in the seismic interpretation process because it helps to enhance certain aspects of the display that ease the interpretation (Stewart, 2011).
It helps, for instance, to amplify low-relief structures, which otherwise appear compressed and difficult to differentiate (Feagin, 1981; Bertram and Milton, 1996). For example, Brothers et al. (2009) report that vertical exaggeration helped them to delineate small, otherwise imperceptible changes in stratal geometry in their seismic interpretation study of the Salton Sea. Vertical exaggeration can also be used to mitigate the difference between vertical and horizontal sampling, which can be considerable depending on the acquisition parameters, the impact of which is to make images appear stretched (Stewart, 2011). These examples highlight the usefulness of scale variation during interpretation. However, changes in the appearance of seismic image data through subconscious or conscious vertical exaggeration change an interpreter's perception of an image. The change in image character is often unintentional and can result in unwanted perceptual bias during interpretation. This can subsequently lead to misinterpretations, particularly if the interpreted geological structures are complex (Stone, 1991). Vertical exaggeration can also make features, like gas escape chimneys, appear narrower than they are (Horozal et al., 2009). Black et al. (1994) noticed that vertically exaggerated seismic sections can result in gently dipping reflections being perceived as more steeply dipping, which may lead to the erroneous conclusion that migration of the seismic data is required. Similarly, Stewart (2012) investigated the impact of vertical exaggeration on fault dip and observed that structural restoration of interpretations conducted on exaggerated sections leads to unrealistic subsurface models. Thus, vertical exaggeration in seismic interpretation can have positive and negative influences on interpreter perception of the image and on interpretation outcome.
Here we test the theory that the presentation of seismic image data in a vertically exaggerated format impacts the perceptions of interpreters, influencing the conceptual models they apply in their interpretation and their outcome. We focus on analysis of fault and horizon interpretations in a clipped seismic image. Interpreters were randomly presented with different vertical exaggerations (1 : 2 and 1 : 4) of the same seismic image. Statistical analysis of fault and horizon placement - fault dip angle, fault dip direction and fault type - allows us to draw conclusions on the effect of vertical exaggeration on interpretation.

Experiment setup

The interpretation experiment consisted of a ca. 15 km long clipped portion of a 2-D seismic image from the Browse Basin, NW Australia (Fig. 1), available on the Virtual Seismic Atlas (https://www.seismicatlas.org, last access: 30 September 2019). This seismic image has been interpreted as a series of normal faults dipping to the NW (left-hand side of the section) overlain by post-tectonic sediments. These faults could potentially have been formed in the Late Carboniferous to Early Permian rifting event (Struckmeyer et al., 1998; Keep and Moss, 2000). The area has undergone different stages of reactivation since the Early Triassic, so inversion structures can also be found (Keep and Moss, 2000). The section used in this experiment was originally downloaded with no vertical exaggeration (i.e. with an approximate horizontal to vertical ratio of 1 : 1), according to the Virtual Seismic Atlas information. In a series of interpretation experiments, the seismic image was presented to participants with a horizontal to vertical exaggeration of 1 : 4 (Fig. 2a) or 1 : 2 (Fig. 2b), hereafter called the "1 : 4" and "1 : 2" sections. The sections were presented in two-way travel time (TWT), and no information about the actual depth of the sections was provided.
The participants were asked to "interpret the main faults crossing the section as deep as possible", as well as to add a "sedimentary horizon to mark the displacement", and they were given 15-30 min to complete their interpretations. The experiment as presented to the participants can be found in the Supplement. The participants also completed an anonymous questionnaire designed to collect information about their background, training, knowledge, and experience in structural geology and seismic interpretation. The interpretation experiment was completed by 161 students, of which 126 participants (78 % of the total) were undergraduate students and 35 participants (22 % of the total) were postgraduate students, from different universities in the UK, France and Spain. The participants mostly have geology (72.5 %) and geophysics (12.5 %) backgrounds and considered themselves as having basic to good proficiency in structural geology and seismic interpretation (> 93 % of the participants). We focused this experiment only on students to observe the potential variability in interpretation of the same section in a group of people with similar experience and background.

Interpretation results

The two vertically exaggerated seismic images were assigned randomly to the participants: the 1 : 2 section was interpreted 88 times (55 %) and the 1 : 4 section 72 times (45 %). The interpretation results were digitised manually and then converted to a 1 : 1 vertical exaggeration (VE = 1 : 1) for comparison; therefore, the fault dip angles presented in this work are at VE = 1 : 1 in time. As the sections were interpreted in TWT, the analysed fault dips are not true dips (i.e. those observed in sections in depth), but their relative differences are still comparable. Individual examples of the interpretation results after digitisation from both the 1 : 2 and 1 : 4 sections are shown in Fig. 3.
Initially, interpretations were grouped based on fault dip direction, to the left or to the right. Fifteen interpretations (9.4 % of the total), corresponding to equivocal, blank, or interpretations with faults dipping in both directions (e.g. systems of faults and their conjugates), were not included in further analysis. Of the remaining 119 interpretations, most participants interpreted faults dipping to the right (67 interpretations, 56 % of the total interpretations), rather than to the left (52 interpretations, 44 % of the total) (Fig. 4). The relative proportion is greater in the 1 : 4 sections (39 interpretations to the right, 59 %) compared to the 1 : 2 sections (28 interpretations to the right, 53 %). These two groupings were identified as it was apparent that participants interpreting faults dipping to the right and those interpreting faults dipping to the left had employed two different conceptual models. This resulted in four datasets with two pairs of properties (i.e. 1 : 2-left, denoted "1 : 2L", 1 : 2-right or "1 : 2R", 1 : 4-left or "1 : 4L", and 1 : 4-right or "1 : 4R") that were further analysed in detail. This subdivision allows us to study whether the potential differences can be attributed to the section interpreted (i.e. 1 : 2 or 1 : 4) or to the conceptual model used in the interpretation. We analysed the fault type (i.e. normal or reverse) and measured the fault dip angle interpreted by the participants. The fault type results do not show significant differences between the 1 : 2 and 1 : 4 section interpretations, with 32 %-33 % of the participants interpreting reverse faults and 67 %-68 % interpreting normal faults (Fig. 4). However, differences in fault type can be correlated to the dip direction of the fault (Fig. 5). Only one participant (3 %) amongst the leftward-dipping datasets (i.e.
1 : 2L and 1 : 4L) interpreted the fault motion as reverse, while the vast majority (35 participants, 97 % of the total) interpreted leftward-dipping normal faults. In contrast, most rightward-dipping faults were interpreted as reverse (31 interpretations, 56 %) instead of normal (24 interpretations, 44 %). This result is more pronounced in the 1 : 4R, with 61 % of faults interpreted as reverse (14 interpretations), compared to 53 % in the 1 : 2R (17 interpretations). The dip angles of the faults were calculated by drawing a horizontal line at the approximate mid-depth point (1.1 s TWT) of the seismic section, with the aim of crossing the majority of the faults around their midpoint. Similar numbers of fault interpretations were made on the 1 : 4 section (a total of 300 faults interpreted by 72 participants, over 4 faults interpreted per participant) and the 1 : 2 section (272 faults by 88 participants, over 3 faults interpreted per participant) (Fig. 6). The fault dip angle analyses were compared across the four datasets (Fig. 7). The largest difference between the 1 : 4 and 1 : 2 sections is highlighted here, with a median fault dip angle of 22° in the rightward-dipping, reverse 1 : 4 section vs. 16° in the 1 : 2 section (Fig. 7c and d). The differences in normal fault interpretations, either leftward-dipping (Fig. 7a and b) or rightward-dipping faults (Fig. 7e and f), show only differences of 2-3°, and are therefore less conclusive. The fault dip of the only participant interpreting leftward-dipping, reverse faults was 23° on average, slightly higher than the other two groups. To check whether other factors - specifically, educational background and experience - influenced interpretation outcome, we also analysed the data for disparities between different university cohorts and between undergraduate and postgraduate students.
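The dip-angle measurement described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' actual code: it assumes each digitised fault trace is a list of (x, y) points in the VE = 1 : 1 display coordinates, with the dip taken from the fault segment crossing the mid-depth reference line:

```python
import math

def dip_at_reference(points, y_ref):
    """Return the absolute dip angle (degrees from horizontal) of the
    segment of a digitised fault trace that crosses the y_ref line.
    `points` is a list of (x, y) pairs in 1:1 display coordinates,
    ordered from shallow to deep (y increasing downwards)."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if min(y0, y1) <= y_ref <= max(y0, y1):
            dx, dy = x1 - x0, y1 - y0
            if dx == 0:
                return 90.0  # vertical segment
            return abs(math.degrees(math.atan2(dy, dx)))
    return None  # fault trace does not reach the reference line

# A hypothetical fault trace dipping ~22 degrees to the right:
trace = [(0.0, 0.0), (2.5, 1.0), (5.0, 2.0)]
print(dip_at_reference(trace, y_ref=1.1))
```

Because dips are measured in time-display coordinates, the returned angles are apparent dips only, consistent with the caveat above that these are not true dips.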
[Figure caption: Fault dips interpreted at a vertical exaggeration of (a) 1 : 4, (b) 1 : 2, (c) 1 : 4 dipping rightward ("R"), (d) 1 : 2 dipping rightward, (e) 1 : 4 dipping leftward ("L") and (f) 1 : 2 dipping leftward. "n" marks the number of faults analysed; "SD" stands for standard deviation.]

There are no major differences in the analysed results across student cohorts from different universities or between undergraduate and postgraduate students. For the latter comparison the difference in numbers (undergraduate (126) vs. postgraduate (35) students) is large and does not allow easy comparison; despite this, the ratios of leftward- and rightward-dipping faults and the sense of offset are consistent across the cohorts. The effect of level of education and experience in seismic interpretation has been raised in the past (e.g. Bond et al., 2012; Alcalde et al., 2017b), and we suggest that this is still an area of interest for future work.

Conceptual model anchoring

Analysis of participants' interpretations shows that fault interpretations in the seismic image fall into three main categories (Fig. 3): (1) leftward-dipping normal faults with right-dipping horizons (Fig. 3b), (2) rightward-dipping thrust faults with right-dipping horizons (Fig. 3c), and (3) rightward-dipping normal faults with left-dipping horizons (Fig. 3d). Only one interpretation showed leftward-dipping faults with left-dipping horizons and marked the motion of the faults as reverse (Fig. 5). In addition, this interpretation did not show any evidence of correlating horizons across the fault and simply used arrows to mark the motion instead. The low number of interpretations of this type (one) and the difficulty in correlation suggests that interpreting left-dipping faults with reverse fault motions is largely impossible, given the reflection seismic characteristics of the data. Faults and horizons (red and blue lines in Fig.
3, respectively) are interpreted in three ways: (1) along left-dipping discontinuous and chaotic reflections, which align with breaks in rightward-dipping reflections that together give the appearance of a leftward-dipping chaotic seismic fabric (Fig. 3b); (2) along "packages" of right-dipping reflections with greater continuity (Fig. 3c); and (3) at an angle to these right-dipping reflections where reflection continuity is less strong (Fig. 3d). Irrespective of the vertical exaggeration of the seismic image interpreted, most participants interpreted faults dipping rightward instead of leftward (Fig. 4). At the same time, the majority of rightward-dipping faults (56 %) were interpreted as reverse, in contrast to leftward-dipping faults, which are mostly interpreted as normal (97 %) (Fig. 7). We suggest that this is a consequence of the seismic reflection characteristics of the different features that are being interpreted as faults and horizons. The continuity of the rightward-dipping reflections makes them a more "certain" interpretation than the leftward-dipping fabric. When the rightward-dipping reflections are interpreted as horizons, leaving the left-dipping fabric to be interpreted as faults, this invariably leads to interpretation of faults with normal offsets, due to the angular relationship between the fault and horizon interpretations and potentially due to the participants' interpretation, conscious or subconscious, of the nature and geometries of the basin sediments above (Fig. 3b). When the rightward-dipping reflections are interpreted as faults, the sedimentary packages are harder to interpret and horizon interpretations are often forced to cut reflections (Fig. 3d).
When participants have interpreted faults at an angle to the rightward-dipping reflections, where reflection continuity is less strong, this results in steeper fault dip angles, and interpreters often interpret the rightward-dipping reflections as sedimentary packages in horsts between reverse faults (Figs. 3c and 7). In summary, from the analysis of the fault and horizon interpretations of participants, three conceptual models are identified (Fig. 3) that have been applied in interpretations of the data. What we do not know is how the individual participants homed in on their "chosen" conceptual model. The participants were prompted to interpret the faults as their main task in the experiment instructions and, as a secondary element, to interpret a horizon to show fault motion; an interpretation sequence is shown in Fig. 9. We should state that we cannot be sure that all participants followed this workflow, but we have no evidence to suggest that they did not. Irrespective of the exact interpretation sequence, we suggest that once participants started interpreting certain "features" in the reflection seismic image data as faults or horizons, they became anchored to an initial conceptual model and fitted the rest of their interpretation to this model. Consequently, we suggest that interpreters were likely anchored to their initial thoughts on the direction of dip of the faults; the rest of their interpretation is determined by this initial fault model, irrespective of whether later interpretative elements conform to the data (e.g. horizons cutting reflections), as seen in Fig. 3; this has previously been reported by Rankey and Mitchell (2003) and Torvela and Bond (2011). However, there appears to be a threshold of tolerance for data disconfirmation. Note that no leftward-dipping faults with a reverse sense of motion were interpreted, in which case horizons would very distinctively have cut seismic reflectors (see Fig. 9d).
Experience and knowledge are expected to have played a key role in informing the initial observations that led to selection of a conceptual model at the beginning of the interpretation. We purposely chose a student-only cohort to mitigate against the competing effects of experience and knowledge with the other factors we wanted to test. To ensure this was the case we have analysed the data for differences in interpretation outcome between students from different universities and between undergraduate and postgraduate students. This analysis shows no strong evidence that experience had an effect on interpretation outcome.

Fault dip variability

Although we purport that the impact of conceptual model application and anchoring to models has the greatest influence on the interpretation outcomes of this experiment, the experiment results show certain differences in fault dip direction and dip angle between the 1 : 2 and 1 : 4 vertically exaggerated section interpretations (Figs. 4, 6 and 7). Figure 8 shows a projection of the interpreted fault dip angles and their median values for both the 1 : 2 and 1 : 4 sections on a graph of exaggerated vs. unexaggerated dip angles. The interpreted dip angles are projected onto the corresponding curves of vertical exaggeration to show the equivalent unexaggerated dip angle. The same faults interpreted in sections with differing vertical exaggeration should have the same unexaggerated dip angle (x axis) but a differing exaggerated dip angle (y axis). This is the case for the medians of the rightward- and leftward-dipping normal fault interpretations (magenta and dark blue circles in Fig. 8, respectively). In contrast, the median fault dip angles of the rightward-dipping reverse interpretations in the 1 : 2 and 1 : 4 sections (dark pink circles in Fig. 8) are not aligned vertically, indicating that the two cohorts, i.e. participants interpreting the 1 : 2 and 1 : 4 sections, did not interpret the same dipping features as reverse faults.
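The projection described above rests on the standard relation between exaggerated and unexaggerated dips, tan(θ_exag) = VE · tan(θ_true). A minimal sketch of the conversion (the numerical values are illustrative and are not taken from the experiment data):

```python
import math

def exaggerated_dip(true_dip_deg, ve):
    """Apparent dip on a section stretched vertically by factor `ve`:
    tan(theta_exag) = ve * tan(theta_true)."""
    return math.degrees(math.atan(ve * math.tan(math.radians(true_dip_deg))))

def unexaggerated_dip(apparent_dip_deg, ve):
    """Inverse: recover the 1:1 dip from a dip measured at exaggeration `ve`."""
    return math.degrees(math.atan(math.tan(math.radians(apparent_dip_deg)) / ve))

# A 30-degree fault appears much steeper on stretched displays:
print(round(exaggerated_dip(30.0, 2), 1))  # ~49.1 degrees at 1:2
print(round(exaggerated_dip(30.0, 4), 1))  # ~66.6 degrees at 1:4
```

Because the relation is nonlinear in the tangent, the same true dip plots at different heights on the VE curves, which is why vertically aligned medians in Fig. 8 indicate that the same features were interpreted at both exaggerations.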
Interpretations of rightward-dipping faults (at least those interpreted with reverse motion) show an apparent impact of vertical exaggeration on interpretation outcome, whereas the leftward-dipping normal fault interpretations do not. In the 1 : 2 section, interpretations of leftward-dipping faults have higher dip angles on average than those interpreted in the 1 : 4 section and a greater spread in fault dip angle (Fig. 6e and f). The observations of fault dip angle and motion consistency suggest that those interpreting normal faults (either rightward- or leftward-dipping) were unaffected by vertical exaggeration. Note that the interpreted median rightward-dipping fault dip angles are low, 15-17°; when these are separated into normal and reverse faults, the rightward-dipping normal faults have a very low angle of 10-13° (Fig. 7e-f), with the reverse faults having higher average dip angles of 16-22° (Fig. 7c-d). We did not provide the velocity model for the section used, but just for comparison, we converted the faults from TWT to depth assuming a seismic velocity of 3000 m s−1 for the area (following the assumptions and caveats outlined in Stewart, 2011) (Table 1). For the reverse motion faults, the resulting dip angles in depth (31-33°) are closer to an Andersonian-predicted reverse fault dip of 30° and fall within the range of common reverse fault dips of 10-30° (Anderson, 1905, 1951). The normal fault angles (14-30°), however, do not conform to predicted Andersonian fault dips of 45-60°, which are predominant in teaching materials (Alcalde et al., 2017a). The participants did not have access to the regional seismic line, which would have provided context for such low-angle normal faults, or to the actual depth of the sections, so participants might have been expected to attempt to interpret faults with higher dip angles to conform to accepted dip models of normal faults.
We see no evidence of this, and we interpret this observation as data and conceptual model co-confirmation acting dominantly over other reasoning (if any took place). For the interpretations of leftward-dipping faults, the extent of the vertical exaggeration of the interpreted seismic image appears to have an impact on interpretation outcome. Analysis of fault dip angle from the leftward-dipping fault interpretations of the 1 : 2 seismic section shows a greater range in fault dip angle (standard deviation SD = 16°) and a higher median fault dip angle of 29°, compared to the 1 : 4 section interpretations with a median dip angle of 21°, SD = 13° (Fig. 6e-f); that is, an 8° higher median fault dip in the 1 : 2 section. If we now consider only the participants' interpretations that had also interpreted a horizon showing fault motion (Fig. 7a and b), the difference in fault dip angle between the 1 : 2 and 1 : 4 sections decreases to only 2°, with similar standard deviations of 14 and 13°. We suggest that the differences observed between the 1 : 2 and 1 : 4 sections are dominated more by erroneous seismic interpretations than by vertical exaggeration, with those making "dubious" leftward-dipping fault interpretations not completing horizon interpretations. Similarly, the leftward-dipping normal fault interpretations have low dip angles of 24-27°, but not as low as those interpreted to the right, suggesting that the faults are defined more by their seismic character than by any effects of vertical exaggeration. Testing with more display options (e.g. 1 : 6 or 1 : 8 vertical exaggeration) could help to confirm this finding and would be an interesting line for further enquiry. If we consider the observations described in the light of our knowledge of the perceptual impact of vertically exaggerated seismic images (e.g.
Stone, 1991; Black et al., 1994; Horozal et al., 2009; Stewart, 2012), the 1 : 4 section should perceptually have better reflection continuity due to data compression (Stewart, 2011). The higher apparent reflection continuity in the 1 : 4 section could make the rightward-dipping reflections appear more dominant and the discontinuities between the sediment packages less dominant and narrower.

[Table 1 caption: Median values in two-way travel time and their depth-converted equivalents for the 1 : 2 and 1 : 4 sections, divided by dip direction and fault motion. The dips were depth-converted using a uniform velocity model of 3000 m s−1 (as per Stewart, 2011).]

The smaller range in dip angles for the 1 : 4 section compared to the 1 : 2 section (SD = 14 vs. 16°, respectively; Fig. 6a, b) may be the result of this perceptual change. But the lack of consistency in this observation when the data are split between rightward- and leftward-dipping faults (Fig. 6) and also into normal and reverse faults (Fig. 7) leads us to conclude that vertical exaggeration has a subdued impact. Our interpretation of these observations is that the seismic data and conceptual model have a more dominant influence on interpretation than any perceptual bias resulting from vertical exaggeration. Our work does not provide evidence, in this case, to support the conclusions of Stone (1991), Black et al. (1994) and Stewart (2011, 2012) that vertically exaggerated seismic sections cause perceptual bias, compared with the dominant effect of anchoring to conceptual models. We still suggest, however, that multiple visualisations of the data should be made, including at a scale of 1 : 1, and that care should be taken when interpretations of seismic image data have been made in a vertically exaggerated form.
Other experimental work (Alcalde et al., 2017b) showed that interpreters and interpretation outcomes were influenced by seismic reflection contrast and continuity, factors that can be enhanced in vertically exaggerated seismic images. We suggest that future work should further investigate the effect of vertical exaggeration on seismic image properties and interpretation outcomes.

Conclusions and recommendations

The interpretation exercise with 161 participants showed the following:

1. Conceptual models have greater dominance on the interpretation outcome than perceptual bias from interpreting vertically exaggerated seismic sections.

2. Initial conceptual models are anchored to, and there is no evidence for reassessment by participants when data do not conform to their initial model.

3. When conceptual models are confirmed, at least initially, by the data, there is no evidence that accepted models, for example of fault dip, have an impact on interpretation outcome; variability in interpretation (e.g. fault dips in our experiment) is minimal even when it does not conform to accepted models (e.g. Andersonian dips). Instead, the data drive the interpreted fault dip, and the conceptual model and data co-confirm each other.

Our results support the conclusions of other researchers (Rankey and Mitchell, 2003; Bond et al., 2007, 2008) that seismic interpreters need to be aware of potential biases when interpreting seismic image data, particularly in the application of conceptual models; they also need to be aware of the high likelihood of anchoring to initial conceptual models even when data do not confirm or conform to the model. Research has shown that awareness of biases (e.g. George et al., 2000) can help mitigate the potential impacts of bias.
Thus, seismic interpreters and their employers should employ bias awareness in their interpretation workflows and obtain multiple opinions to test a broader range of conceptual models (see Bond et al., 2008, for workflow ideas; for reasoning tests to avoid anchoring see Bond, 2015, and Macrae et al., 2016; and for the potential impact of single conceptual models on decision making see Richards et al., 2015). Research into the effectiveness of different bias awareness techniques and their impact in geological interpretation is an obvious focus for future research. The work presented here and that of many of the authors referenced provides evidence for biases in the interpretation of geological and geophysical data. The resultant interpretation outcomes are not only based on uncertain data, but these uncertainties are compounded by interpretation biases, including using heuristics to form initial conceptual models and anchoring to these. Understanding how to better mitigate bias in interpretation and the competing impacts on outcomes of different biases remains a significant challenge in the geosciences.

Data availability. The seismic image used in the experiment is available on the Virtual Seismic Atlas (https://www.seismicatlas.org, last access: 30 September 2019). The questionnaire presented to the participants is available in the Supplement. Interpretations and statistical analyses are available upon request.

Author contributions. Both JA and CEB conceptualised and designed the interpretation experiment. CEB, AK, OF, RB and PA conducted the experiments at the University of Aberdeen, the University of Grenoble, the University of Barcelona, Imperial College London and the University of Salamanca. JA was responsible for the project administration and data analyses. CEB and JA were responsible for the writing, reviewing and editing of the manuscript with help from GJ, AK, OF, RB and PA.
Role of teleophthalmology to manage anterior segment conditions in vision centres of south India: EyeSmart study-I Purpose: To study the role of teleophthalmology (TO) in the diagnosis and treatment of anterior segment conditions (including adnexal conditions) in rural areas. Methods: This is a pilot study of 5,604 patients, who visited primary vision centres (VCs) for 1 week from 1-7 September 2018. The patients were examined by a vision technician (VT) to identify those who may need teleconsultation. The centres were located in 16 districts of four Indian states of Andhra Pradesh, Telangana, Odisha, and Karnataka. The demographic profile, along with the role of teleconsultation was reviewed. Results: Teleconsultation was advised in 6.9% of the patients, out of which 59.6% were referred to a higher level of care, and 40.4% were treated directly at the VC. Teleconsultations were higher among males (7.0% as compared to 6.6% in females), though not statistically significant (P = 0.55). Teleconsultation was higher in the older population, that is, 60 years and above (14.5%); those with severe visual impairment (VI) (21%) and blindness (31.1%); and in the states of Telangana (11%) and Andhra Pradesh (6.3%). It was noted that 45% of the patients who underwent teleconsultation had pathologies related to ocular surface, cornea and lid, and adnexa-related conditions. Conclusion: Teleconsultation has a significant role in the management of anterior segment conditions in bridging the gap between the patients and ophthalmologists in rural India. TO can also play an important role in the diagnosis and management of anterior segment, lid, and adnexa-related pathologies. "Tele" is a Greek word meaning "distance" and "mederi" is a Latin word meaning "to heal." 
The World Health Organization (WHO) defines telemedicine as "the delivery of healthcare services, where distance is a critical factor, by all healthcare professionals using information and communication technologies for the exchange of valid information for diagnosis, treatment and prevention of disease and injuries, research and evaluation and for the continuing education of healthcare providers, all in the interests of advancing the health of individuals and their communities." [1] The main aim of teleophthalmology (TO) is to minimize unnecessary referrals to an advanced centre and provide quality care closer to the communities. As more than half of the population needing eye care in middle- and low-income countries resides in rural areas, it is the need of the hour to use current technology, communication devices, and imaging capabilities to allow a broad spectrum of coverage in rural communities, providing screening and necessary referrals. Cost inflation in health care, a substantial gap in the availability of medical care between high and low socioeconomic countries, and the disparity in quality of care between middle- and low-income and high-income nations make it even more difficult for rural communities to access health care. [2] Therefore, it is essential to look at the various advantages of TO, such as cost reduction for both the patients and the ophthalmologists; comprehensive patient evaluation to minimize test replications; and avoiding unnecessary referrals. [2][3][4] In India, TO consultations have great potential in reaching remote rural populations and addressing the challenges of distance and access to quality eye care. The usefulness of TO has been demonstrated in the diagnosis of retinal conditions, such as diabetic retinopathy (DR). [5] In addition to retinal conditions, TO has also been used for neuro-ophthalmology, emergency teleconsultations, suspicious nerve cuppings, and uveitis.
[4] However, the use of TO in adnexa-related conditions or anterior segment conditions like infections is limited. A study by Rayner et al. showed that certain adnexal conditions such as congenital and involutional ptosis could be accurately assessed using telemedicine. [6]

Cite this article as: Misra N, Khanna RC, Mettla AL, Marmamula S, Rathi VM, Das AV. Role of teleophthalmology to manage anterior segment conditions in vision centres of south India: EyeSmart study-I. Indian J Ophthalmol 2020;68:362-7. This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms. For reprints contact: reprints@medknow.com

According to WHO, corneal diseases are a major cause of blindness in the world after cataract and glaucoma. [7] In India, 6.8 million people have severe visual impairment in at least one eye due to corneal diseases, and 1 million of these have bilateral involvement. [8] Gupta et al. noted that corneal blindness due to infectious keratitis was commonly reported in the rural population, particularly among those below the poverty line, the illiterate, and those using traditional methods to cure an eye condition (crushed plants, animal saliva, urine), which lead to ocular toxicity. [7] Prevention, early detection of the eye condition, and prompt treatment can help to control the burden of this dreaded anterior segment condition in rural areas. India is a middle-income country with most of the population residing in rural areas, and corneal infections are a silent epidemic prevalent in these areas, which have limited access to eye care.
To address this situation and save eyes, TO may be useful in seeking remote access to a specialist opinion and early management of patients with corneal infections. TO was also found to be beneficial in reducing unnecessary outpatient appointments with a specialist when it was used to triage referrals. A recent study showed that TO could reduce face-to-face appointments by between 16 and 48%. [9,10] Most TO services are delivered through asynchronous methods (store-and-forward of images); some use a combination of real-time and store-and-forward methods; and fewer use synchronous methods (videoconferencing). [9] At our institute, we have developed a teleconsultation platform, called "eyeSmart App," which is used for teleconsultation at the primary level of care. It is a tablet-based application and can also be used with smartphones. This pilot study was designed with the following objectives:

1. To understand the demographics and ocular profile of those undergoing teleconsultation,
2. To understand the role of TO in diagnosing and treating anterior segment conditions (including adnexal conditions),
3. To describe various anterior segment ocular conditions that can be diagnosed and treated.

Methods

Our pyramidal model for eye care delivery has a centre of excellence (CoE) at the top catering to 50 million population, followed by tertiary centres (TCs), each for 5 million population. At the next level, there are secondary centres (SCs) covering 0.5-1 million population, followed by vision centres (VCs) at the primary level for 50,000 population, and vision guardians (VGs) for 5,000 population. The functions at each level of the pyramid are clearly delineated and demarcated. The CoE and TCs are located mainly in urban areas, and the SCs and VCs are located in rural areas. A network of 100 VGs, 10 VCs, and 1 SC covers half a million population and is called a village vision complex (VVC).
At present, the network covers four Indian states of Andhra Pradesh, Telangana, Odisha, and Karnataka and includes one CoE in Hyderabad; three TCs in Bhubaneswar, Visakhapatnam, and Vijayawada; 20 SCs; and 180 VCs. The SCs are run by one or two ophthalmologists who are trained at a TC or the CoE for a year. Patients from SCs are referred to TCs or the CoE only for advanced care and management of complex problems. The VCs are manned by a vision technician (VT). VTs are local youth who have completed high school and are trained for 1 year to provide primary eye care, including eye examination, refraction, dispensing spectacles, and appropriate referrals among all age groups. We built an in-house electronic medical record (EMR) system known as eyeSmart EMR. Over the last 9 years, the EMR system has been implemented across the entire network. At the VC level, all VCs in the network have been digitized with the eyeSmart EMR app. Teleophthalmology and video calling are additional services provided by the eyeSmart EMR system. The eyeSmart EMR app is installed on an Android tablet (iBall Slide Brace XJ) and connected to the slit-lamp biomicroscope (Carl Zeiss SL 115). The app helps in capturing the demographic data, clinical information, and images of the eye for a TO consultation through the cloud. The tablet has a good camera that can capture high-quality images and hence can be used for TO consultation. Camera specifications include an 8 MP AF rear camera with LED flash and a 5 MP front camera for video chatting. The slit-lamp illumination, a 15 V LED on the Carl Zeiss slit-lamp, is utilized for capturing the pictures. The tablet coupled with Skype Lite provides an excellent platform for a teleconsultation from the VCs. Internet connectivity is established on the tablet through a 3G network SIM card. The video conferencing tool, Skype, is used for all the TO consults (Skype, Microsoft Corp, Redmond, USA).
For optimum patient management and communication with the higher centres, a referral system is in place whereby the VT decides on either direct referral or teleconsultation using the eyeSmart App. The VTs are trained to understand and follow the guidelines developed for teleconsultation. In brief, the following conditions require teleconsultation: lid-related abnormalities, ocular surface abnormalities, red eye, corneal pathologies, pupil and iris abnormalities, and lens-related pathologies. Based on the type of eye condition, the images can be of two types: an external image, mainly for conditions that affect the eyelids and conjunctiva, and slit-lamp images, for conditions that involve the cornea and anterior segment of the eye. External images of the adnexa may also be required for conditions such as ptosis, squint, and lid abnormalities. All the TO consultations from these VCs are received at the TO command centre (TOCC) stationed in the CoE. Fig. 1 shows the flow of patients to VCs as well as how the teleconsultation process works. The patient's demographic details are first registered by the VT on the tablet using the eyeSmart EMR app. The preliminary examination is then performed, which includes chief complaint, present and past illness, systemic history, family history, and previous surgical history. This is followed by a general examination, recording the visual acuity, objective and subjective refraction, slit-lamp examination, and finally spectacle prescription. Based on the diagnoses, the relevant images are captured using the tablet attached to the eyepiece [Fig. 2]. The EMR of the patient along with ocular images is then synchronized online and shared with the TOCC through the app for an ophthalmologist's opinion. The VT referring the patient also sends information related to the ocular condition or any query to the TOCC.
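The VT's decision rule described above can be sketched as a simple lookup. This is an illustrative reconstruction, not the actual eyeSmart app logic, and the condition labels are paraphrased from the guideline list in the text:

```python
# Conditions the guidelines flag for teleconsultation (paraphrased labels).
TELECONSULT_CONDITIONS = {
    "lid abnormality",
    "ocular surface abnormality",
    "red eye",
    "corneal pathology",
    "pupil or iris abnormality",
    "lens pathology",
}

def vt_action(findings):
    """Illustrative triage: teleconsult if any finding is on the guideline
    list, otherwise manage at the vision centre (e.g. refraction)."""
    if any(f in TELECONSULT_CONDITIONS for f in findings):
        return "teleconsult"
    return "treat at VC"
```

For example, `vt_action(["red eye"])` would route the patient to the TOCC, while a purely refractive case stays at the VC.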
The patient is then connected to the ophthalmologist present at the TOCC through a video call using Skype Lite services available on the tablet. The ophthalmologist at the TOCC reviews the clinical information and images and provides a diagnosis for the ocular condition. The nature of the disease and the possible interventions (medical or surgical) are discussed with the patient. The patient is then referred to the SC or TC, through the eyeSmart EMR app, for further medical or surgical management if needed. The advice given by the ophthalmologist is synchronized via the cloud to the eyeSmart EMR app for documentation. For our pilot study to assess the role of teleconsultation in diagnosing and treating anterior segment conditions (including adnexal conditions), we reviewed all the patients visiting all the VCs in one week, from 1st to 7th September 2018. VCs that were well-equipped with eyeSmart tablets and good connectivity for communication were included in the study. The VCs are located in 16 districts, in four Indian states of Andhra Pradesh, Telangana, Odisha, and Karnataka.

Results

During the 1-week period from 1st to 7th September 2018, 5,604 outpatients visited the VCs. A total of 4,710 patients were seen in Andhra Pradesh, 577 in Telangana, 119 in Odisha, and 198 in Karnataka. Of the total number of patients screened, 3,099 (55.3%) were males and 2,505 (44.7%) were females. Of the total patients seen, 4,667 (83.3%) were treated at the VC level (including screening for refractive error); 384 (6.9%) had teleconsultation; and the remaining 553 (9.9%) were directly referred to the next level of care. Of the 384 teleconsultations, 229 (59.6%) were referred to a higher level of care and 155 (40.4%) were treated at the VC [Fig. 3]. Table 1 shows the demographic difference between those who had teleconsultation and those who did not.
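The patient-flow percentages reported above follow directly from the counts; a quick arithmetic check (all numbers taken from the text):

```python
total = 5604
treated_at_vc, teleconsult, direct_referral = 4667, 384, 553
# the three outcomes partition the full cohort
assert treated_at_vc + teleconsult + direct_referral == total

def pct(part, whole):
    """Percentage rounded to one decimal, as reported in the paper."""
    return round(100 * part / whole, 1)

vc_share = pct(treated_at_vc, total)          # 83.3
tele_share = pct(teleconsult, total)          # 6.9
referral_share = pct(direct_referral, total)  # 9.9

# of the 384 teleconsultations: 229 referred onward, 155 resolved at the VC
onward_share = pct(229, teleconsult)          # 59.6
resolved_share = pct(155, teleconsult)        # 40.4
```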
Teleconsultation was higher in males (7.0%) compared to females (6.6%), though not statistically significant (P = 0.55). Teleconsultation was also higher in the older population, that is, 60 years and above (14.5%); in those with severe visual impairment (VI) (21%) and blindness (31.1%); and in the states of Telangana (11%) and Andhra Pradesh (6.3%). Of the total patients seen, though 404 (7.2%) had a history of diabetes or hypertension, only 48 (11.9%) had teleconsultation. Table 2 shows the diagnoses for the 384 teleconsultations made by the ophthalmologist at the TOCC. For better understanding, ocular surface pathologies (conjunctivitis, limbitis, pterygium, conjunctival abrasions, and foreign bodies) and corneal conditions (keratitis, corneal foreign body, epithelial defects, corneal opacities, and erosions) are termed anterior segment-related pathologies. The lid and adnexa-related conditions that were teleconsulted included ptosis, acute and chronic dacryocystitis, globe-related pathologies, meibomianitis, and blepharitis. Lens-related conditions included cataract, pseudophakia, aphakia, and posterior capsular opacification. The most common diagnosis according to the ophthalmologist present at the TOCC was lens-related (38.3%), followed by ocular surface pathologies (30.2%), lid and adnexa-related pathologies (8.6%), and corneal pathologies (6.3%). One hundred and seventy-three (45.1%) of the patients referred for a teleconsultation had anterior segment-related problems, and 8.5% were diagnosed as emmetropic. Ocular surface-related conditions, corneal pathologies, and lid and adnexa-related conditions could be easily managed with TO by sending the images to the TOCC for diagnosis and management. This saves time not only for the patient but also for the allied health personnel and the ophthalmologist.

Discussion

In the past, a survey conducted by Woodward et al.
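The reported P = 0.55 for the male/female difference is consistent with a standard two-proportion z-test. A stdlib-only sketch; the sex-wise teleconsultation counts (218 of 3,099 males, 166 of 2,505 females) are reconstructed from the published percentages and are therefore an assumption, not figures stated in the paper:

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled variance, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# assumed counts: 218/3099 males (7.0%) vs 166/2505 females (6.6%)
p_value = two_proportion_p(218, 3099, 166, 2505)  # ~0.55, i.e. not significant
```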
at the University of Michigan Kellogg Eye Centre showed that although most providers did not practice telemedicine, over half of them were comfortable managing eye care consultations that included patients' photographs over the internet. [11] However, recently the use of telemedicine for providing care has increased. In a study conducted in Chennai, eye care screening camps were organized by a team of optometrists, social workers, administrative staff, technology experts, and ophthalmologists, and teleconsultation was done. About 71% of the patients had refractive errors, 15% had cataract, 7% were detected with retina problems, and 7% had other ocular diseases. Some of them were referred to the base hospital to undergo specific tests to confirm the diagnosis. [12] Similar TO connections were established between Edendale Hospital in Pietermaritzburg (South Africa) and Moorfields Eye Hospital in London. There were 113 consultations over 12 months, and of these, 90 patients were examined to determine the impact of TO. The impact was found to be definite in 24% of the cases and possible in 22%, while there was no impact in 53%. A higher proportion of posterior segment and neuro-ophthalmology cases were seen, whereas a low percentage of anterior segment pathologies were seen. [13] Verma et al. highlighted the role of TO in adnexal and orbital diseases with a 2.9% detection using teleconsultation.

Table 2 excerpt (rows and footnotes): Unexplained vision loss 15 (3.90%); Corneal pathologies^ 24 (6.25%). *Lens-related pathology included cataract, aphakia, pseudophakia, and posterior capsular opacification. **Ocular surface pathologies included conjunctivitis, limbitis, abrasions, foreign body, and pterygium. ^Corneal pathologies include keratitis, epithelial defects, corneal foreign body, corneal opacities, corneal erosions. @Lid and adnexa-related conditions included ptosis, acute and chronic dacryocystitis, globe-related pathologies, meibomianitis, and blepharitis.
[14] It has been established that the VT has an effective role at the primary level of eye care. Good levels of agreement in refraction, disease detection, and referral were achieved by VTs in a study conducted by Paudal et al., in which the clinical competency of 24 VTs at 24 VCs was assessed. [15] In another study, excellent agreement was found for the detection of cataract, refractive error, and corneal pathologies by the VT. [16] In our pilot study, the VTs, who are specially trained for a year at an SC or TC, screened 5,604 patients for various eye conditions including refractive errors. Of these, 4,667 (83.3%) were managed without teleconsultation or referrals. The remaining were either referred directly (9.9%) or had teleconsultation (6.9%). Of those with teleconsultation, 40.4% were diagnosed and treated at the VC level, thus avoiding referral to the next level of care. Among all the patients sent for teleconsultation, 45% had pathologies related to the ocular surface, cornea, and lid and adnexa-related conditions. This was followed by lens-related conditions (38.3%). Compared to the study by Verma et al., we found a higher prevalence of anterior segment pathologies (ocular surface pathology and corneal pathology) along with lid and adnexa-related pathologies. [14] In terms of cost-effectiveness, Kumar et al. reported that TO services are cost-effective for patients in rural areas as a substitute for regular eye care at the higher centres. [17] According to Newton, telemedicine proved to be less expensive compared to routine in-person examinations for small clinical care facilities in New York. The reduced expenses on equipment more than compensated for the increased cost of skilled technicians. [18] Sharafeldin et al. did an economic review of TO screening for DR, glaucoma, and ARMD, which provided supportive evidence for the cost-effectiveness of TO, potentially increasing screening accessibility, especially for rural and remote populations.
[19] Though we did not do any formal cost-effectiveness analysis, in our study we assumed that there was a significant saving on time and cost of travel, and avoided loss of wages for these patients. In our study, teleconsultation was higher in the older population, that is, 60 years and above, and in those with severe VI and blindness; whereas in the study conducted by Verma et al., most of the patients belonged to the economically productive age group of 21-40 years. [14] Verma et al. found that 25.7% of the patients had potentially sight-threatening conditions without access to ophthalmic care, [14] and our findings were similar. Teleconsultation was also higher in the state of Andhra Pradesh, possibly due to more footfalls in VCs located in this state. Of the total patients seen, 404 (7.2%) had a history of diabetes or hypertension, and only 48 (11.9%) had teleconsultation. This indicates that all patients who visit these VCs need screening for diabetes and hypertension, and also teleconsultation for further management. While 72% of India's 1.2 billion people live in rural areas, over 70% of the doctors practice in urban areas. [3] The aging population is increasing, with a consequent increase in blindness and ocular comorbidities; however, there is limited access to care in most middle- and low-income countries. As we move into the future, TO will help to bridge that gap. It will allow clinicians to detect eye-related morbidities and provide care to patients in rural, remote, and hard-to-reach locations. [20] One of the limitations of our study was that about 9.9% of the total patients, who could have been managed through teleconsultation at the VC itself, were referred directly by the VT to a higher centre for further review and management. We note that some of these patients could have been teleconsulted and the loop closed at the VC level itself, thus avoiding further referrals and making the intervention cost-effective for the patient.
Hence, there is a need to revisit the referral criteria. Though the VT findings were validated in our previous studies, [15,16] there is also a need to revisit them. Similarly, there were variations in TO consultation between the three states that need further exploration. It is also likely that there was a compliance issue related to referral uptake from VCs to SCs or TCs/CoE. For a better understanding, it is necessary to further analyze the data on patients referred to advanced centres and the subsequent diagnosis and management. For those who did not comply with referral services, a different mechanism has to be designed to ensure consultation at a higher level as well as uptake of services.

Conclusion

The eyeSmart app is a helpful tool in establishing an ocular diagnosis and providing timely intervention. It is useful in connecting patients in rural areas with ophthalmologists and in overcoming the barriers of distance, time, and costs. Moving ahead, the scope of teleconsultations could also include posterior segment problems, increasing access to care and its cost-effectiveness. Apart from this, we need more studies from middle- and low-income countries to look at the efficacy of TO in screening for anterior segment conditions.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
The Treachery of Images: Redefining the Structural System of Havana's National Art Schools: This paper illustrates the contribution that on-site survey and graphical documentation offer to the structural comprehension of 20th century architectural and civil engineering heritage and, therefore, to its sustainable conservation. The research herein presented has identified the true structural system of Havana's National Art Schools, an internationally well-known architectural masterpiece that was recently investigated within the drafting of a comprehensive conservation management plan. This iconic complex was built right after Castro's revolution and was meant to embody Cuba's newfound freedom. To this end, the complex was supposed to be built using Catalan vaulting, a technique loaded with significance due to its provenance, affordability, and flexibility. While most of the literature, the architectural features, and the very designers assert that no concrete nor steel were employed during construction, recent studies suggested that a reinforced concrete core might be hidden behind the masonry-like appearance of the five buildings. The structural analysis performed in order to draft a conservation and management plan for the school site thus became a hermeneutic opportunity to address this topic. Combining direct observation, documentary research, and nondestructive analyses (infrared thermography and magnetometer testing), it was possible to finally redefine the structural nature of these renowned architectures, which are indeed mostly made of reinforced concrete.

Introduction

Through the case study of Havana's National Art Schools, this paper aims to underline the role of graphical documentation within the process of knowledge, which sets the foundations for any conservation activity.
Indeed, images "visible" (buildings in their current conditions), "beyond the visible" (thermographs), and "no-longer visible" (photographs of the construction site and original blueprints) are here combined to redefine the structural system of this 20th century architectural icon. Contrary to what was commonly believed, the architectural complex was largely not built employing the Catalan vaulting technique: the five school buildings were instead found to have a reinforced concrete core hidden behind their masonry-like appearance. Therefore, the research presented here also became an occasion to re-evaluate the relationship between the fields of construction history and restoration, demonstrating how, in architecture, hands-on conservation activity may increase knowledge together with literature study. The analyses and trials performed on the buildings to develop a conservation and management plan unexpectedly brought to light clues that contradicted what had been acknowledged by previous literature. After a brief overview of the history of the National Art Schools (Section 1.1), the paper thus examines the state of the art (Section 1.2), considering both the single case

Given the extent of the task and the limited time available, work was split between the three architects, and each one of them was put in charge of one or two buildings: Garatti (Figure 1b). As the aim was however to create a homogeneous complex, to overcome the physical distance between the pavilions and the style differences owed to the architects' diverse education, the designers sought unity in architectural and structural features. From the very beginning, they identified a few shared principles to follow throughout construction, such as formal freedom, integration with nature, and the employment of the same construction systems and building materials.
With regard to the latter, they also agreed to avoid the use of concrete and steel, arguing that such materials were lacking in the postrevolution phase, in favor of locally sourced materials and workmanship and, especially, of bricks and Catalan vaulting [1,2]. This technique, then unknown to Cuban builders, was originally developed in the Mediterranean area and, especially, in Spain [3]. The construction method consists in the overlaying of several layers of thin tiles (typically no fewer than two) bound together by mortar. The peculiarity of Catalan vaults is that they find stability in their own geometry: their light weight and calibrated shape enable them to avoid the use of reinforcements of any kind as well as to limit the use of fixed centering during construction [4,5]. Both these features contributed to making this technique particularly inexpensive. Thin-tile vaulting (or Catalan vaulting) landed in North America at the end of the 19th century with Spanish architect Rafael Moreno Guastavino [6,7]. While quickly catching on in the United States, it appears that by the 1960s this construction technique still had not reached Cuba, making it a perfect fit for the construction of the National Art Schools: new, flexible, and low cost. Moreover, the arrival of Spanish mason Gumersindo on the island provided the chance to overcome the lack of expertise of local constructors. Both experiments to test the functioning of the structural system and classes to supply the know-how were organized, as confirmed by several photographs (Figure 2). The Cuban Ministry of Construction (MiCons) even issued a booklet on tile constructions, including a few pages containing detailed instructions on how to build Catalan vaults [8]. As a result, although different from one another, the five schools look somewhat alike: they all display organic shapes and appear to be mostly made of masonry, with seemingly thin-tile vaults.
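The claim that Catalan vaults "find stability in their own geometry" can be made concrete: for a shell carrying mainly its own weight, the funicular (pure-compression) shape of the midsurface is a catenary, so a thin-tile vault built to that profile needs no reinforcement. A minimal sketch (the span and parameter values below are illustrative, not measured from the schools):

```python
import math

def catenary_rise(x, a):
    """Height above the springing of an inverted-catenary vault profile
    y = a*(cosh(x/a) - 1): the funicular shape for self-weight, along which
    a thin-tile shell works in pure compression."""
    return a * (math.cosh(x / a) - 1.0)

# illustrative: at half-span x = 3 m with shape parameter a = 4 m,
# the profile rises about 1.18 m above the springing line
rise = catenary_rise(3.0, 4.0)
```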
Given these premises, for several decades the schools were believed to be outstanding examples of this peculiar building technique. This is widely asserted in literature [1,2,9,10] and confirmed by the very architects, who repeatedly failed to clear up the misunderstanding when asked to talk about the schools [11][12][13]. Although they admitted to the use of iron tie rods [14], they never explicitly mentioned that the five buildings have a hidden reinforced concrete core, as pointed out by the results hereinafter presented. The abrupt interruption of the construction site, which occurred in 1965, left three out of the five pavilions uncompleted and thus unable to fulfill their original functional requirements. While Porro's schools have been up and running since 1964, the Schools of Ballet, Music, and Dramatic Arts were soon abandoned and left to perish for several decades. Despite physical abandonment, the utopian genesis of the National Art Schools was never forgotten, and the revolutionary facet of these iconic architectures was frequently analyzed in literature [2,[15][16][17]. If the formal and cultural aspects of the schools have been widely debated through the years, little has been said about more technical matters, such as the construction methods adopted or the structural behavior of these peculiar pavilions. In fact, the first truly scientific study on the topic was conducted by John Loomis in the 1990s: his book, A Revolution of Forms [9], offers a systematic description of the five buildings based on both historical documentation and direct observation.

The State of the Art

While representing an important milestone, Loomis' book contributed to feeding the myth that the architectures were indeed masonry structures. The international influence of the publication brought growing attention to the schools, finally reaching the engineering community.
Picking up on the exceptional features of these buildings, Princeton University's Department of Civil and Environmental Engineering started an investigation aimed at clarifying the organization and behavior of these impressive thin-shell structures [18]. The study mainly focused on the domed pavilions of the School of Ballet, whose shape and dimensions appear particularly daring from a structural point of view, especially since, at the time, they were thought to be built employing the Catalan vaulting technique [9]. However, during an on-site inspection conducted in 2016, Princeton's former student Isabella Douglas detected some anomalies that allowed her to uncover the true structural system of the building. While analyzing the dance pavilions, she noticed recurrent delamination, which had resulted in the detachment of large portions of the outermost layer of tiles. Based on the knowledge of the construction process, the substratum was initially assumed to be a layer of mortar connecting two adjoining layers of tiles, but upon closer examination, it was revealed to be a rather thick layer of reinforced concrete [19][20][21][22]. Such a conclusion was drawn from two relevant observations: the first being the presence of exposed reinforcing steel on one of the pendentives supporting the dome of the dance pavilions, and the second the fact that tile delamination could most easily be explained as a consequence of concrete spalling. Indeed, rebar only became visible due to the detachment of a sizable block of concrete, which generated a 7 cm deep hole, at the bottom of which it is possible to see tie wire (Figure 3). Seeking further validation of these new findings, Princeton's scholars turned to the historical documentation at their disposal. In particular, they resorted to the oral testimony and photographic albums of the school's builder José Mosquera, who carefully documented nearly all stages of the construction site.
Piecing together information collected from direct and indirect sources, Douglas was finally able to redefine the construction system of Vittorio Garatti's School of Ballet, whose bearing structure turned out to be mostly made of reinforced concrete. The deceptive appearance of the building, which suggests otherwise, comes as a direct result of the construction process. Michele Paradiso had come to similar conclusions after examining the very same pictures. In a paper published in 2014 [23], he stated that the extensive use of reinforced concrete shown by the building site's photographs testified to the use of a mixed technique, rather than Catalan vaulting. Moreover, the scholar had had the opportunity to participate in a meeting held at the Cuban Ministry of Construction (MICONS) in 2008, in the presence of some of the engineers and architects who had originally taken part in the construction of the schools. On that occasion, Arch. José Mosquera and Arch. Regino A. Gayoso Blanco affirmed that, in order to ensure the stability of the complex architectural shapes designed by the architects, it was decided to use reinforced concrete. Indeed, although rarely acknowledged, several structural engineers participated in the project. Among others, Edoardo Esenarro and Isabelita Wittmarch worked side by side with Vittorio Garatti, and Ilda Fernandez with Ricardo Porro. As emerged from an attentive study of Mosquera's pictures, the main criterion adopted throughout the building site was that of covering the inside of the wooden formworks with one or more layers of clay tiles before placing rebar and pouring the concrete inside them (Figure 4). This way, once the concrete had set, the wooden formwork could be dismantled, leaving the tile coating in sight while hiding the core structure underneath. To complete the job, the extrados was also tiled, so that it would in fact appear as a thin-tile vaulted structure [22].
The side finishing of the arches is particularly misleading, as it shows the typical pattern of this construction technique: alternating layers of tiles and mortar (Figure 5). Overcoming appearances, Michele Paradiso was the first to point out the use of a mixed technique within the National Art Schools complex. Later on, Princeton's work group was able to confirm Paradiso's argument by identifying the structural system of the domes of the School of Ballet as concrete grid shells with adobe tile covering. The work hereby presented takes a further step forward, demonstrating how all five schools conceal concrete and steel reinforcements. As proven by the aforementioned works, since John Loomis' book, studies on Havana's National Schools of Art have consistently increased [24][25][26][27][28][29][30][31], and so have international recognitions of their significance. Starting in 2000, they have repeatedly been included in the World Monuments Fund's Watch List (2000, 2002, 2016) [32]; in 2003, they were listed in the UNESCO tentative list [33]; and in 2010, they were declared a National Monument by the Cuban Government. The newfound prominence of the Schools of Art also highlighted the need for an improved management of the whole complex, fostering further initiatives aimed at preserving the site. In particular, in 2018, the schools were awarded a grant within the frame of the Getty Foundation's Keeping It Modern initiative [34], which was devoted to the development of a conservation management plan. The project, led by Politecnico di Milano, saw the participation of Assorestauro, Princeton University, Università di Parma, and Universidad de las Artes de Cuba (ISA) and addressed several issues, taking into account the multiple aspects contributing to architectural heritage preservation.
Picking up from where Princeton's studies left off, the occasion also offered a chance to further clarify the structural features of the five buildings, and thus to verify whether the use of reinforced concrete was limited to the School of Ballet or extended also to the other schools, configuring as the main construction system employed despite the original premises.

The International Context: An Overview

The vicissitudes of the National Art Schools should, however, also be considered in light of the contemporary international context. The construction of the schools occurred at a time of debate, when the architects' community had not yet taken a firm position toward reinforced concrete architectures. Indeed, while reinforced stone, in some ways the ancestor of reinforced concrete, had been employed for several centuries, reinforced concrete as currently intended only made its first appearance in the second half of the 19th century. For a few decades, however, its use was mostly limited to industrial or civil engineering applications. The tables turned with the advent of Auguste Perret (1874-1954), whose work legitimized the use of concrete for architectural purposes. In the following years, and until the 1960s, architecture critics found themselves divided on how to approach the extensive use of reinforced concrete: some defended classicism, whilst others supported modernism. In particular, Peter Collins sought continuity with classicism [35,36], showing his entanglement in the past. Reyner Banham and Robert Venturi also addressed the question. According to Colin Rowe, they represented "the polar extremes between which architecture now [1960s] oscillates" [37]. If on the one hand Banham wished for a functional and upfront architecture [38,39], on the other, Venturi made "ambiguity" his very motto, praising the richness of "nonstraightforward architecture" [40].
In this controversial panorama, it is rather easy to imagine how the three young architects of the National Art Schools might have had some difficulties deciding how to proceed. After their plan to employ pure Catalan vaulting fell through, possibly due to structural engineers' concerns, they found themselves having to choose between form and significance. While reinforced concrete was "invented" as a building material at the end of the 19th century, the issue of its conservation only developed in recent years. In fact, the need to preserve the architectural heritage of the 20th century poses specific conservation challenges that need careful addressing. The literature in this regard has been flourishing over the last 20 years, encompassing many aspects of reinforced concrete damage and conservation [41][42][43][44][45][46][47][48][49][50][51][52][53][54][55]. Having the opportunity to participate in the Keeping It Modern initiative, it was interesting to examine how the topic of concrete conservation had been dealt with in other studies, but also to check whether cases of "hidden concrete structures" had previously been recorded. While some of the developed conservation and management plans primarily focused on concrete pathologies [56][57][58], others provided all-round structural analyses of the architectures in question [59,60]. This of course entailed the need for a thorough interpretative process, comprising on-site inspections, documentary research, and instrumental investigations. The case of the Uruguayan Iglesia de la Parroquia de Cristo Obrero [60] has proven particularly interesting, as it displays astonishing similarities to the schools of Havana. The engineer who designed the church, Eladio Dieste, employed a technique he called "cerámica armada", which basically combined the properties of Catalan vaults with those of steel reinforcement and, occasionally, concrete (or extremely thick layers of mortar).
Unlike the case of the National Art Schools, however, Dieste was always quite transparent about the structural system of the church [60] (p. 48). Even though other conservation plans did not highlight discrepancies as substantial as those that emerged in the Cuban case, they all provided compelling strategies, offering helpful suggestions to address the question of concrete analysis and preservation.

Materials and Methods

As mentioned, the presented study is, so to speak, an accidental consequence of a broader research project aimed at drafting a conservation and management plan to safeguard the whole National Art Schools complex. In the process, one of the primary objectives was the assessment of the stability of the buildings, which could not be left aside. To correctly interpret the structural behavior of these peculiar architectures, an in-depth analysis of each construction element was, however, necessary. The efforts made to define their stratigraphy and construction technique ended up uncovering unexpected data concerning the buildings' structural system. The original conservation purpose thus became a hermeneutic opportunity which made it possible to update the historic evaluation of the schools' site, providing an enhanced understanding of its buildings. This, however, is not an isolated case: while construction history upholds its prominent role in restoration, it is by now clear that there is a certain reciprocity between the two disciplines. As a matter of fact, conservation activities often bring to light forgotten pieces of information which, integrated with data collected before the intervention, allow one to refine the overall understanding of the asset [61]. With regard to the specific topic addressed by this paper, the amendment of the structural knowledge of the buildings was achieved by adopting a combined approach which took into account several different sources of information.
Direct observation, documentary research, and noninvasive analyses, all resulting in some sort of graphical documentation, equally contributed to the identification of the technique employed in the construction of the National Art Schools. In fact, it was only by fitting together evidence that emerged from these three activities (original drawings, pictures of the construction site, and infrared thermographs) that reliable results were obtained. Due to the lack of archival documents regarding the Schools of Dramatic Arts, Plastic Arts, and Modern Dance, the described method was only adopted for the Schools of Ballet and Music. Nonetheless, infrared thermography was also used as the primary tool to collect information regarding the structural system of the Schools of Plastic Arts and Dramatic Arts. Because of its rather straightforward appearance, the School of Modern Dance was not included in the presented study. Indeed, the concrete ribs supporting the vaults are clearly visible here, even highlighted by the chromatic contrast between bricks and white-painted concrete. The structural layout of this specific building was hence one of the clues that motivated the research. Ultimately, the research employed a comparative method combining direct and indirect sources, and more specifically images from the past with "visible" and "beyond the visible" images of the present, which was able to reveal, once and for all, the true structural nature of Havana's National Schools of Art. The main techniques adopted throughout the study are further described in the following subsections.

Direct Observation

The first step consisted of the observation of the architectural object, which, as is well known, is the first document of itself [62,63]. In particular, the severe decay affecting these long-neglected architectures has uncovered features that were intended to stay hidden.
On the one hand, the diffuse loss of the finishing layers, or of even more significant portions of material, proved quite helpful in defining the stratigraphy of the construction elements without resorting to microdestructive investigations. This was, for instance, the case for Douglas' studies [19][20][21][22], as she realized that the domes of the School of Ballet were not Catalan vaults by visually noticing the presence of concrete and steel rebar in one of the pendentives. On the other hand, the interpretation of the deterioration phenomena, intended as visible manifestations of internal actions (and interactions), also provided useful hints to define the construction system of the buildings. In particular, the analysis of crack patterns and deformations allowed one to clarify their structural behavior and hence to evaluate its compatibility with the supposed building technique and materials. Each of the four schools considered by the research was thus preliminarily examined, searching for clues to either confirm or deny the use of reinforced concrete instead of Catalan vaulting. However, since mere visual inspection was not sufficient to dispel the doubts raised by Princeton's research, further studies were carried out.

Documentary Research

The close examination of the historical documents describing the construction process of the schools also configured as a valuable source of information. In fact, within the Keeping It Modern initiative, one of the main objectives was the collection and reorganization of the heterogeneous materials regarding the site. The research uncovered a large number of documents of different typology, origin, format, and medium. The whole documentary heritage concerning the National Art Schools (from construction to the present day) was estimated to comprise tens of thousands of items.
This led to the establishment of a dedicated archival fonds (ISA archive), which guaranteed greater access to documentation, thus allowing one both to reconsider the photographs that had already been taken into account by previous studies and to analyze yet unreleased evidence. It should, however, be noted that most of the gathered materials concern the work of Vittorio Garatti and hence mainly offer data about the Schools of Music and Ballet. Two types of documents proved fundamental to the scope of the research: the original blueprints by the architects and the pictures taken during construction.

Original Blueprints

The 1960s drawings allowed one to better understand the architects' original conception, and thus whether the idea of integrating or fully replacing the Catalan vaults with reinforced concrete arose from possible criticalities that emerged during construction or, on the contrary, had been the builders' intention all along. The examined blueprints lean toward the latter option, leading to the hypothesis that the principle of using the Catalan vaulting technique never materialized, not even on paper. Some 509 digital copies of the 1960s blueprints for the Schools of Music and Ballet were collected thanks to the documentary research. Such files mainly consist of scans of the paper drawings and heliographies from the original designs, dating from 1961 to 1965. At the time of the scan, most of the drawings appeared to be in a fair state of repair, despite the presence of a few rips around the edges. Architect Vittorio Garatti is the author of most of the designs, although helped by several different draughtsmen. The drawings have different paper sizes and scales of representation, as they concern many aspects, ranging from the general architectural layout of the buildings (plans and sections) to more technical issues such as construction details, plant design, and finishing.
Most measurements are expressed according to the metric system, but in some cases, the US system was also used. For our purpose, the most relevant drawings were those concerning structural features. Indications regarding the stratigraphy of the main construction elements were later verified by means of on-site investigations. In particular, blueprints showing the intended position and dimensions of steel rebar proved fundamental to define a targeted diagnostic plan leading to the discovery and pinpointing of concrete. It should, however, be noted that the original drawings do not offer any certainty concerning the actual realization of the designs, since changes might have occurred throughout the building site.

Photographic Materials

The rich photographic apparatus, on the other hand, made it possible to partly overcome such uncertainties, offering more reliable data. In fact, the intrinsic documentary nature of photography contributed to the elevation of the camera to a scientific tool soon after its invention [64,65]. It was probably with this purpose that José Mosquera took frequent pictures of the building site of the School of Ballet, recounting the entire construction process through a sequence of images. Unfortunately, since Mosquera was mainly involved in the design and realization of the School of Ballet, photographs concerning the other buildings are sporadic and do not allow one to retrace the building activities in their continuity. While Princeton's scholars already had the chance to see Mosquera's albums, further pictures were found in the ISA archive, which includes about 300 photographs taken by different authors between 1961 and 2020. In particular, the pictures concerning the construction phase were mostly taken by José Mosquera and Vittorio Garatti. The archive also stores the shots taken by photographer Paolo Gasparini after the schools' completion.
This photographic survey, which had more of a celebratory purpose [66], somehow ended up fomenting the idealistic facet of these architectures and, along with it, the Catalan vaulting myth. After all, right after the construction site shut down, the appearance of the schools was impeccably deceitful, hiding all the clues that recently unveiled the presence of reinforced concrete. Several authorial and amateur surveys followed the one by Gasparini, documenting the gradual changes and decay of the schools and, therefore, the slow resurfacing of the concrete core. Among these materials, the most relevant information was deduced from the building site's pictures by Vittorio Garatti, which had never been examined before. Indeed, a few of his photographs captured details that do not appear in Mosquera's albums (and vice versa). Such images offered new data that helped complete and refine previous theories, with particular reference to the extensive use of reinforced concrete in the School of Ballet.

Noninvasive Analyses

The technical details deduced from the iconographic records were crosschecked with information that emerged from direct observation, as well as with the outcome of a few noninvasive on-site analyses, such as infrared thermography (IRT) and magnetometer testing. The former method has been widely adopted in cultural heritage preservation since the 1980s, as it enables one to highlight possible discontinuities and anomalies (e.g., cracks and delamination) in the substrate [67,68], even when they are not visible to the naked eye. In fact, IRT is a contactless test method that uses an infrared imaging system to measure the distribution of the emissive power of different surfaces at various temperature ranges [69,70]. The outcome is a series of two-dimensional images (thermographs), in which each color identifies a different apparent surface temperature (ISO 6781:1983; ISO 18434-1:2008; UNI EN 13187:2000) [69].
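As a rough illustration of the physical principle behind IRT (a hypothetical sketch, not the processing actually applied to the survey data; the emissivity values and temperatures are assumptions chosen purely for illustration), the following Python snippet shows how two surfaces at the same true temperature but with different emissivities radiate differently, so a camera calibrated for a single emissivity reports them as different apparent temperatures. This is one source of the material contrast visible in thermographs:

```python
# Stefan-Boltzmann constant, W m^-2 K^-4
SIGMA = 5.670e-8


def radiant_exitance(temp_k: float, emissivity: float) -> float:
    """Total emissive power of a grey-body surface (W/m^2)."""
    return emissivity * SIGMA * temp_k ** 4


def apparent_temperature(exitance: float, assumed_emissivity: float) -> float:
    """Temperature a camera infers from measured exitance,
    assuming a fixed surface emissivity."""
    return (exitance / (assumed_emissivity * SIGMA)) ** 0.25


# Two surfaces at the same true temperature (300 K) but with
# different (illustrative) emissivities: glazed tile vs. concrete.
tile_w = radiant_exitance(300.0, 0.93)
concrete_w = radiant_exitance(300.0, 0.88)

# A camera set to emissivity 0.93 reads the tile correctly (300 K)
# but reports a lower apparent temperature for the concrete,
# producing visible contrast even without a real temperature gradient.
t_tile = apparent_temperature(tile_w, 0.93)
t_concrete = apparent_temperature(concrete_w, 0.93)
print(round(t_tile, 1), round(t_concrete, 1))
```

In practice, the contrast that reveals hidden rebar or concrete ribs also depends on differences in thermal conductivity and heat capacity under transient heating, but the emissivity effect above already shows why dissimilar materials separate cleanly in a thermograph.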
The surveys were conducted using a FLIR T1020 thermal camera, while the IR inspection's acquisitions were processed through Grayess Stritch software. Finally, the resulting thermographic mosaics were composed using Adobe Photoshop. The second investigation technique adopted to identify the structural system of the buildings was magnetometer testing, which is broadly used to locate and estimate the diameter of steel rebar (BS 1881-204:1988). Of course, in the specific case, the aim was rather to verify whether rebar was present or not than to determine its geometry and distribution. Since pure Catalan vaulting does not include metallic reinforcement, the detection of any iron elements would have implied the use of a different construction system, or at least of a mixed one. Pachometer testing was only carried out on the vaults and domes of the School of Ballet, using a HILTI PS 200 Ferroscan, whose maximum detection depth for object localization is approximately 18 cm.

Results

The following subsections illustrate the outcomes of the study, providing a detailed description of the aforementioned comparative method as applied to each one of the four analyzed schools.

Choreography Theater and Dance Pavilions

Building on the Princeton scholars' findings, the present work was able to further clarify the construction system of the main domes of the School of Ballet. Thanks to targeted analyses and new pieces of information that emerged from the historical documentation, it was possible to highlight an even more extensive use of concrete than expected. In particular, the present research was able to retrace, step by step, the building process of the school. According to the ascertained criterion that work forms would be made, or at least covered, with clay tiles, the first stage of construction was the setting up of the wooden scaffoldings, which, given the dimensions of the pavilions, ended up being rather complex structures (Figure 6).
Once the scaffoldings were in place, masons proceeded to lay one or more layers of clay tiles to build the work forms. In particular, for the pendentives of the dance pavilions and the annular sector at the base of the choreography theater's dome, the intrados was shaped following the geometry of the supporting centering (Figure 7). On the other hand, the work forms of the rings and ribs composing the grid of the upper spherical cap were basically made of wood and were then coated with tiles on the inside (Figure 8). Reinforcing bars were subsequently positioned (Figure 9), and finally the concrete was poured to create the main bearing framework (Figure 10). The next step was to fill the gaps of the concrete grid, which was achieved through the construction of narrow tile vaults. Although the "filling vaults", due to their limited span, could reasonably be taken for actual Catalan vaults, a newfound drawing by Garatti (Figure 11) and a picture from the construction site (Figure 12) prove otherwise, showing how even these elements were intended as work forms. Although no photographs documenting the later stages of construction of the choreography theater were found, some photographs of the dance pavilions suggest that the same technique was adopted there. Figure 13 shows one of the latter after the second concrete casting, which covers the entire surface of the dome, creating a smooth and continuous shell. Although there are no records to confirm that the reinforcing net was also used in the dance pavilions, the upward steel connections linked to the ribs' and rings' rebar seem to validate this theory (Figure 14). Finally, the extradoses of the domes were covered with clay tiles, hiding the reinforced concrete from sight and convincingly simulating a masonry structure (Figure 15). The study of the construction process of the School of Ballet allowed one to further validate and slightly rectify Princeton's findings, contributing to the redefinition of the construction system of this iconic architecture.
In particular, while Douglas and her work team had already identified the general method adopted in the construction of the school, the present research was able to pin down, one by one, the main phases of the building site. This methodical approach ended up highlighting the use of an additional layer of concrete cast above the previously discovered concrete grid.

Pasillo

The sinuous corridor connecting the rooms of the School of Ballet (pasillo) was also taken into account. After a thorough examination, we can affirm that all its covering vaults include some kind of reinforcement. Indeed, even in the vaults that do not have a plain reinforced concrete slab, steel rebar was inserted to strengthen the masonry (like Dieste's cerámica armada [60]). As for the larger vaults, hiding a concrete core, the building process might have actually been quite similar to that of the above-described domes: laying a few layers of tiles to build the intrados and the sides of the vault in order to create a work form, then setting rebar and pouring the concrete, and finally covering the extrados with another layer of clay tiles. A few hints were taken from direct observation: Figure 16 shows the cavities left on the analyzed vaults by prior core testing, which confirm the presence of a concrete layer hidden below the extrados tile covering. The holes are about 10 cm deep, and within this depth, there is no sign of rebar. Further verification of the absence of rebar at the extrados was offered by the outcomes of the pachometer testing performed there (Figure 16b), which did not highlight any iron presence within the detection range. These results do not come as a surprise since, as observed in the construction process of the previously described domes, rebar should be located in proximity to the intrados. Finally, a drawing by Vittorio Garatti reveals that the original intent of the architect was in fact to strengthen the pasillo's vaults with reinforced concrete (Figure 17).
Even though the blueprint only focuses on the vaults covering the main entrance of the building, it is reasonable to think that the same technique was to be applied to all the larger vaults covering the pasillo. The thermographic survey performed on vaults with smaller spans also highlighted the presence of rebar (Figure 18). Although in this case the actual structural system is not reinforced concrete, one still cannot speak of pure Catalan vaulting but rather of a mixed technique, as suggested by Michele Paradiso [23]. The outcomes of the thermographic survey were also confirmed by visual inspection: Figure 19 shows a reinforcing bar located above the lower layer of tiles of the pasillo's intrados, which was made visible by current decay.

School of Music

Despite the large amount of documentation concerning the original design of the School of Music, detailed information regarding the building technique to be employed in its construction is scarce. The most significant drawing in this regard is the cross-section reported in Figure 20, where the global organization of the structure can be observed. The same pattern repeats on two different levels, with slight differences in terms of span but seemingly identical stratigraphy. The illustration clearly distinguishes elements that are made of bricks from elements that are made of reinforced concrete, even providing a rather precise representation of rebar distribution. According to this drawing, the vaults are clear of any concrete or steel reinforcements, truly resembling the Catalan technique. The arches appear to be composed of three different layers, of which the middle one is by far the thickest, while the extrados one appears to be roughly half of the intrados one.
The described ratio is faithfully reflected by the real building, as the vaults comprise, from top to bottom, one layer of clay tiles, one layer of cement, and three further layers of clay tiles held together by layers of mortar (Figure 21). The consistency between the original design and the current situation seems to suggest that, at least in the School of Music, the principle of favoring Catalan vaulting over reinforced concrete was actually put into action [1,2]. Nonetheless, further analyses were carried out to validate this hypothesis. Infrared thermography was fundamental here to visualize what could not be seen by the naked eye. The outcomes of the survey seem in fact to indicate the presence of reinforcing bars hidden in the vaults covering the classrooms of the higher level (Figure 22). It should be noted that such classrooms, designed to host group rehearsals, are much bigger than the individual rooms located on the ground floor, as well as the corridors. The larger span of the vaults (approximately 5 m) clearly worried the structural engineers, who promptly decided to install solid metallic tie rods to avoid their collapse (Figure 20). It was probably for the same reason that the builders saw fit to strengthen the tile vaults with the rebar identified by means of thermal imaging. Although no surveys were conducted on the vaults of the cubicles on the ground floor, their good state of repair (no crack patterns were found, despite the absence of tie rods [71]) suggests that iron rebar might also have been used on the lower level. Unfortunately, there are no photographic records of this stage of construction, and we were thus unable to further analyze this aspect. While we had to acknowledge that, regardless of the original designs, some of the vaults were not built according to the Catalan technique, uncertainties still remained with regard to the structural system of the corridors' covering, with particular reference to the one on the upper floor.
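The engineers' caution about the wider spans can be illustrated with a back-of-the-envelope estimate (a hypothetical sketch, not taken from the original structural calculations; the load and rise values below are assumptions chosen purely for illustration). For a shallow parabolic arch or barrel vault carrying a uniform load w over span L with rise f, the horizontal thrust at the springings is approximately H = wL^2/(8f), so wider vaults push harder on their abutments, which is exactly the force that tie rods are installed to absorb:

```python
def horizontal_thrust(w_kn_per_m: float, span_m: float, rise_m: float) -> float:
    """Horizontal thrust (kN per metre of vault width) of a parabolic
    arch carrying a uniform load w over span L with rise f:
    H = w * L^2 / (8 * f)."""
    return w_kn_per_m * span_m ** 2 / (8.0 * rise_m)


# Illustrative values only: 3 kN/m^2 of distributed load, rise = span/10.
narrow = horizontal_thrust(3.0, 2.0, 0.2)  # corridor-scale vault
wide = horizontal_thrust(3.0, 5.0, 0.5)    # rehearsal-room-scale vault

# Even keeping the rise-to-span ratio fixed (so H grows only linearly
# with L), the 5 m vault exerts 2.5 times the thrust of the 2 m one;
# at a fixed rise, the growth would be quadratic in the span.
print(narrow, wide)
```

This scaling is consistent with what was observed on site: tie rods (and, as the thermographs suggest, embedded rebar) appear on the wide rehearsal-room vaults, while the narrow-span vaults could plausibly stand without them.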
The direct inspection of the building offered some helpful insights on the topic: firstly, it was understood that some sectors of the vault, which had probably collapsed, were completely replaced with similarly shaped reinforced concrete ones. However, the poor quality of the materials employed caused their quick deterioration, and today, due to the diffused delamination of the intrados tiles (which could very well be a consequence of concrete spalling), rebar is largely visible (Figure 23). On the other hand, the vaults that were not substituted are not affected by delamination phenomena but consistently show cracking along the crown (Figure 24). The latter represents the typical crack pattern of segmental masonry vaults [72], leading one to believe that the external hallway could in fact be covered, at least in the authentic portions, by true Catalan vaults. This hypothesis is however contradicted by the outcomes of the thermographic survey, which seem to display the presence of steel rebar along the vaults (Figure 25). The structural system should thus be similar to that of the smaller vaults of the pasillo of the School of Ballet, which, although not made of reinforced concrete, are strengthened by metallic bars, mostly located in proximity to the joints between adjacent sectors. As mentioned before, Cuban masons were not accustomed to the Catalan vaulting technique [9], and this may partly explain why the construction system was not systematically employed. Indeed, given the large span of the domes and vaults of the five schools, adopting thin-tile vaulting would have been a great risk. Since the gusano is the building with the simplest geometries and the most limited span to cover, it appears reasonable that, if pure Catalan vaulting was tested anywhere, it was here.
The lack of expertise of designers and masons may nonetheless have led the builders to add steel elements with the intention of increasing the structural safety of the building.

School of Plastic Arts

Ricardo Porro's School of Plastic Arts was also considered by the present research. Despite the lack of archival materials concerning this architecture, the thermographic survey was able to offer an unequivocal reading of the building's construction system. The first suspicions about the true structural nature of the domes covering the school's classrooms arose during an on-site inspection. While examining the intrados of one of the smaller cupolas, a rather regular efflorescence pattern was noticed on the tiled surface, resembling a sort of grid (Figure 26). With the aim of further analyzing the phenomenon, the few historical records available were scrutinized for clues. The original blueprint collected, which strictly regards the architectural features of the building, did not provide any technical detail, whereas one of the pictures taken during construction (Figure 27) seemed to suggest the presence of a hidden concrete framework supporting the domes. Although quite straightforward, the image was not exhaustive, as it only portrayed the summit portion of the dome. Indeed, the empty formwork visible near the top could have been due to the need to firmly fasten the skylight to the vault below, and might not extend to the whole structure. As anticipated, the decisive tool to finally identify the construction system of the domes of the School of Plastic Arts was infrared thermography. The concrete grid included in the domes' shells appears neatly in the thermal images (Figure 28), dispelling any residual doubt. The survey was carried out both on the larger and on the smaller domes, detecting the same structural arrangement.
However, contrary to what was observed in the domes of the School of Ballet, Figure 27 hints at the possibility that the masonry filling the voids of the bearing framework is actually concrete-free. As a matter of fact, the photograph seems to indicate that the inner layers were built using full bricks rather than thin tiles. At the end of the construction process, both the intrados and the extrados of the dome must have been coated with one or more layers of clay tiles, both to hide the concrete ribs and rings from sight and to increase the overall homogeneity of the structure. In this case too, the research was hence able to correct previous theories, revealing the true construction system hidden behind the masonry appearance of the building.

School of Dramatic Arts

Finally, Roberto Gottardi's School of Dramatic Arts was also briefly examined. Given the limited availability of archival materials, the analysis mainly consisted of a thermographic survey. As expected, the resulting images highlight the presence of concrete elements supporting the masonry-looking vaults (Figure 29). In particular, the ribs appear to run crosswise, delimiting and strengthening the strips of the vault where the skylights are located.

Discussion and Conclusions

Starting from Princeton's previous findings [19][20][21][22], the study was able to finally redefine the structural system employed in the construction of Havana's National Art Schools. Combining direct and indirect sources, it was possible to set the record straight, proving that the revolutionary architectures promoted by Che Guevara and Fidel Castro were not built adopting the Catalan vaulting technique, as believed for the last sixty years (Table 1) [2,9]. The cause of this long-lasting misapprehension, kept alive by the very architects, deserves to be further investigated on the historical and architectural side.
In fact, the reason why Vittorio Garatti, Roberto Gottardi, and Ricardo Porro never explicitly admitted to the fundamental role of reinforced concrete is still unclear, although it is evident that without it, those iconic buildings would simply not exist. It might be a direct outcome of the social and political context within which the five buildings were conceived. Indeed, in this scenario, Catalan vaulting constituted a medium to convey an ideal rather than a mere construction technique. In the mind of the designers, it represented the maximum expression of novelty and independence, primary ideals of the Cuban revolution. Determined to deliver a message of freedom, the architects created an image of high significance and stuck to that significance for decades. Hence the title of this paper, a tribute to René Magritte, who with his 1929 painting The Treachery of Images (also known as "This is not a pipe") cautioned against the mendacity of appearances. In the same way, while the sinuous shapes of the coverings of Havana's iconic architectures look like Catalan vaults, they are not. After all, the story of the National Art Schools deals as much with the physical construction of the school buildings as it does with the construction of a myth. While hidden concrete allowed the structures to stand, the image of the schools was built with false appearances, captivating pictures [66], and fascinating narrations. In this respect, Jorge Otero-Pailos offers two hints [73] which could help explain the unfolding of events concerning this iconic example of Modern Architecture. On the one hand, he reminds us that the architecture criticism of the 1960s and 1970s, while aiming to demonstrate that architecture is more than just a building, ended up overshadowing its materiality.
This is somehow what happened to the National Art Schools, where, for a long time, the literature focused on the contextual discourse rather than on the physical structure, nourishing the rhetoric surrounding the architectures. On the other hand, he underlines that every work of architecture results from co-authorship, even though some of the co-authors remain hidden. Perhaps future archival research will further clarify the role of the other designers who took part in the making of the National Art Schools. In particular, it would be interesting to assess the influence of the engineering component on the construction process and its turn to reinforced concrete. Edoardo Esenarro, Isabelita Wittmarc, and Ilda Fernandez are only a few of the actors involved who contributed to the project by conceiving and calculating this ingenious "invisible concrete" [23]. Although silent and soon forgotten, they also played a relevant role in this story, and their authorship should be recognized [74]. In this scenario, the outcomes herein presented confirm the hermeneutic power of conservation and restoration activities, providing new, unexpected (and possibly unasked-for) details with which to reinterpret the whole picture. This somehow recalls the quest for intertextuality suggested by Bruno Reichlin [75]: the dismantling and reassembling of evidence to increase the understanding and knowledge of architecture. In the end, after a superficial use of images largely contributed to spreading the idea that the National Art Schools were an extraordinary example of Catalan vaulting, it is again through images that this belief is coming undone. This proves the high potential of graphic documentation when used in an aware and comparative way, both to recount architecture and to investigate it, with relevant implications for conservation and restoration activities. Indeed, ensuring usability and safety is the way to a sustainable preservation of these iconic architectures.
A thorough understanding of the structural system is essential, and, to this end, graphical documentation proved to play a central role.
Novel Coronavirus Disease 2019 (COVID-19) Aerosolization Box: Design Modifications for Patient Safety

Author(s): Girgis, Alexander M; Aziz, Merna N; Gopesh, Tilvawala C; Friend, James; Grant, Alex M; Sandubrae, Jeffrey A; Banks, Dalia A

Letters to the Editor

COVID-19 is an unprecedented global pandemic that has shaken the healthcare community. The transmission of COVID-19 has not yet been fully elucidated, but we do know that the virus can be spread by respiratory droplets and aerosols, resulting in a severe lower respiratory tract infection and acute respiratory distress syndrome [1]. Along with other countries, the majority of the United States has issued "stay-at-home" orders to slow the spread of the disease. However, physicians and healthcare providers continue to go to work each day at great personal risk to themselves and their families. Furthermore, personal protective equipment (PPE) designed to protect healthcare workers has universally become short in supply. During the peri-intubation period, aerosolization can occur during spontaneous ventilation through a facemask for denitrogenation, endotracheal tube insertion, endotracheal extubation, and anytime a patient is ventilated with a bag-valve mask (BVM). Attempts to prevent disease transmission during the peri-intubation period have included standardization of rapid-sequence intubations and the use of a video laryngoscope for all patients who are COVID-19 positive. However, these unique challenges can inspire unique innovations. In an attempt to combat the shortage of PPE and protect healthcare providers, a Taiwanese physician, Lai Hsien-yung (Mennonite Christian Hospital, Hualien, Taiwan, 2020), developed a simple, yet intuitive invention to reduce the spread of aerosols during endotracheal intubation.
Commonly referred to as the "aerosol box" [2], this invention encloses a patient's head within an acrylic or polycarbonate rectangular barrier (Fig 1) in an attempt to reduce the direct spread of aerosols onto a provider during endotracheal intubation. A clear transparent box is rested on a mattress at the head of the bed. An opening along the caudal surface allows the patient's head to be placed inside the box. Two small circular openings on the cephalad surface of the box are designed for a provider to place his or her arms through to perform the intubation. The original box design was featured in the New England Journal of Medicine, which demonstrated the effectiveness of droplet containment during a simulated cough [3]. When attempting to use this design within our own institution, we quickly observed several pitfalls that could compromise patient safety. We found that the box was too wide and would not fit on a standard operating room table. The arm insertion holes were too small, making it difficult to maneuver inside the box. Obese patients cannot fit inside the box, and the roof of the box was too low, making maneuvering an endotracheal tube, double-lumen tube, or bougie difficult. The original box was heavy and unstable, and reverse Trendelenburg position could not be achieved. In addition, an assistant could not provide cricoid/laryngeal pressure without contaminating himself or herself, and a breathing circuit was challenging to use inside the box. Lastly, the several openings of the box made it unlikely that aerosols were maximally contained. In a collaborative effort between anesthesiology and engineering at the University of California San Diego, modifications to the original box design were made to address each of these problems (Fig 2). The dimensions of the box and circular arm insertion holes were modified, allowing the box to fit on standard operating room tables and providing better maneuverability when a provider's arms were inside the box.
Shoulder cutouts were also created to accommodate obese patients. For stability, an adjustable L-bracket flange was designed to slide underneath any sized mattress, which also allowed for bed position changes and use in the intensive care unit. To improve the ease of BVM ventilation, a 3-cm slit was created on each side of the box to easily insert and remove a breathing circuit. An accessory arm hole was made on the right side of the box for an assistant to provide cricoid/laryngeal pressure while still providing maximal barrier protection. A switch from acrylic to polycarbonate (popularly known as LEXAN, its trade name) improved impact durability and relative ease of sterilization [4]. Lastly, to better ensure the containment of aerosols, all arm insertion hole openings and slits have removable covers reinforced with a silicone-based sealant. A reusable clear drape was added to seal the back of the box for additional protection. The box is constructed using a 3.175-mm clear polycarbonate sheet, precut to shape using a table router (Fig 3). A thermo-bender was used to apply heat along bending lines on the polycarbonate sheet, forming the 3D shell. The edges were sealed with 90°-angled polycarbonate strips and a plastic adhesive (Weld-On 16, Compton, CA). Polycarbonate hinges and circular polycarbonate discs were used to provide the shutters for the arm access holes at the front and on the sides. A flexible plastic sheet (Worbla, ePlastics, San Diego, CA) was cut to shape, forming the side panels. Slits were made in the side panels to accommodate access and provide smooth passage for ventilator circuit tubing. An L-bracket measuring 15 cm × 27 cm on each side was attached to the enclosure using 50-mm polycarbonate screws and hex nuts. This modified aerosol box also has the unique advantage of being used to care for cardiac patients.
In addition to providing a barrier during the intubation of a cardiac patient, the box can be used while transporting intubated cardiac patients with BVM ventilation from the operating room to the intensive care unit, reducing aerosol spread and exposure. This would also free up transport ventilators, which have become standard for use while transporting patients who are COVID-19 positive. Furthermore, the increased height and larger arm holes also allow for more favorable conditions while intubating and extubating patients with double-lumen tubes for thoracic procedures. Compared with other barrier devices, the aerosol box has the advantage of being reusable if properly maintained and cleaned, which is becoming increasingly important with the shortage in PPE. Simulated training before clinical use is recommended. The box should be removed immediately in an emergency and not used in known or suspected difficult airways. COVID-19 has already claimed the lives of hundreds of healthcare providers. Anesthesiologists are particularly vulnerable given their unique skills in critical care and respiratory management. Continuing to create, innovate, and share medical devices such as the aerosol box can help protect healthcare providers during this global pandemic.
Efficacy of oral versus vaginal progestogens for early pregnancy maintenance in women with recurrent miscarriages: a randomized controlled trial

OBJECTIVE: To compare the effectiveness of oral and vaginal progestogens in the maintenance of early pregnancy in women with recurrent miscarriages. METHODS: This randomized controlled trial was conducted at Lady Reading Hospital, Peshawar, Pakistan, from April to September 2021. Pregnant women aged 16–40 years with a history of at least three recurrent miscarriages presenting at or before 7 weeks of gestation were enrolled. A total of 108 patients were randomly assigned to two groups: group A received oral progestogens (10 mg twice daily), and group B received vaginal progestogens (200 mg twice daily). Treatment lasted for 12 weeks, with successful outcomes defined as no vaginal bleeding and pregnancy continuing beyond 12 weeks. Data analysis was conducted using SPSS-20 software. RESULTS: The mean age of patients was 29±3.88 years in group A and 27±3.12 years in group B. Oral progestogens (group A) were effective in 48 (88.9%) patients, whereas vaginal progestogens (group B) were effective in 36 (66.7%) patients (p=0.03). Oral progestogens showed significantly greater efficacy compared to vaginal progestogens in individuals aged 20-30 years (p=0.04) and those with fewer than four previous miscarriages (p=0.03). However, there was no significant difference in efficacy between the two groups for participants aged 31-40 years or those with 4 or more previous miscarriages. CONCLUSION: Oral progestogens are more effective than vaginal progestogens in preventing recurrent miscarriages, especially in participants aged 20–30 years and with fewer than 4 previous miscarriages. More research is needed to validate these findings and explore the underlying mechanisms.
INTRODUCTION

Recurrent miscarriage (RM) is the occurrence of three or more consecutive pregnancy losses before fetal viability, presenting a significant challenge in obstetrics and gynecology [1]. It encompasses primary RM, where a viable pregnancy has never been achieved, and secondary RM, characterized by a history of live births preceding miscarriages. Secondary RM typically carries a more favorable prognosis for successful pregnancy [2-4]. The prevalence of RM has been reported to range between 1% and 2% [5]. In India, RM has been observed in 7.46% of women [6]. Approximately 70% of pregnancies are lost before live birth: 30% due to failure to implant, 30% after implantation but before a missed period, and 10% as clinical miscarriage [7]. However, controversy exists regarding the optimal route of progestogen administration, with some studies suggesting oral administration while others find no difference between routes.

METHODS

The study enrolled pregnant women aged 16-40 years with a history of at least three recurrent miscarriages who presented at or before 7 weeks of gestation. Written consent was obtained from each participant after explaining the procedures, potential effects and side effects of the drugs, and ensuring confidentiality. Patients with threatened miscarriage, structural uterine abnormalities distorting the cavity, absence of fetal cardiac activity (missed abortion), contraindications to progestogen use (such as allergy to progesterone or breast carcinoma), chronic medical conditions (including thyroid diseases, diabetes, and hypertension), and inadequate treatment compliance were excluded from the study.
The patients were randomly divided into two equal groups, labeled group A and group B, using computer-generated numbers. Each group comprised 54 patients. Group A received oral progestogens (dydrogesterone) at a dose of 10 mg twice daily, while group B received vaginal progestogens (micronized natural progesterone) at a dose of 200 mg twice daily, for 12 weeks (Figure 1) [16]. The efficacy of the treatments was assessed by the continuation of pregnancy beyond 12 weeks. All data were recorded using a pre-designed proforma. Transvaginal ultrasound examinations were conducted at 7, 9, and 12 weeks of gestation to assess the presence of fetal cardiac activity [17]. After data collection, the data were entered and analyzed using SPSS 20 software. Mean and standard deviation were calculated for quantitative variables such as age. Frequency and percentage were calculated for categorical data, such as efficacy in groups A and B. The efficacy of the drugs in the two groups (A and B) was compared using the chi-square test. Stratification based on age and number of miscarriages was performed, and post-stratification chi-square tests were applied. A p-value ≤ 0.05 was considered significant.

RESULTS

This study was conducted on 108 women, with 54 participants in each group, to evaluate the efficacy of oral and vaginal progestogens in preventing recurrent miscarriages during early pregnancy. The mean age of participants in group A was 29±3.88 years, whereas in group B it was 27±3.12 years. Additional details and subdivisions concerning age are provided in Table 1. The majority of participants in group A (n=36; 66.7%) and group B (n=38; 70.4%) belonged to the 31-40 years age group (Table 1).
Oral progestogens showed significantly higher efficacy than vaginal progestogens in participants aged 20-30 years (p=0.04) and in those with fewer than 4 previous miscarriages (p=0.03). No significant difference in efficacy was observed between the two groups for participants aged 31-40 years or those with 4 or more previous miscarriages.

DISCUSSION

The effective role of progesterone in maintaining pregnancy may be attributed to its fundamental role in various reproductive processes. Progesterone facilitates secretory changes in the uterine lining, which are crucial for successful embryo implantation. Additionally, progesterone reduces uterine contractility, further supporting the implantation process [20]. Progesterone is also thought to regulate the mother's immune responses, preventing rejection of the embryo. Furthermore, pro-inflammatory cytokines have been linked to miscarriage frequency, while progesterone-induced blocking factor suppresses immunological reactions and promotes a shift from type-1 to type-2 cytokines, ultimately increasing type-2 cytokine levels [26].

CONCLUSION

In conclusion, oral progestogens demonstrate superior efficacy over vaginal progestogens in preventing recurrent miscarriages during early pregnancy. This was evidenced by significantly higher effectiveness rates in the oral progestogen group (88.9%) compared to the vaginal progestogen group (66.7%). An important finding of our study was the greater efficacy of oral progestogens among participants aged 20-30 years and those with fewer than 4 previous miscarriages. These results highlight the significance of considering the route of administration when prescribing progestogens to prevent recurrent miscarriages. Further research may be necessary to validate these results and explore the underlying mechanisms.
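The primary comparison described in the Methods (a chi-square test on the two efficacy proportions) can be reproduced from the counts reported in the abstract: 48/54 successes with oral progestogens versus 36/54 with vaginal progestogens. This is an illustrative re-analysis, not the authors' SPSS output; the exact p-value depends on whether a continuity correction is applied.

```python
# Chi-square test on the 2x2 efficacy table from the reported counts.
from scipy.stats import chi2_contingency

table = [[48, 54 - 48],   # group A (oral): effective, not effective
         [36, 54 - 36]]   # group B (vaginal): effective, not effective

# chi2_contingency applies the Yates continuity correction by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With or without the Yates correction, the difference is significant at the 0.05 level, consistent with the reported p=0.03.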
Renal Cell Carcinoma and a Pancreatic Neuroendocrine Tumor: A Coincidence or Instance of Von Hippel-Lindau Disease?

We herein report a rare case of a 79-year-old man who presented with the simultaneous occurrence of a pancreatic neuroendocrine tumor (PNET) and renal cell carcinoma (RCC), without any other Von Hippel-Lindau (VHL)-associated lesions or any pertinent family history. Computed tomography showed vascular-rich solid lesions in the left kidney and the pancreatic tail, measuring 72 mm and 15 mm in size, respectively. Preoperatively, RCC with pancreatic metastasis was suspected, and laparotomy was performed. However, the resected specimens revealed a different tumor histology, namely renal clear cell carcinoma (G2, pT3) and PNET (G1, pT3). The patient and his family refused genetic testing; however, so far, the patient has not developed any VHL-associated lesions for more than four years.

Introduction

Both pancreatic neuroendocrine tumors (PNET) and renal cell carcinomas (RCC) are well-demarcated, highly vascular solid tumors, and these tumors are sometimes recognized in patients with Von Hippel-Lindau (VHL) disease. When highly vascular masses are recognized in both the kidney and pancreas, an accurate diagnosis is needed for each tumor, as the therapeutic strategy or genetic counseling may change. However, as far as we could determine, a sporadic case with both PNET and RCC has not yet been reported in the previous literature.

Case Report

A 79-year-old man was referred to our hospital for the treatment of a renal tumor, which was detected due to symptoms of transitory back pain. He had a history of diabetes and cerebral infarction, but no lesions suggestive of hemangioblastoma were recognized on brain imaging. His family history was positive for cancer, including gastric cancer (father), colon cancer (sister), and biliary cancer (second sister); however, there was no history of any lesions related to VHL or multiple endocrine neoplasia type 1 (MEN1).
He had no persistent symptoms or findings suggesting a functional neuroendocrine tumor (NET), such as hypoglycemia, hyperglycemia, heartburn, nausea, or epigastralgia. A blood test revealed normal levels of calcium and intact parathyroid hormone. Computed tomography (CT) showed a marginally vascular-rich lesion, measuring 72 mm in size, in the upper pole of the left kidney. In addition, a similarly well-enhanced 15-mm tumor was detected in the pancreatic tail (Fig. 1a). Endoscopic ultrasonography-guided fine needle aspiration was not performed in order to avoid tumor seeding. RCC of the left kidney with metastasis to the pancreas was suspected preoperatively, and laparotomy was performed. The histology of the renal tumor indicated it to be clear-cell type RCC (G2, pT3) (Fig. 2a, b); however, the pancreatic tumor was a NET (G1, Ki-67 index: <2%, pT3) surrounded by fibrous tissue (Fig. 2c, d), and both were negative for lymph node metastasis. The PNET was positive for chromogranin A, synaptophysin, and somatostatin receptor type 2 (SSTR2) on immunostaining (Fig. 2e, f). With this histology in mind, the preoperative CT images were retrospectively reviewed, and prolonged enhancement was recognized only in the pancreatic tumor, not in the renal tumor. As a coincidental occurrence of PNET and RCC is very rare, further examinations were performed to rule out VHL disease. The patient underwent magnetic resonance imaging (MRI) of the central nervous system (CNS) and ophthalmologic examinations, but no hemangioblastoma was recognized. Genetic counseling was thus carried out for the patient and his family. A genetic test of the VHL gene was repeatedly recommended, but the patient consistently refused for social reasons (possible future disadvantages for his relatives concerning marriage, employment, insurance, etc.).
To date, screening has been performed for this patient for more than four years, but no VHL-associated lesion has been detected either in the patient or in his two sons, who are in their 50s.

Discussion

The current case raised two problems associated with the differential diagnosis: 1) double primary tumors versus pancreatic metastasis of the RCC, and 2) the possibility of VHL disease in the case of double primary tumors. In Japan, the overall incidence (per 100,000 population) of PNET is 2.69 (1), while that of RCC is 5.87 (2). This rarity, the relatively frequent occurrence of RCC metastasis to the pancreas (3), and the similar imaging findings led to our preoperative misdiagnosis, although the treatment strategy in this case was not altered, because surgical resection is recommended both for double primary tumors and for RCC accompanied by an isolated metastasis to the pancreas (4). When reviewing the preoperative CT images, the enhancement of the pancreatic tumor was markedly prolonged compared with that of the renal tumor, reflecting the dense fibrosis around the PNET (Fig. 1b) and suggesting double primary tumors. Although the possibility of seeding cannot be completely ignored, in selected situations, endoscopic ultrasonography-guided fine needle aspiration (EUS-FNA) can be recommended for the diagnosis of metastatic pancreatic tumors (3). As for the second point, the simultaneous occurrence of these rare tumors was either a coincidence or associated with some inherited disease. The patient underwent screening of the CNS and no hemangioblastoma was found, and he did not meet the diagnostic criteria of VHL disease (5). Up to 20% of VHL disease is de novo, and therefore it is not possible to rule out the possibility of non-affected relatives developing this disease. However, in cases without any associated family history, hemangioblastoma is essential for the diagnosis of VHL.
Moreover, in VHL disease these tumors usually develop in younger subjects: hemangioblastomas develop in 60-80% of patients, typically from 25-30 years of age; RCC in 25-75%, at around 39 years; and PNET in 35-75%, at around 36 years (5). Hence, it is quite unlikely that the patient had VHL disease. Furthermore, no VHL-related lesions were detected in his two sons, who are in their 50s, thus suggesting a small possibility of VHL disease. Generally, genetic counseling and genetic testing should be recommended in cases with more than one of the following four tumors: PNET, RCC, pancreatic (serous) cystadenoma, and epididymal/adnexal cystadenoma (5). As the causative mutation can be detected by genetic testing in most VHL cases and the surveillance outcome tends to be favorable, genetic examinations are thought to be quite beneficial for such suspected cases (http://www.ncbi.nlm.nih.gov/books/NBK1463/). As a rare variant of VHL could not be completely ruled out, this patient and his family should be closely followed up by medical institutions.
High prevalence of asymptomatic malaria in south-eastern Bangladesh
Background
The WHO has reported that RDT- and microscopy-confirmed malaria cases have declined in recent years. However, it is still unclear if this reflects a real decrease in incidence in Bangladesh, as the hilly and forested areas of the Chittagong Hill Tract (CHT) Districts in particular report more than 80% of all cases and deaths. Surveillance and epidemiological data on malaria from the CHT are limited; existing data report Plasmodium falciparum and Plasmodium vivax as the dominant species.
Methods
A cross-sectional survey was conducted in the District of Bandarban, the southernmost of the three Hill Tracts Districts, to collect district-wide malaria prevalence data from one of the regions with the highest malaria endemicity in Bangladesh. A multistage cluster sampling technique was used to collect blood samples from febrile and afebrile participants, and malaria microscopy and standardized nested PCR were performed for diagnosis. Demographic data, vital signs and splenomegaly were recorded.
Results
Malaria prevalence across all subdistricts in the monsoon season was 30.7% (95% CI: 28.3-33.2) and 14.2% (95% CI: 12.5-16.2) by PCR and microscopy, respectively. Plasmodium falciparum mono-infections accounted for 58.9%, P. vivax mono-infections for 13.6%, Plasmodium malariae for 1.8%, and Plasmodium ovale for 1.4% of all positive cases. In 24.4% of all cases mixed infections were identified by PCR. The proportion of asymptomatic infections among PCR-confirmed cases was 77.0%; oligosymptomatic and symptomatic cases accounted for only 19.8 and 3.2%, respectively. Significantly (p < 0.01) more asymptomatic cases were recorded among participants older than 15 years as compared to younger participants, whereas prevalence and parasite density were significantly (p < 0.01) higher in patients younger than 15 years.
Spleen rate and malaria prevalence in two- to nine-year-olds were 18.6 and 34.6%, respectively. No significant difference in malaria prevalence and parasite density was observed between the dry and rainy seasons.
Conclusions
A large proportion of asymptomatic plasmodial infections was found, which likely acts as a reservoir of transmission. This has major implications for ongoing malaria control programmes that are based on the treatment of symptomatic patients. These findings highlight the need for new intervention strategies targeting asymptomatic carriers.
Background
The World Health Organization (WHO) estimated 660,000 deaths in 2011 directly attributed to malaria, with approximately half of the world's population being at risk of infection [1]. The disease has re-emerged in several Central Asian countries and in Southeast Asia, partly because of relenting malaria control efforts and the emergence of parasite resistance to the most commonly used anti-malarial drugs [2]. In many regions the vectors have become resistant to the main insecticides, and cases of artemisinin resistance have been reported from the Greater Mekong subregion [3][4][5][6]. Resistance to chloroquine and sulphadoxine/pyrimethamine (S/P) has been reported from Bangladesh [1], but until now there is no evidence that artemisinin resistance has spread westwards to Bangladesh, which traditionally forms a gateway to the Indian Subcontinent [7]. In 2004 the Ministry of Public Health and Family Welfare of Bangladesh revised the malaria treatment guidelines, introducing artemisinin-based combination therapy (ACT) in areas with resistance against chloroquine and S/P [1]. However, ACT was not deployed on a major scale until 2007. Despite the introduction and distribution of ACT in recent years, the WHO reports a 70% increase in case numbers between 2000 and 2010 in Bangladesh.
However, it is difficult to discern the underlying trend in malaria incidence from improved reporting due to continuous improvements in diagnostic facilities [8]. A seemingly contradictory statement was given in the 2012 report, where a decrease of 69% in malaria case incidence between 2000 and 2011 was reported [1]. Due to a shortage of staff in health care facilities and shortcomings in surveillance and information systems, there is still a significant lack of data on malaria from this area [9,10]. Furthermore, increasing drug resistance is significantly aggravating the malaria situation, and treatment alternatives to currently used anti-malarials are missing [11][12][13][14]. In 2007 a rapid diagnostic test (RDT)-based, cross-sectional survey in 13 eastern districts of the country showed malaria to be endemic within the entire study area. Overall prevalence was reported to be approximately 4%, with the majority of cases (90.2%) due to Plasmodium falciparum. Plasmodium vivax and mixed infections accounted for only 5.3% and 4.5%, respectively. The highest prevalence rates were reported from the Chittagong Hill Tracts (CHT), with up to 15% [15]. Surveys among febrile patients in the region showed a malaria positivity rate of 26% by microscopy (Swoboda et al., personal communication), which increases to 50% when using polymerase chain reaction (PCR), a considerably more sensitive diagnostic tool [16]. The high sensitivity of PCR allows the detection of subpatent infections that are frequently asymptomatic, and it has been shown that these undetected infections represent a considerable fraction of overall infections and may therefore act as a reservoir for transmission [17][18][19][20][21]. The aim of this study was therefore to assess the prevalence and proportion of asymptomatic P. falciparum infections in the southernmost district of the CHTs.
Study setting and procedure
Two cross-sectional surveys were performed: the first during the rainy season from August to October 2007 and the second during the dry season from December 2007 to February 2008. The study was conducted by a team from the Medical University of Vienna, Austria in collaboration with the International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh (ICDDR,B). All selected communities were visited by members of the study team prior to sample collection, to inform villagers about the ongoing study. Laboratory tests on collected samples and data analysis were carried out at the Malaria Research Initiative Bandarban (MARIB) field research centre in Bandarban town. Written informed consent was obtained from all study participants or their legal representatives, and the study protocol was approved by the Ethical Review Committee of the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B).
Sampling
Administratively, Bandarban District consists of seven subdistricts (upazilas), 32 unions, 140 mouzas and 1,482 villages. A geographical multistage cluster sampling technique in a single domain was employed (Figure 1), using population figures from the 2001 census [22]. For each of the seven subdistricts, all mouzas or villages were listed alphabetically and three mouzas or villages were randomly selected using a probability proportional to size (PPS) sampling procedure. A list of all households within each mouza or village was prepared and 20 households per mouza or village were randomly selected. All persons present in the household were invited to participate in the survey. During the monsoon season 2007, a total of 21 villages in all seven subdistricts were surveyed (Survey I). Eight villages in three subdistricts (Bandarban, Ruma and Rowangchari) were revisited during the second survey period (Survey II). Whenever possible, the study team visited the same households as in the previous survey.
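As an editorial illustration (not part of the original study protocol), the probability-proportional-to-size step of the design can be sketched with systematic PPS sampling; the village names and population sizes below are hypothetical:

```python
import random

def pps_sample(units, k, seed=None):
    """Systematic probability-proportional-to-size (PPS) sampling.

    units: list of (name, size) pairs; k: number of units to draw.
    A unit larger than the sampling interval may be drawn more than
    once (i.e. selected 'with certainty').
    """
    rng = random.Random(seed)
    total = sum(size for _, size in units)
    step = total / k                      # sampling interval
    start = rng.uniform(0, step)          # random start in the first interval
    points = [start + i * step for i in range(k)]
    chosen, cum, idx = [], 0.0, 0
    for name, size in units:
        cum += size                       # cumulative size boundary of this unit
        while idx < len(points) and points[idx] <= cum:
            chosen.append(name)
            idx += 1
    return chosen

# Hypothetical mouzas/villages with census population sizes
villages = [("Mouza A", 100), ("Mouza B", 300), ("Mouza C", 100), ("Mouza D", 500)]
selected = pps_sample(villages, 3, seed=1)
```

Larger villages are proportionally more likely to enter the sample, matching the PPS step of the multistage design; household selection within each sampled village would then be a simple random draw.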
Households that declined to participate or had moved away from the area were replaced by their closest neighbouring household.
Data and sample collection
Participants were interviewed, and information was recorded on gender, age, ethnicity, occupation, height, weight, history of fever, number of previous episodes of malaria/fever, last date of malaria diagnosis and result of diagnosis, source and drugs used in malaria treatment, and the number of household members. Axillary temperature and pulse were measured in all participants. Three ml of venous blood was collected from participants eight years or older for malaria RDT/microscopy, and 100 μl of venous blood was transferred onto filter paper (903; Schleicher & Schuell, BioScience GmbH, Dassel, Germany) in duplicate. Five drops of blood (one drop for the malaria RDTs, two for microscopy, and two for the PCR filter paper) were drawn by finger or heel prick from children younger than eight years.
Laboratory methods
Malaria was diagnosed in the field by RDT and later confirmed by microscopy and PCR.
Rapid diagnostic test
RDTs (FalciVax®, Zephyr Biomedicals, India) based on the detection of P. falciparum-specific histidine-rich protein 2 (HRP2) and P. vivax-specific lactate dehydrogenase (Pv-pLDH) were employed in all participants with malaria-like symptoms [23,24]. Patients who tested positive were provided with immediate treatment following national guidelines. Febrile patients considered seriously ill were immediately referred to the closest health care facility for further diagnosis and treatment.
Microscopy
Giemsa-stained blood smears were used for microscopic diagnosis of malaria following established standard operating procedures (SOPs). Thick and thin blood films were prepared, stained with Giemsa stain (Merck®, Darmstadt, Germany) and examined under oil immersion (Olympus CX21 microscope, Tokyo, Japan) for parasite positivity and species determination.
Declaring a slide positive or negative and the initial species diagnosis were based on the examination of 200 fields in thick films. A slide was considered positive when at least one parasite was found. After finding the first parasite, another 200 fields were examined to rule out mixed infections. If no parasite was found in 200 oil fields, the slide was considered negative. Parasite density was calculated by counting the number of asexual malaria parasites per 200 white blood cells (WBCs), assuming a WBC count of 8,000/μL [25].
Quality control
A minimum of 5% of all positive and negative slides were randomly selected for internal quality control.
Case definition
Symptomatic clinical malaria cases were defined as PCR-positive individuals with documented fever (axillary temperature ≥37.5°C) and reported clinical symptoms consistent with malaria in the previous seven days. Oligo-symptomatic malaria cases were defined as PCR-positive, afebrile (axillary temperature <37.5°C) cases with a reported history of fever or illness in the previous seven days. Asymptomatic malaria cases were defined as PCR-positive cases without measurable fever (axillary temperature <37.5°C), who reported no malaria-related symptoms and had not received treatment for malaria in the previous seven days [34].
Splenomegaly
The spleen rate was determined for children aged 14 years and below. Spleen size was measured following the method established by Hackett [35] and classified either as negative (Hackett grade 0) or positive (Hackett grades 1-5).
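The density conversion and the three-way case definition above can be written compactly; the sketch below is an editorial illustration, not code from the study, and slightly simplifies the symptomatic class by taking documented fever as sufficient:

```python
def parasite_density_per_ul(parasites_counted, wbc_counted=200, assumed_wbc_per_ul=8000):
    """Thick-film density: parasites counted against `wbc_counted` WBCs,
    scaled to an assumed count of 8,000 WBCs/uL."""
    return parasites_counted * assumed_wbc_per_ul / wbc_counted

def classify_case(pcr_positive, axillary_temp_c, fever_or_illness_last_7d, treated_last_7d):
    """Apply the case definitions from the text; returns 'unclassified'
    for PCR-positives fitting none of the three classes."""
    if not pcr_positive:
        return "negative"
    if axillary_temp_c >= 37.5:
        return "symptomatic"          # documented fever
    if fever_or_illness_last_7d:
        return "oligo-symptomatic"    # afebrile, recent history of fever/illness
    if not treated_last_7d:
        return "asymptomatic"         # afebrile, no symptoms, no recent treatment
    return "unclassified"
```

For example, 50 parasites counted against 200 WBCs corresponds to 50 × 8,000 / 200 = 2,000 parasites/µL.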
Classification of malaria endemicity
The four classes of endemic malaria were defined as follows [35]: hypo-endemic malaria: spleen and/or parasite rate in children two to nine years not exceeding 10%; meso-endemic malaria: spleen and/or parasite rate in children two to nine years between 11 and 50%; hyper-endemic malaria: spleen and/or parasite rate in children two to nine years constantly over 50% and a high spleen and/or parasite rate in adults (over 25%); holo-endemic malaria: spleen and/or parasite rate in children two to nine years constantly over 75%, but a low spleen and/or parasite rate in adults.
Statistical analysis
P < 0.05 was considered significant. Pearson's Chi-square with Yates' correction or Fisher's Exact test, as appropriate, was used for categorical data; the Mann-Whitney U-test was performed for comparing continuous data that did not conform to a normal distribution. Student's t-test was conducted to evaluate differences between quantitative variables that were normally distributed. All analyses were performed using Microsoft Excel® and VassarStats [36].
Results
Demographic data of the study population
A total of 1,418 individuals in 416 households from 21 villages were included in the first survey, and 436 were revisited and included in the second survey. The male/female ratio was 0.84. The median age was 24 years (IQR 9-40). The distribution of the different age groups is shown in Table 1.
Subdistricts/Upazilas
Plasmodium falciparum was the predominant species in all subdistricts. The highest malaria prevalence was found in the eastern subdistricts Rowangchari and Thanchi, followed by Bandarban Sadar, Ruma, Naikhongchhari, and Lama. The lowest malaria prevalence was detected in Ali Kadam, which shares only a short stretch of border with Myanmar. Malaria prevalence among the seven subdistricts is shown in detail in Table 2.
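The endemicity bands quoted above translate directly into a small decision rule; the following is an illustrative sketch (not from the paper), using the adult rate only to separate the hyper- and holo-endemic classes:

```python
def endemicity_class(child_rate_pct, adult_rate_pct=0.0):
    """Classify endemicity from the spleen and/or parasite rate (%) in
    children aged two to nine years, per the bands quoted in the text."""
    if child_rate_pct <= 10:
        return "hypo-endemic"
    if child_rate_pct <= 50:
        return "meso-endemic"
    # above 50%: a low adult rate with a very high child rate is holo-endemic,
    # otherwise (high adult rate) hyper-endemic
    if child_rate_pct > 75 and adult_rate_pct <= 25:
        return "holo-endemic"
    return "hyper-endemic"
```

With the study's observed spleen rate of 18.6% in two- to nine-year-olds, this rule returns "meso-endemic", matching the paper's classification of the district.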
Significantly higher malaria prevalence (p < 0.01) was seen in the northern and northeastern subdistricts (Bandarban Sadar, Rowangchhari, Ruma, and Thanchi) located in the foothills close to the Indian/Myanmar border compared with the western subdistricts (Lama, Ali Kadam and Naikhongchhari) in or closer to the plains (Figure 2). Sensitivity, specificity, PPV and NPV are shown in Table 4, using microscopy or PCR as the reference method. The Venn diagram showing the relationship between spleen rate, microscopy-positive and PCR-positive children aged two to nine years is shown in Figure 3.
Winter survey II
From the 436 individuals included in the winter survey, Plasmodium falciparum gametocytes were detected by microscopy in four (3.5%, 95% CI: 1.1-9.1) individuals, of whom only one also had asexual parasites. Malaria prevalence among all participants in the winter survey (dry season) was not significantly (p = 0.117) lower compared to the summer survey.
Splenomegaly
Out of 97 children aged two to nine years, nine (9.3%, 95% CI: 4.6-17.3) had a palpable spleen, and at 30.9% (N = 30; 95% CI: 22.2-41.2) the parasite prevalence by PCR in Survey II was again high but not significantly different from Survey I (p = 0.584). A malaria-positive PCR and an enlarged spleen at the same time were found in four (4.1%, 95% CI: 1.3-10.8) children two to nine years old. At 9.3%, the spleen rate in winter was also lower than in summer (18.6%; p = 0.043). Sensitivity, specificity, PPV and NPV of spleen enlargement for malaria in the winter survey are shown in Table 4.
Discussion
Malaria is a major public health problem in southeastern Bangladesh and remains one of the most common reasons for hospital admissions during the malaria season [15]. Mapping of high-risk areas is essential for planning health interventions [37]. The primary objective of this investigation was to establish detailed baseline data on the prevalence and distribution of malaria in both symptomatic and asymptomatic carriers of Plasmodium parasites in southeastern Bangladesh.
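The sensitivity, specificity, PPV and NPV values reported against a microscopy or PCR reference follow from the standard 2×2 contingency-table formulas; a minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table,
    comparing an index test (e.g. palpable spleen) against a
    reference method (e.g. PCR)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only
m = diagnostic_metrics(tp=40, fp=10, fn=10, tn=40)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence in the surveyed population, which is why they differ between the summer and winter surveys.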
Three major observations can be deduced from these cross-sectional surveys. First, this study demonstrates that malaria is meso-endemic in the region and is concentrated in rural communities [38], where the intensity of transmission is largely dependent on environmental variables. Haque et al. have previously reported for this region that environmental factors, such as proximity to forest, household density, and elevation, tend to be significantly and positively correlated with malaria risk [39]. The development of immunity is insufficient to prevent the effect of malaria on all age groups [35]. A spleen rate of 18.6% (proportions of splenomegaly >10% are considered to be related to malaria [35]) and a malaria prevalence by microscopy of 14.2% in the rainy and 7.8% in the dry season show that malaria is meso-endemic throughout the year in this part of Bangladesh. The spleen rate and the higher malaria prevalence in children are further indicators of meso-endemicity and suggest that transmission occurs within the villages [35,38]. Second, malaria prevalence was significantly higher in the northern and eastern subdistricts that are located in the hilly areas bordering Myanmar and Rangamati, another endemic district, whereas endemicity was lower in western and southern subdistricts in or close to the coastal plains, which are more accessible and where access to health care facilities is better. The collected data indicate a level of malaria transmission similar to some African countries, such as Gabon [21], Somalia [40] or Mozambique [41]. Previous studies from India and Indonesia also detected similar parasite prevalence rates [42,43]. Third, a large reservoir of asymptomatic, parasitaemic individuals that are likely to act as a source of infection was observed in both surveys. The absence of malaria-like symptoms may be an indication of a certain level of immunity in rural communities.
Although generally considered to be a typical feature of malaria in highly endemic regions in Africa, a number of studies from Brazil, Peru, Thailand, Cambodia, Myanmar, Vietnam, eastern Indonesia, and Papua New Guinea have reported the existence of asymptomatic malaria infections outside of Africa and in areas with lower endemicity [34,[43][44][45][46][47][48][49]. This survey indicates a similar proportion of asymptomatic malaria carriers as previously reported from Vietnam [50], Indonesia [43] and Cambodia [45]. Naturally acquired immunity against P. falciparum, which is believed to build up with long-term exposure to malaria and presents with lower parasite densities and fewer clinical malaria episodes in older children and adults, has been reported from endemic areas in Myanmar, eastern Indonesia and India [42]. In areas with lower transmission intensity, the age at which clinical immunity develops tends to shift to an older age [42]. Malaria prevalence, parasite density and the number of clinical malaria cases were higher in children under 15 years. As previously reported by Alves et al. [46], asymptomatic carriers may act as a reservoir for parasites and are a likely source of infection. However, treatment is typically only provided to symptomatic patients, as asymptomatic carriers are rarely seen and/or diagnosed at health care facilities. Particularly in times of malaria elimination, asymptomatic malaria carriers with low parasitaemia will present new challenges for malaria control in a region where malaria diagnosis is mainly based on microscopy and RDT. Limitations of this study include all potential shortcomings of a point-prevalence study, as well as the potential bias arising from the fact that only one district was surveyed, which may not be representative of all 13 malaria-endemic districts in Bangladesh.
Secondly, there was some overlap with the previously reported cross-sectional survey based on RDTs [15], which, however, provides far less detailed epidemiological data.
Conclusions
The surveys showed that there are areas in Bangladesh with prevalence rates comparable to those found in malaria-endemic regions of tropical Africa. This study indicates, in accordance with other studies from Southeast Asia, that there is still a substantial proportion of asymptomatic, parasitaemic individuals in Bangladesh that may act as a silent reservoir for malaria transmission. This has major implications for ongoing malaria control programmes in Bangladesh that are based on prevention of infection through bed nets and treatment of symptomatic patients. Particularly considering ongoing elimination efforts, these findings highlight the need for new intervention strategies targeting all infections, symptomatic as well as asymptomatic, to reduce potential sources of infection and to interrupt the transmission cycle. Based on this study, it is evident that malaria remains an important public health problem in the southeastern part of Bangladesh. Further research is needed to determine the role of asymptomatic individuals in malaria transmission in the area.
Effect of a Bolus Dose of Fentanyl on the ED50 and ED95 of Sevoflurane in Neonates
Background
The minimum alveolar concentration (MAC) of sevoflurane in neonates is 3.3%, but this value has not been verified in Chinese neonates, and the effect of different doses of fentanyl on MAC in neonates has not been investigated. This study was designed to determine the ED50 and ED95 values of sevoflurane in Chinese neonates with and without fentanyl.
Material/Methods
Ninety-three neonates were randomly assigned to receive sevoflurane alone (control group, n=30), sevoflurane with 1 μg/kg fentanyl (group fent1, n=29), or sevoflurane with 2 μg/kg fentanyl (group fent2, n=32). Following inhalational induction and tracheal intubation, the end-tidal concentration of sevoflurane was adjusted to achieve the designated concentration, which was determined using the modified Dixon's up-and-down method starting with 3.0% in each group, with a 0.25% step size. Success was defined as no motor response within 60 s of skin incision.
Results
The MAC (standard deviation) values of sevoflurane were 2.91% (0.27) in the control group, 2.53% (0.31) in the fent1 group, and 2.34% (0.33) in the fent2 group according to Dixon's up-and-down method. Logistic probit regression analysis revealed that the ED50 and ED95 (95% CI) of sevoflurane in neonates were 2.82% (2.66–2.98) and 3.39% (2.89–3.89), respectively, in the control group; 2.44% (2.19–2.68) and 3.30% (2.51–4.09), respectively, in the fent1 group; and 2.21% (1.97–2.45) and 3.11% (2.35–3.88), respectively, in the fent2 group.
Conclusions
The MAC value of sevoflurane in Chinese neonates was lower than previously reported and was reduced by the addition of fentanyl.
Background
Sevoflurane is widely used for the induction and maintenance of anesthesia in pediatric patients due to its beneficial pharmacological characteristics, including low blood-tissue solubility, non-pungency, and limited cardiorespiratory depression.
Previous studies of sevoflurane have shown that the minimum alveolar concentration (MAC) increases as age decreases in childhood and infancy, and it has a similar value in infants and neonates [1][2][3]. Lerman et al. reported that the MAC of sevoflurane in neonates was 3.3% [1], but this value has not been verified in Chinese neonates. Opioids are often combined with sevoflurane to minimize the adverse effects of sevoflurane, but, to the best of our knowledge, no studies in English or Chinese have been performed to evaluate the effect of different doses of fentanyl on the MAC of sevoflurane in neonates. Therefore, the aim of our study was to determine the ED50 and ED95 values of sevoflurane in Chinese neonates. We also investigated the effects of different doses of fentanyl on the MAC of sevoflurane.
Patients and study design
This clinical trial was reviewed and approved by the Ethics Committee of Guangzhou Women's and Children's Medical Center. Written informed consent was obtained from the parents or legal guardians of each pediatric patient. In total, 93 full-term healthy neonates with an American Society of Anesthesiologists physical status I-II and undergoing elective or emergency surgery under general anesthesia were enrolled in the study. Neonates were excluded if they had cardiorespiratory, renal, or hepatic dysfunction. Neonates were also excluded if they received medications known to affect anesthetic requirements. Neonates were randomly allocated, using a computer-generated sequence of numbers, to 1 of 3 groups. Patients received either sevoflurane alone (control group) or different doses of fentanyl combined with sevoflurane (group fent1: 1 µg/kg; group fent2: 2 µg/kg). The number of patients in each group was selected to obtain 8 pairs of crossover points in the Dixon's graph.
Surgical procedure and clinical observations
Neonates were fasted for 4 h before surgery, and scopolamine (0.01 mg/kg) was subcutaneously administered 30 min before surgery.
Patients' electrocardiogram, oxygen saturation, noninvasive arterial pressure, and body temperature were monitored throughout the surgery. The operating room was pre-warmed to 24°C before induction of anesthesia. The body temperature of each patient, measured at the deep nasopharynx, was kept at 36.5-37°C by applying a heating blanket. An overhead radiant heater and plastic sheets were used to cover exposed skin. All neonates were pre-oxygenated for 3 min with 100% oxygen through a tight-fitting mask. Patients were then connected to a semi-closed anesthetic circuit prefilled with 6% sevoflurane, with the fresh gas flow rate set at 6 L/min. After loss of the eyelash reflex, a 24-gauge intravenous cannula was inserted if the patient did not already have an intravenous catheter before being taken to the operating room, and 0.9% normal saline was infused at a rate of 10 ml/kg/h. After tracheal intubation, the lungs were mechanically ventilated with 1 L/min of air and 1 L/min of oxygen. The ventilator rate was adjusted to maintain an end-tidal carbon dioxide partial pressure of 4.7-6.0 kPa. Arterial blood gas analysis was performed to determine the arterial carbon dioxide partial pressure and to adjust the balance of blood electrolytes. The end-tidal sevoflurane concentration and carbon dioxide partial pressure were continuously monitored using a Datex Capnomac airway gas monitor (Datex-Ohmeda, Helsinki, Finland) during the study. The end-tidal concentration of sevoflurane was changed to achieve the target concentration by another anesthesiologist who was unaware of the patients' assignment. As soon as the target concentration of sevoflurane was achieved, 1 µg/kg or 2 µg/kg fentanyl was infused over a period of 1 min in neonates in the fent1 and fent2 groups, respectively. In the control group, saline was infused.
Drugs were prepared in unlabeled 5-ml syringes by a nurse anesthetist who did not participate in the intraoperative management. The target end-tidal concentration of sevoflurane was maintained for 20 min to allow for equilibration between the alveolar and brain partial pressures. The sevoflurane end-tidal concentration during maintenance was considered the MAC for that subject if the neonate had not moved. After the skin incision, cisatracurium (0.2 mg/kg) was given for muscular relaxation. For each neonate, a total volume of 1 ml/kg of 0.2% ropivacaine was infiltrated into the wound as postoperative wound analgesia at the end of surgery. The study protocol is shown in Figure 1. For each neonate, the target end-tidal concentration of sevoflurane was determined using the modified Dixon's up-and-down method starting with 3.0% in each group, with a 0.25% step size. Increasing or decreasing the target end-tidal sevoflurane concentration was determined by the response of the previous neonate in the same group. The response of each neonate was observed for 60 s after the skin incision and evaluated as "successful" or "unsuccessful". Unsuccessful was recorded when the skin incision caused withdrawal of the neonate's hand or foot. If the response was determined to be unsuccessful, the end-tidal concentration of sevoflurane given to the next neonate was increased by 0.25%. If it was successful, the end-tidal concentration of sevoflurane given to the next neonate was decreased by 0.25%. All responses were assessed by an independent observer who was unaware of the sevoflurane concentration and group assignment. Each neonate contributed 1 data point toward the measurement of sevoflurane MAC in each study group. The midpoint between an unsuccessful response and a successful response in 2 consecutive neonates was defined as a crossover pair, and the study in each group ended after 8 crossover pairs were obtained.
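The up-and-down rule above is straightforward to simulate; the sketch below is an editorial illustration (not study code) that uses a deterministic response threshold as a stand-in for real neonatal responses and estimates MAC as the mean of the crossover-pair midpoints:

```python
def dixon_up_down(responder, start=3.0, step=0.25, crossovers_needed=8):
    """Modified Dixon up-and-down sequence as described in the text.

    responder(conc) -> True for 'successful' (no movement at skin incision).
    A crossover pair is an unsuccessful response followed by a successful
    one in two consecutive subjects; MAC is estimated as the mean of the
    midpoints of those pairs.
    """
    conc = start
    history = []      # (concentration, success) for each consecutive neonate
    midpoints = []
    while len(midpoints) < crossovers_needed:
        success = responder(conc)
        if history and not history[-1][1] and success:
            midpoints.append((history[-1][0] + conc) / 2)
        history.append((conc, success))
        conc += -step if success else step
    return sum(midpoints) / len(midpoints), history

# Hypothetical responder: no movement whenever end-tidal sevoflurane >= 2.9%
mac, history = dixon_up_down(lambda c: c >= 2.9)
```

With this deterministic threshold the sequence oscillates between 2.75% and 3.00%, and the estimate converges to 2.875%; with real, variable responses the sequence wanders around the true MAC and the mean of the eight midpoints gives the estimate.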
Baseline measurements of systolic, diastolic and mean arterial blood pressures, heart rate, SpO2, and temperature were recorded at 4 time points as follows: awake, before intubation, at the steady-state target concentration of sevoflurane before the skin incision, and at the steady-state concentration approximately 1 min after the skin incision. Hypotension was defined as a ≥30% decrease in mean arterial blood pressure compared to the blood pressure when the neonate was awake. Dopamine (1-10 µg/kg/min) was used to treat hypotension during sevoflurane anesthesia. The incidence of vomiting and of moderate and severe airway responses, including breath-holding (>15 s), coughing, laryngospasm (>5 s of phonation or inability to ventilate), bronchospasm (bilateral wheezing), and secretions (requiring suctioning), was recorded during the induction of anesthesia and emergence from anesthesia. The primary endpoint of the study was the end-tidal concentration of sevoflurane. The secondary endpoints were postoperative airway responses and adverse events.
Statistical analysis
Sample size determination, when using Dixon's up-and-down method, is relatively speculative. The Dixon method is a useful statistical approach for MAC calculation, requiring a moderate sample size; indeed, 6 pairs are considered optimal for a clinical study [4]. All statistical analyses were performed using SAS (SAS Institute Inc., Cary, NC, USA) statistical software. The end-tidal concentration of sevoflurane was analyzed by calculating the midpoint concentration of all independent pairs of crossover points. The MAC was defined as the mean of the median crossover concentrations. The up-and-down data were also subjected to logistic probit regression analysis to estimate the 50% and 95% effective sevoflurane concentrations (ED50 and ED95, respectively) and the 95% confidence interval (95% CI).
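Once a logistic dose-response curve has been fitted, ED50 and ED95 follow by inverting the link function; the sketch below assumes hypothetical fitted coefficients (chosen so that ED50 equals 2.82%, the control-group value) rather than the study's actual fit:

```python
import math

def ed_from_logistic(intercept, slope, p):
    """Effective concentration at response probability p for a fitted
    logistic model: logit(P(success)) = intercept + slope * concentration."""
    return (math.log(p / (1 - p)) - intercept) / slope

# Hypothetical coefficients: intercept = -8.46, slope = 3.0 per vol%
ed50 = ed_from_logistic(-8.46, 3.0, 0.50)   # logit(0.5) = 0, so ED50 = -intercept/slope
ed95 = ed_from_logistic(-8.46, 3.0, 0.95)
```

A shallower fitted slope widens the spread between ED50 and ED95, which is why the reported ED95 confidence intervals are much wider than those for ED50.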
ANOVA or a Kruskal-Wallis test was used to analyze the differences between patient age, weight, time to loss of the eyelash reflex, time to successful tracheal intubation, and operation time. Sex, cause of surgery, airway response, emergence agitation, and vomiting were analyzed with a chi-square analysis or Fisher's exact test. Intraoperative hemodynamic variables were analyzed using repeated-measures analysis of variance and the Newman-Keuls test. A p<0.05 was considered statistically significant.
Results
Ninety-three neonates were enrolled in this study. Two neonates were excluded from the study because of serious airway symptoms necessitating the use of muscle relaxants before the skin incision. There was no difference between the 3 groups in demographic data (Table 1). Hemodynamic responses were maintained within 20% of baseline measurements during the maintenance period. An airway response was not observed in the control group or the fent1 group during the emergence from anesthesia. One neonate in the fent2 group exhibited breath-holding during the emergence from anesthesia (Table 2) and was treated with assisted mask ventilation. Vomiting was not observed during the induction of anesthesia in any of the groups.
(Flattened residue of Table 1 and a figure: operative time was 66 (17), 63 (14) and 63 (15) min in the three groups; patients in the control group received sevoflurane alone, and patients in groups fent1 and fent2 received sevoflurane with 1 µg/kg or 2 µg/kg fentanyl, respectively; the figure's x-axis was labelled "Consecutive neonates" (1-30).)
The mean arterial blood pressure and heart rate are shown in Table 2. In total, 70% of neonates in the 3 groups had hypotension after receiving high induction doses of sevoflurane, especially before intubation.
However, mean arterial pressure returned toward normal levels after the sevoflurane concentration was reduced following intubation, and remained lower than awake values at the steady-state target concentration of sevoflurane.
Discussion
In this study, we found that the MAC of sevoflurane in Chinese neonates was lower than previously reported in white neonates [1]. Additionally, single doses of 1 µg/kg and 2 µg/kg fentanyl significantly reduced the end-tidal concentration of sevoflurane required for skin incision by 13% and 20%, respectively, in Chinese neonates. Few studies have evaluated the MAC of sevoflurane in neonates [1]. Lerman calculated the MAC of sevoflurane in neonates as 3.3%, which has been considered a reference value for sevoflurane anesthesia in neonates [1]. Our study showed that the MAC of sevoflurane in neonates was 2.91% according to Dixon's up-and-down method and 2.82% according to logistic probit regression curves. The MAC of sevoflurane for Chinese neonates is less than that of white neonates, which is consistent with a previous study showing that the MAC value of sevoflurane in Asians is less than in whites [4]. The MAC value is affected by the method of determination, type of surgery, patient age, body temperature, arterial carbon dioxide tension, and physiologic and genetic factors [1][2][3][4][5][6]. It is difficult to measure the alveolar concentration of inhaled anesthetics in the same subject repeatedly. In most studies, MAC is determined using the up-and-down method because it permits a small number of individuals to be studied. However, the up-and-down method can be affected by the starting concentration, the number of crossovers, the increment size of concentration adjustments, and inter-individual variability [7]. The starting concentration of sevoflurane in our study was 3.0%, which is similar to that used in clinical practice, but higher than the starting concentration of 2.4% in Lerman's study [1].
Furthermore, compared to Lerman's study, which had 4 crossover pairs and a total of 12 neonates, our study had 8 crossover pairs and a total of 28 neonates. More crossover pairs decrease the likelihood of reporting an inaccurate estimate and incur minimal additional costs [7]. Physiologic, genetic, and pharmacologic conditions may alter MAC, such as body temperature, hypercapnia, and hypotension. For each 1°C decrease in core temperature, anesthetic requirements decrease by 5% [6]. Hypotension and hypercapnia may decrease MAC by affecting central nervous system function [6]. Differences in body temperature, carbon dioxide, and the type of surgical operation could have contributed to the differing results between our study and Lerman's study. Scopolamine, which was used as an anticholinergic pre-anesthetic medication in our study, has a weak sedative effect. A study in cats suggested that scopolamine does not affect the MAC of halothane [8]. However, the effect of scopolamine on the MAC of sevoflurane in humans has not yet been reported. Thus, our results provide a more accurate reference value for clinical sevoflurane anesthesia, especially in Chinese neonates. The use of high doses of sevoflurane during anesthesia induction caused hypotension in our study. However, mean arterial pressure returned to normal values with the reduction of the sevoflurane concentration. Hypotension during sevoflurane anesthesia in neonates requires careful monitoring [9,10]. At equipotent doses, all of the potent inhalational anesthetics produce unacceptable hypotension in newborns [1,11]. Even at MAC concentrations, heart rate and blood pressure decrease by 12% and 30%, respectively, when using vapor anesthetics in newborns [6]. A newborn's myocardium is less compliant than that of an older child and has decreased contractile mass and a decreased velocity of shortening.
Also, the greater myocardial depression induced by volatile anesthetics in neonates may be mediated by inhibition of Na+-Ca2+ exchange and Ca2+ influx channels, and at least in part by direct inhibition of cross-bridge cycling. Therefore, the negative inotropic and chronotropic effects associated with inhaled anesthetics are poorly tolerated [12-14]. The need for deeper levels of anesthesia to achieve satisfactory conditions for endotracheal intubation places the infant in a precarious position because there is a small safety margin between anesthetic overdose and inadequate depth of anesthesia. Uptake of potent anesthetics is more rapid in children because of an increased respiratory rate and cardiac index, as well as a greater proportional distribution of cardiac output to vessel-rich organs. This rapid rise in blood anesthetic levels, combined with functional immaturity of cardiac development, most likely explains why it is easy to deliver an inhaled anesthetic overdose to infants [15]. Opioids are frequently used for pain relief during surgical procedures, as well as to reduce the dose of inhalational anesthetics during pediatric anesthesia [16,17]. Fentanyl, a synthetic opioid with activity at µ- and δ-opioid receptors, is used frequently in neonates because it has a rapid onset, provides hemodynamic stability, blocks stress responses, and prevents increases in pulmonary vascular resistance [18-20]. Furthermore, fentanyl does not significantly affect heart rate, blood pressure, cardiac output, or the regional distribution of blood flow to the major organs when it is administered in doses less than 3 µg/kg [20]. However, it can produce profound respiratory depression in newborns. Previous studies have shown that the plasma concentration of fentanyl in neonates varies only slightly between 30 min and 120 min after a bolus injection of the drug [21].
This prolonged elimination half-life of fentanyl has important clinical implications when repeated doses of fentanyl are used for the maintenance of analgesia, leading to accumulation of fentanyl and its respiratory depressant effects. In our study, small single doses of 1 µg/kg and 2 µg/kg of fentanyl significantly decreased the MAC and required concentration of sevoflurane in neonates, minimizing its adverse effects. Hence, although our study was not sufficiently powered to detect this effect, the use of fentanyl with sevoflurane had minimal respiratory depressant effects and improved the outcome of sevoflurane anesthesia in neonates. In the current study, we used both logistic probit regression and Dixon's up-and-down method to determine the ED50 and ED95 values of sevoflurane in Chinese neonates. The accuracy of the parameter estimates has been questioned, particularly when small samples, such as in the present study, are being evaluated [22]. Dixon's up-and-down method assumes that each measurement in a subject is independent and not correlated with any other measurements in that individual. The logistic regression technique uses the binary endpoint of success versus failure and does have potential weaknesses. In spite of these criticisms, the logistic regression model remains the only robust method to estimate both ED50 and ED95 values, and the 2 methods have been frequently used to study the potency of inhaled anesthetics in previous similar studies [1,23-26]. The end-tidal concentration of sevoflurane in our study was measured with a sampling tube placed at the junction between the tracheal tube and the circuit, rather than in alveolar gas, which reflects the true MAC. This method of monitoring expiratory concentrations of anesthetics is common in clinical anesthesia. Thus, the results can be used to guide clinical anesthesia.
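As a rough illustration of the logistic-regression route to ED50 and ED95 (not the authors' code, and with invented response counts), one can fit the binary no-move/move outcome by maximum likelihood and read off the concentrations at 50% and 95% success:

```python
# Hedged sketch (not the authors' code) of estimating ED50 and ED95 from
# binary no-move/move data with a logistic model fitted by maximum
# likelihood. Concentrations and response counts are invented.
import math
from scipy.optimize import minimize

concs   = [2.4, 2.6, 2.8, 3.0, 3.2, 3.4]  # end-tidal sevoflurane (%)
n_total = [6, 6, 6, 6, 6, 6]              # neonates tested per level
n_succ  = [0, 1, 2, 4, 5, 6]              # no movement at skin incision

def neg_log_lik(params):
    """Binomial negative log-likelihood of a logistic dose-response."""
    b0, b1 = params
    nll = 0.0
    for c, n, k in zip(concs, n_total, n_succ):
        z = max(min(b0 + b1 * c, 500.0), -500.0)  # avoid exp overflow
        p = 1.0 / (1.0 + math.exp(-z))
        p = min(max(p, 1e-12), 1.0 - 1e-12)       # guard against log(0)
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-9})
b0, b1 = fit.x
ed50 = -b0 / b1                            # concentration where P = 0.50
ed95 = (math.log(0.95 / 0.05) - b0) / b1   # concentration where P = 0.95
print(round(ed50, 2), round(ed95, 2))
```

This also makes the text's point concrete: ED95 comes out of the same fitted curve as ED50, which Dixon's method alone cannot provide.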
Conclusions In our study, the MAC value of sevoflurane in neonates was lower than previously reported. In addition, a single dose of fentanyl resulted in a dose-dependent decrease in the end-tidal concentration of sevoflurane required for skin incision. Fentanyl might also improve the outcome of sevoflurane anesthesia in neonates.
Maternal salinity influences anatomical parameters, pectin content, biochemical and genetic modifications of two Salicornia europaea populations under salt stress Salicornia europaea is among the most salt-tolerant of plants, and is widely distributed in non-tropical regions. Here, we investigated whether maternal habitats can influence different responses in physiology and anatomy depending on environmental conditions. We studied the influence of maternal habitat on S. europaea cell anatomy, pectin content, biochemical and enzymatic modifications under six different salinity treatments of a natural-high-saline habitat (~ 1000 mM) (Ciechocinek [Cie]) and an anthropogenic-lower-saline habitat (~ 550 mM) (Inowrocław [Inw]). The Inw population showed the highest cell area and roundness of stem water storing cells at high salinity and had the maximum proline, carotenoid, protein, catalase activity within salt treatments, and a maximum high and low methyl esterified homogalacturonan content. The Cie population had the highest hydrogen peroxide and peroxidase activity along with the salinity gradient. Gene expression analysis of SeSOS1 and SeNHX1 evidenced the differences between the studied populations and suggested the important role of Na+ sequestration into the vacuoles. Our results suggest that the higher salt tolerance of Inw may be derived from a less stressed maternal salinity that provides a better adaptive plasticity of S. europaea. Thus, the influence of the maternal environment may provide physiological and anatomical modifications of local populations. Salt stress is one of the main environmental factors that limits growth of plants worldwide. An environment with high, medium or low salinity may impact plants' ability to tolerate high salinity, and this impact varies between and within species. 
In heterogeneous environments such as natural and anthropogenic sites, plants could develop multiple strategies through producing offspring that differ in their salt stress tolerance 1,2 . For instance, according to El-Keblawy et al. 3 , the halophyte Anabasis setifera with phenotypic plasticity is better able to survive in harsh environmental conditions because the maternal environment can produce progeny that fit specific habitats well. Maternal effects might influence adaptive plasticity between generations, which can be considered an adaptive evolution due to the advantage conferred to the offspring, reflected in increased survival 4,5 . Many studies have demonstrated that maternal habitats can cause plants' growth to respond differently depending on environmental conditions and, subsequently, this affects the next generation 3,5,6 . Some studies have reported contradictory results: for instance, El-Keblawy et al. 3 showed that the halophyte Anabasis setifera has greater salt tolerance when taken from a non-saline habitat as compared to a population from a low-saline habitat (17.5 mS cm −1 ), while Van Zandt and Mopper 4 reported that Iris hexagona seeds from maternal high salinity germinated earlier and in greater quantity than did seeds from low-salinity plants. Currently, many […] Results Morphometrical parameters along the salinity gradient. Overall, an increase of 128% and 246% was observed between the minimum and maximum A for Cie and Inw, respectively. Between populations, A showed the highest difference between Cie and Inw at 1000 mM NaCl, with an increase of 159% in Inw with respect to Cie. The degree of succulence (S) of the stems was also calculated; these results adequately show the change between salt treatments, and the values are in accordance with the report of Delf 17 . The highest S was observed for the Inw population at 1000 mM.
A maximum increase in the cell diameter (Cdiam) and roundness (R) of water-storing cells was observed from 0 to 1000 mM NaCl for the Inw population (75.5% and 11.3%, respectively), while the Cie population has its maximum Cdiam increase of 49.7% from the 0 to the 200 mM NaCl treatment, and R increases by 11.5% from 0 to 600 mM NaCl (Fig. 2b,c). A significantly different behaviour of the Cie and Inw populations was therefore detected, as shown in the 3D plot (Fig. 2d), which comprises the three morphological parameters of both populations through the 6 salinity treatments. The Inw population showed a wider distribution, suggesting a better adaptation during the experimental salinity stress with respect to Cie, which presents a reduced distribution in the 3D plot. Biochemical modification to salt stress. Proline (P) showed an increase along the salinity gradient (Fig. 3a). The results show that P was significantly higher in Inw in comparison to the Cie population under salt stress, mainly at 400, 800 and 1000 mM. Meanwhile, hydrogen peroxide (HP) was significantly higher in Cie with respect to the Inw population in the 0, 200, 600, 800 and 1000 mM NaCl treatments. A significant increase was observed at 800 and 1000 mM NaCl in Cie and only very slightly at 1000 mM NaCl in the Inw population. Regarding the enzyme activities analysed, peroxidase (POD) activity increased markedly at 800 and 1000 mM NaCl in Cie with respect to Inw, which maintained a homogeneously low POD activity through all the treatments. These results correlate with the HP analysis (Fig. 3b,c). The lowest catalase (CAT) activity for Inw was found at the medium salinity treatments (200, 400 and 600 mM), while the highest CAT activity was found at the extremes (0, 800 and 1000 mM); in contrast, for the Cie population the CAT activity decreased along the salinity gradient. Both populations showed significant differences between them in all the treatments (Fig. 3d).
Chlorophyll a (Cha), b (Chb) and carotenoid (Carot) content showed a remarkable decrease in both populations under NaCl stress (Table 1). The chlorophyll content of Inw and Cie differed significantly in Cha at 200 mM and in Chb at 0 and 200 mM. No significant differences between the two populations were found in total chlorophyll content, but for carotenoids the highest content was found only at 0 mM for Cie and at 0 and 200 mM in Inw; comparing both populations, however, significant differences were observed across all treatments. Interestingly, the total soluble protein content was higher for Cie at the 0 mM treatment and decreased progressively along the salinity gradient, while it increased with salinity in Inw. High and low methylesterified HGs content and distribution under salt stress. Immunofluorescence analysis of the location of high and low methylesterified HGs (HM-HGs and LM-HGs) showed variances in the total levels of methylesterified HGs as well as in their distribution through the semi-thin cross-section of the fleshy tissue of the stem: epidermis, palisade tissue, cortex, vascular bundles and vascular cylinder. An increase in the total intensity level of HM-HGs under salt stress was identified with the JIM7 antibody in the stem cross-section for Inw, whereas for Cie the highest total intensity levels of HM-HGs were observed only at 200 mM NaCl, followed by a gradual decrease along the salinity gradient (Fig. 4a). The LM-HGs distribution identified with the LM19 antibody showed significantly higher levels of total intensity for Inw with respect to Cie in all salt treatments, with the exception of the 200 mM treatment (Fig. 4b). The HM-HGs and LM-HGs quantities also varied between treatments and populations, as observed in Fig. 5a-f (Ciechocinek) and in Table 2.

[Figure 1 caption: Stem-cortex cell area changes of S. europaea after 2 months in the Cie (a-f) and Inw (g-l) populations grown under different NaCl concentrations. Scale bar 150 μm; n = 300 ± 50 cells, 12 individuals per treatment. S corresponds to the degree of succulence in the stems 17,24 . The F value of S from the 2-way ANOVA for the interaction salt treatment × population is F 5,48 = 5.5; p < 0.001.]

In particular, epidermis tissue showed […] (Table 2). For LM-HGs in Inw, non-significant differences were found between treatments (Fig. 6h-l, arrowheads; Table 2). In the palisade tissue a significant increase in LM-HGs is identifiable at 1000 mM NaCl for the Inw population (Fig. 6l, Table 2). Evaluation of the differences between S. europaea populations. All the variables were evaluated in each population using principal component analysis (PCA) (Fig. 7a); both populations show a similar tendency at the low salt treatments. Figure 7a shows the PC1 and PC2 axes, which accurately describe the variance of the samples (75.43%). This plot shows which plants are the most tolerant with regard to salt stress and how they correlate with the active variables that describe low or high stress. It also shows that the Inw population seems to cope better with salinity. The biplot demonstrates that I1000 mM correlates well with the cell area variable, the morphometric trait suggesting that Inw is less affected under salinity stress; this agrees with the image growth analysis reported in a previous salinity tolerance study of the same populations 25 . Variables related to high stress, such as HP and POD, correlate better with the high-salinity treatments in Cie (Fig. 7a). This biplot also shows how the individuals move through the two-dimensional space of the principal components, from the positive to the negative quadrant of PC1 as salinity increases. The results were also grouped on a 3D plot (Fig.
7b) according to their similarities through the three principal component scores (PC1, PC2 and PC3) that describe the variance of the samples (84.87%), which shows that Cie plants are more susceptible to salt stress. Factorial scores from the PCA of each sample were used to calculate the distance between the two points under the same treatment, P1 = (x 1 , y 1 , z 1 ) and P2 = (x 2 , y 2 , z 2 ), in the 3D space of the PCA (Fig. 7b). The comparisons of C0 vs. I0 (3.30) and C1000 vs. I1000 (8.39) were made in the 3D Cartesian space (x = PC1, y = PC2, z = PC3), with the distance results indicating that the greater the stress, the greater the separation. In addition, the shortest distance, C200 vs. I200 (2.12), is observed at the optimum salinity for S. europaea growth, between 200 and 400 mM NaCl. Expression patterns of SeNHX1 and SeSOS1 genes involved in Na+ segregation in the S. europaea stem. The expression patterns of NHX1 and SOS1 in S. europaea stems under saline treatments were analysed with real-time quantitative reverse transcriptase polymerase chain reaction (qRT-PCR). The genes NHX1 and SOS1 encode a tonoplast Na+/H+ antiporter and a plasma membrane (apoplastic) Na+/H+ antiporter, respectively. SeNHX1 and SeSOS1 expression does not show a significant difference between treatments within the same population, but a significant difference in gene expression is visible between populations. SeNHX1 and SeSOS1 were equally expressed in the Inw population, while the Cie population showed the highest expression for SeSOS1 but very low SeNHX1 expression, as shown in Fig. 8. This confirms the differences between these two populations. [Figure 8 caption: Different letters indicate significant differences between treatments within a population and * indicates a significant difference between populations within a treatment (p < 0.05), n = 3.] Discussion According to the S.
europaea anatomical cell results obtained through image analysis, the cell area is similar under 0 mM NaCl in both populations, but significant differences were observed when the populations were subjected to salt stress. The Inw population has the highest values for all cell parameters tested; the highest value observed was in Inw at 1000 mM NaCl. Our results are in accordance with Akcin et al. 26 , who demonstrated that Salicornia freitagii stem anatomical characters such as thickness, length and width of water-storing tissue significantly increased when the halophyte grows under high salinity. The roundness of the cells analysed in this study shows that at higher salinities cells lose their natural hexagonal shape; this parameter was therefore useful to determine that cells turn round, probably due to the high water storage within them. The highest roundness was observed for Cie at 400 and 600 mM and for the Inw population at 800 and 1000 mM, suggesting that these rounded cells store a higher amount of water. These parameters, cell area and roundness, can be associated with an increase in succulence (S) as a way to store additional water by increasing vacuolar volume 26-29 ; these authors also showed that succulence is an adaptation mechanism in salt-tolerant cultivars subjected to saline stress. The succulence results in the present work adequately show the change between salt treatments, which is associated with the area of the cells; large cells can be linked to high turgidity and hence to the S of the plant. In the same line, proline, which allows additional water to be retained in the water-storage cells from the environment, positively correlates with the anatomical analysis. The 22% and 40% higher results for P in the Inw population at 800 and 1000 mM NaCl, respectively, relative to the Cie population can be linked to the increased cell area and roundness in the Inw population.
These features may allow cell water potentials to decrease 30,31 . Kumar et al. 31 demonstrated that, between two cultivars of Morus alba L. subjected to salt stress, proline metabolism was significantly altered and the extent of alteration varied between the cultivars: proline accumulation was higher in the salt-tolerant cultivar than in the salt-sensitive one, because a higher content of proline leads to the maintenance of turgor by preventing loss of water and ion toxicity, supporting its salt tolerance. Our results are also in accordance with the studies carried out by Aghaleh et al. and Akcin and Yalcin 30,32 for S. europaea. Moreover, the Cie population showed the highest HP and POD values, especially at the highest salinity, with percentage differences of 285% and 219%, respectively, with respect to Inw; from Fig. 3b,c we can determine which population is more salt-tolerant. According to Kong and Seo 33 , salt-tolerant cultivars showed less HP content compared to salt-sensitive cultivars, due to the salinity-induced production of reactive oxygen species (ROS), such as HP, which severely reduce overall plant growth in sensitive species. In the present study, the results indicate that the Cie population is more salt-sensitive than the Inw population. Aghaleh et al. 32 tested the effects of salt stress on the activities of antioxidative enzymes in two Salicornia species at NaCl concentrations of 0, 100, 200, and 300 mM, finding that salinity progressively enhanced POD activity, whereas CAT activity was only registered at low salinity. POD and CAT play a key role in removing ROS produced in plant cells under abiotic stresses. In this study, the Cie population showed higher levels of POD activity under high salinity, probably due to the remarkably higher content of HP that this population has under high salinity with respect to Inw.
Meanwhile, the decrease in photosynthetic activity when plants are subjected to salinity is reflected in the reduction of chlorophyll and CO2 fixation due to lower stomatal conductance 27,32 . Some plants grown under high salinity have a lower stomatal conductance as a strategy to conserve water 34,35 . Consequently, CO2 fixation is reduced and the photosynthetic rate decreases. The chlorophyll content of both populations was significantly different at 200 mM NaCl (Table 1), with no difference at high salinity. In this context, it is important to note that Chb is an adaptive feature of chloroplasts, because a high Chb content increases the range of wavelengths absorbed by the chloroplasts, which is regarded as a mode of adaptation when plants are subjected to an abiotic stressor 36 . In the present study, Inw showed a statistically significant higher Chb content compared to Cie under the 0 and 200 mM treatments, while Inw also had the higher Carot content, a sign of better adaptability to salt stress. The lower content of protein in the Cie population under salt stress and the higher content in the Inw population suggest a possible connection of protein with an osmotic adjustment that confers higher salt tolerance. The importance of protein for abiotic and biotic stress adaptation was thoughtfully reviewed by Sasidharan et al. 37 , who stated that the regulation of cell wall protein activity results in growth modulation during stress, and that this can be mediated by the regulation of wall-modifying proteins that alter cell wall structure and allow it to yield to turgor, thus driving cellular expansion; this was corroborated by the cell area analysed in this study for each population. According to Zagorchev et al. 12 , proteins of around 30 kDa are involved in cell wall rigidity, which plays a crucial role in plant growth and development during stress adaptation.
With regard to high and low methylesterified HGs, levels and distribution were noticeably different in each population. As pectin is important for the cell wall structure and can be modified in response to different signals such as salt stress, the analysis of pectins received major attention in the present study. Overall, it is already known that a large majority of the genes encoding proteins that modify cell wall structure are down- or up-regulated under salt treatment. In the case of pectin, Fan et al. 38 reported that genes encoding methylesterase inhibitor family proteins are up-regulated under saline conditions, which decreases the level of methyl esterification of pectins and affects their normal function by inhibiting pectin methylesterase activity. This behaviour was reflected in the present study for Cie, the less salt-tolerant population. However, differences in the content of HM-HGs may occur between salt-tolerant populations. For instance, Uddin et al. 39 indicate that under stress conditions the concentration of methylated pectic epitopes tends to drop, especially for those species that are less tolerant to salt stress. Meanwhile, Liu et al. 40 indicate that the degree and pattern of the methyl-esterification of pectin to some extent determine the stiffness of cell walls and, with this, the tolerance to salinity. The same study states that overexpression of the gene (AtPMEI13) that decreases pectin methylesterase activity in Arabidopsis enhances the total levels of methyl-esterified pectins, which was reflected in improved seed germination and survival growth rate under salt stress. Also, Le Gall et al. 11 reported that in salt-sensitive species high salinity triggers the de-esterification of loosely bound pectins, which impedes the swelling of cells, affecting such plants more than those with higher tolerance. According to Peaucelle et al.
15 , the de-esterification of homogalacturonans can lead to cell wall stiffening through the creation of "egg boxes" and to enzymatic degradation of pectin, which indicates a denser and less extensible cell wall. On the other hand, HM-HGs can be involved in remodelling the cell wall structure and mechanical properties, which under salt stress helps to regulate cell wall elongation and cell shape for better water accumulation; this translates into higher resistance to abiotic factors such as salinity 11 , as confirmed in the present study. Herein, highly methylesterified pectin was detected at high levels mainly in the epidermis (ep) and in the vascular bundles (vb) (Table 2). The vb tissue corresponds to collenchyma cells, which are elongated cells composed of cellulose and pectin, with irregularly thick cell walls that provide support and structure. These cells are often found under the epidermis and associated with vascular bundles. A study carried out on three maize hybrids with contrasting salt tolerances showed an accumulation of highly methylated pectin in the salt-tolerant maize genotype, which favoured their cells' elongation 39 . Another study, reported by Muszyńska et al. 2 , showed that highly methylated pectin (identified by immunolabelling with the JIM7 antibody) increased within the cell wall of Populus tremula under saline conditions. This increase was linked with a rise in the modulus of elasticity and a decrease in cell wall plasticity in order to keep the turgor pressure necessary for plant growth. According to these authors, under salt stress the cell walls of salt-sensitive cultivars can become more rigid (less flexible) while turgor pressure is maintained. Thus, maintaining good cell wall flexibility might be part of the mechanism by which salt-tolerant cultivars adapt to environmental stresses.
Pectin polysaccharides are also believed to play an important role in cell adhesion and tissue cohesion 41 , which would be very important for adaptation to stress in plants that live in saline environments. For instance, in the halophyte Sonneratia alba a decrease in calcium content was detected, which may be a strategy by this halophyte to reduce cell rigidity 42 . Meanwhile, Le Gall et al. 11 reported that in Salicornia europaea the genes encoding the cell wall proteins of the primary cell wall (including UDP-l-rhamnose synthase and cellulose synthases) decrease under saline conditions, while other genes that encode pectin methylesterase inhibitor proteins increase. Byrt et al. 43 reported that there are associations between higher cell wall pectin content and increased tolerance to salinity. According to Rasouli et al. 44 , salinity altered the physical properties of epidermal cells, specifically in the guard cell wall. In their study they demonstrated that cell-wall-modifying enzymes such as acetyl- and methyl-esterases of pectin were upregulated in the epidermal cells of the halophyte Chenopodium quinoa. They concluded that the methyl-esterification of pectins at the epidermis is critical for salt tolerance, increasing the mechanical strength of the guard cells that are exposed to salinity. Thus, pectin methyl-esterification is essential for plant responses to environmental stresses, which was also observed in the present study through the quantification of the fluorescence of the highly methyl-esterified pectin. The Inw population had a higher level of HM-HGs in the epidermis cells than did Cie, across all the salt treatments. This finding may also be associated with the fact that pectins in guard cell walls provide strength and flexibility in order to accommodate the turgor-pressure-driven changes in size and shape that underlie the opening and closing of stomatal pores under abiotic stress factors 45 .
Moreover, in the case of the vascular bundles, Fan et al. 38 reported that under salinity the S. europaea genes involved in cell wall metabolism are well linked with the increment of vessel differentiation in the xylem. In this sense, highly methyl-esterified pectin together with lignin in the xylem form the main passage for the assimilation of water and mineral elements 46 . The accumulation of large amounts of salt in S. europaea shoots under salinity therefore requires a more rigid support transport system from root to shoot, which may be an important strategy for this halophyte when adapting to salinity. This may explain our results with regard to the higher levels of HM-HGs in the vb for the Inw population. The results of the correlations between the investigated parameters are of great interest and some have not been reported before, especially the positive correlation between proline and cell area (0.728) (Table 3); this result confirms that the higher the cell area, the higher the proline content, which promotes plant succulence. Roundness has a similar correlation tendency with proline (0.672) due to cell turgidity, while the plants under 0 mM were mainly described by a higher total content of chlorophyll (Fig. 7a), and these two variables (R and TC) have a highly significant negative correlation (−0.781). Moreover, the inverse correlation (−0.688) between HM-HGs and HP is also an interesting finding, suggesting that when the plant is under salt stress the chemical cell wall composition is restructured; this was observed as a reduction in pectin content along the salinity gradient. However, the more salt-tolerant population Inw increases its HM-HGs content, suggesting that this component is related to better salt resistance; furthermore, a significant positive correlation was detected between HM-HGs and both A and Cdiam (0.605 and 0.639, respectively). Meanwhile, HP positively correlated with POD, as expected (0.863). The results in Fig.
7a illustrate the population salt tolerance and how the two populations move through the biplot, showing the active variables that correlate with the studied individuals depending on their salt tolerance. In the Inw population, individual I1000 correlates well with cell area and proline content, while in Cie, individuals C1000, C800 and C600 correlate well with HP and POD. The C0 and I0 individuals correlate well with TC, Cha, Chb and Carot. Additionally, the factorial scores (Fig. 7b) were useful in demonstrating that the highest separation between the Inw and Cie parameters was found at the highest salinity, indicating also that Cie shows more stress modifications at this salinity level. The present gene expression results confirmed that Inw can be considered a more salt-tolerant plant in comparison to the Cie population. This is because, according to Lv et al. 47 , NHX1 is one of the Na+/H+ antiporters in the tonoplast responsible for Na+ transport from the cytosol to the vacuole, and it plays a central role in salinity tolerance. In the present study, SeNHX1 was highly expressed only in the Inw population, suggesting the important role of the Na+/H+ antiporter in the Na+ influx to the vacuole in plant cells. For instance, Hayatsu et al. 42 reported that in the halophyte Sonneratia alba the Na+ content in the vacuole was higher than in the glycophyte Oryza sativa, concluding that halophilic cells gain salt tolerance by transporting Na+ into their vacuoles. Meanwhile, the SeSOS1 gene was highly expressed in the salt-sensitive population (Cie), suggesting that excreting Na+ to the apoplast is the main mechanism by which this population copes with salinity. Jha et al. 48,49 stated that the transcript of a Na+/H+ antiporter gene from Salicornia brachiata (SbNHX1) increased under different NaCl concentrations. However, in the present study, no significant differences were found between salt treatments.
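The separation measure referred to earlier, the Euclidean distance between the two populations' factorial scores under the same treatment in (PC1, PC2, PC3) space, can be sketched as follows, with hypothetical scores in place of the study's values:

```python
# Sketch of the population-separation measure: Euclidean distance
# between the factorial scores of two populations under the same
# treatment in 3D PCA space (x = PC1, y = PC2, z = PC3).
# Scores below are illustrative, not the study's values.
import math

def pca_distance(p1, p2):
    """Euclidean distance between two points in PCA score space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Hypothetical (PC1, PC2, PC3) scores for one treatment.
cie_1000 = (-3.1, 1.2, 0.4)
inw_1000 = (4.6, -1.0, 1.1)
print(round(pca_distance(cie_1000, inw_1000), 2))  # -> 8.04
```

Applied to the study's scores, this is the calculation behind the reported distances (C0 vs. I0 = 3.30, C200 vs. I200 = 2.12, C1000 vs. I1000 = 8.39): larger distances at higher salinity indicate stronger divergence between the populations' stress responses.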
This phenomenon may be attributed to the different plant species, salt treatments and experiment durations. Fan et al.38 found that more than half of the differentially expressed transcription factors in Salicornia are directly or indirectly involved in the salt response but also in the growth and development of S. europaea roots and shoots. Thus, only a small fraction participated exclusively in the stress response, which means that salinity efficiently induces the growth of S. europaea and that most of these genes can be activated throughout the whole developmental process, independently of the salt content. All the discussed results confirm our hypothesis that populations of the same S. europaea species from different maternal salinities adapt differently to salt stress at the anatomical, pectin, biochemical and gene levels, which can be important in the context of S. europaea as a future crop50, especially when considering seed sources. Materials and methods Plant materials, growth conditions and salt treatments. Soil sampling was performed as in previous experiments with S. europaea25. Seeds were collected at two maternal sites: the first represents natural salinity related to inland salt springs at the health resort of Ciechocinek (Cie) (52°53′N, 18°47′E), characterised by a high soil salinity of ca 100 dS m−1 (~1000 mM NaCl); the second is associated with soda factory waste that affects the local environment in Inowrocław-Mątwy (Inw) (52°48′N, 18°15′E), with a lower salinity of ca 55 dS m−1 (~550 mM NaCl). The complete soil description is reported in Piernik et al.51 and Szymanska et al.52,53. The populations are isolated by a distance of ca 40 km without any saline environment between them; however, they were once connected due to the presence of salt springs in the nineteenth century. The seeds came from one generation and were collected in early November 2018.
The seeds were germinated and grown following the same steps reported in Cárdenas-Pérez et al.25, with a slight modification in the number of salt treatments: 0, 200, 400, 600, 800 and 1000 mM NaCl. In total, 144 plants were cultivated in a complete randomised 2 × 6 factorial design (12 plants × 6 treatments × 2 populations) with 14 response variables. After 2 months of development, anatomical parameters such as cell area (A), roundness (R) and maximum cell diameter (Cdiam) were estimated in 12 samples, whereas high- and low-methyl-esterified pectins (HM-HGs and LM-HGs), proline (P), hydrogen peroxide (HP), total soluble protein (Prot), catalase activity (CAT), peroxidase activity (POD), chlorophyll a, b and total (Cha, Chb and TC) and carotenoid (Carot) contents, as well as SeNHX1 and SeSOS1 gene expression, were all determined in triplicate (plants were randomly selected). The collection of plant material complies with relevant institutional, national, and international guidelines and legislation, the IUCN Policy Statement on Research Involving Species at Risk of Extinction and the Convention on the Trade in Endangered Species of Wild Fauna and Flora. A voucher specimen of the plant material has been deposited in the publicly available herbarium of the Nicolaus Copernicus University in Toruń (Index Herbarium code TRN); a deposition number is not available (dr. hab. Agnieszka Piernik, prof. NCU, undertook the formal identification of the plant species, and permission to work with the seeds was provided by the Regional Director of Environmental Protection in Bydgoszcz, WOP.6400.12.2020.JC).
Table 3. Pearson correlation matrix of the anatomical, biochemical and pectin content parameters. Values in bold are different from 0 with a significance level alpha = 0.05.
Anatomical image analysis. From the middle primary branch (fleshy segment shoot) of S.
europaea plant treatments (0, 200, 400, 600, 800 and 1000 mM NaCl), slices of fresh tissue were obtained by cutting with a sharp double-edged razor blade. The thinnest slices, approximately 0.5 mm, were selected and used in the microstructure analysis. The size and shape of the stem-cortex cells from the fresh water-storing tissue were characterised with a light microscope (Olympus BX51, USA) connected to a digital camera (DP72 digital microscope camera) and digital acquisition software (DP2-BSW). The microscope images were captured at a magnification of 10×/0.30 in RGB scale and stored in TIFF format at 1280 × 1024 pixels. A total of 300 ± 50 cells from five individuals per treatment were analysed. Finally, the shape and size of the cells were obtained from the captured images. Cell image analysis (IA) was performed in ImageJ v. 1.47 (National Institutes of Health, Bethesda, MD, USA). The following anatomical parameters were obtained. Firstly, the cell area (A) was estimated as the number of pixels within the cell boundary. Secondly, the maximum cell diameter (Cdiam) was determined as the distance between the two points separated by the largest coordinates in different orientations, and the cell roundness (R) was obtained through the equation R = 4A/(π·Cdiam²), where a perfectly round cell has R = 1.0, while elongated cells show R → 0. Finally, the degree of succulence (S) in the stem was calculated according to24 with a slight change as S = (Fresh Weight − Dry Weight)/stem area, where the stem area (As) was calculated as As = π × r²; the diameter of the stems was obtained according to Cárdenas-Pérez et al.25. Immunolocalisation experiments. The samples dissected from the middle segment of the shoot (3 individuals per treatment) were prepared for embedding in BMM resin (butyl methacrylate, methyl methacrylate, 0.5% benzoyl ethyl ether (Sigma) with 10 mM DTT (Thermo Fisher Scientific)) according to Niedojadło et al.54.
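The roundness and succulence descriptors above reduce to a few arithmetic formulas; a minimal sketch (function and variable names are ours, and the sample values are hypothetical):

```python
import math

def roundness(area, cdiam):
    """Cell roundness R = 4A / (pi * Cdiam^2): 1.0 for a circle, -> 0 for elongated cells."""
    return (4.0 * area) / (math.pi * cdiam ** 2)

def succulence(fresh_weight, dry_weight, stem_diameter):
    """Degree of succulence S = (FW - DW) / stem area, with stem area As = pi * r^2."""
    stem_area = math.pi * (stem_diameter / 2.0) ** 2
    return (fresh_weight - dry_weight) / stem_area

# A perfectly circular cell: area = pi * r^2, diameter = 2r, so R should be 1
r = 10.0
print(round(roundness(math.pi * r ** 2, 2 * r), 6))
print(round(succulence(2.0, 1.0, 2.0), 4))
```

Irregular (elongated) cells measured this way give R strictly below 1, which is what makes R a usable shape index alongside A.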
Next, specimens were cut on a Leica UCT ultramicrotome into serial semi-thin cross sections (1.5 µm) that were collected on Thermo Scientific Polysine adhesion microscope slides. Before the immunocytochemical reaction, the resin was removed with two changes of acetone and the sections were washed in distilled water and PBS pH 7. Fluorescence quantitative evaluation. For quantitative measurement, each experiment was performed using consistent temperatures, incubation times and antibody concentrations. The aforementioned ImageJ (v. 1.47) software was used for image processing and analysis. The fluorescence intensity was measured for five semi-thin sections for each experimental population (Inowrocław and Ciechocinek) at the same magnification (100×) and a constant exposure time to ensure comparable results. The threshold fluorescence in the sample was established based on the autofluorescence of the control reaction. The level of signal intensity was expressed in arbitrary units (a.u.) as the mean intensity per μm² according to Niedojadło et al.54. Biochemical analysis. Proline content (P) was measured according to Ábrahám et al.55. Five hundred milligrams of fresh stem material was minced on ice, homogenised with 3% aqueous sulfosalicylic acid solution (5 μl mg−1 fresh plant material) and centrifuged at 18,000×g for 10 min at 4 °C, and the supernatant was collected. The reaction mixture consisted of 100 μl of 3% sulphosalicylic acid, 200 μl of glacial acetic acid, 200 μl of acidic ninhydrin reagent and 100 μl of supernatant. The acidic ninhydrin reagent was prepared according to Bates et al.56. The standard curve for proline covered the concentration range of 0 to 40 μg ml−1; the standard curve equation was y = 0.0467x − 0.0734, R² = 0.963. P was expressed in mg of proline per gram of fresh weight. Hydrogen peroxide (HP) levels were determined according to the methods described by Velikova et al.
57. Stem tissue (500 mg) was homogenised with 5 ml of 0.1% (w/v) trichloroacetic acid in an ice bath. The homogenate was centrifuged (12,000×g, 4 °C, 15 min) and 0.5 ml of the supernatant was added to 0.5 ml of potassium phosphate buffer (10 mM, pH 7.0) and 2 ml of 1 M KI. The absorbance was read at 390 nm, and the HP content was determined from a standard curve covering 0 to 40 mM; the standard curve equation was y = 0.0188x + 0.046, R² = 0.987. HP concentrations were expressed in nM per gram of fresh weight. Chlorophylls (Cha and Chb) and carotenoids were extracted from fresh plant stems (100 mg) using 80% acetone for 6 h in darkness and then centrifuged at 10,000 rpm for 10 min. Supernatants were quantified spectrophotometrically: absorbance was determined at 646, 663 and 470 nm, and calculations were performed according to Lichtenthaler and Wellburn58 for 80% acetone as solvent. Total chlorophyll content was calculated as the sum of the chlorophyll a and b contents. Total CAT activity was determined spectrophotometrically by following the decline in A240 as H2O2 (ε = 39.9 M−1 cm−1) was catabolised, according to the method of Beers and Sizer59. The decrease in absorbance of the reaction at 240 nm was recorded every 20 s. One unit of CAT was defined as an absorbance change of … The fluorescence signal was recorded at the end of the extension step in each cycle. The specificity of the assay was confirmed by melt curve analysis, i.e., increasing the temperature from 55 to 95 °C at a ramp rate of 0.11 °C/s. The fold-change in gene expression was calculated using LightCycler 480 Software release 1.5.1.62 (Roche, Penzberg, Germany). Statistical and multivariate analysis. In order to determine the projection of the effect of salt treatment in plants, we followed Cárdenas-Pérez et al.25. The data were fitted with a modified three-parameter exponential decay model using SigmaPlot version 11.066.
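Converting measured absorbances back to concentrations with the standard curves reported above is a one-line inversion of y = slope·x + intercept; a sketch (the function name is ours, and conversion to content per gram fresh weight additionally depends on extract and aliquot volumes, which are omitted here):

```python
def conc_from_absorbance(absorbance, slope, intercept):
    """Invert a linear standard curve y = slope * x + intercept to recover concentration x."""
    return (absorbance - intercept) / slope

# Standard curves reported in the text:
# proline: y = 0.0467x - 0.0734 (x in ug/ml); H2O2: y = 0.0188x + 0.046
proline = conc_from_absorbance(0.5, 0.0467, -0.0734)
h2o2 = conc_from_absorbance(0.5, 0.0188, 0.046)
print(round(proline, 2), round(h2o2, 2))
```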
The relationships between variables were assessed using Pearson analysis, while a significance test (Kaiser–Meyer–Olkin) was performed to determine which variables had a significant correlation with each other (α = 0.05). Then, a 3D plot was developed using the three principal component factors according to the Kaiser criterion, which states that factors with eigenvalues below unity are irrelevant. The three main factorial scores of the PCA from each sample were used to calculate the distance (D) between the two points (populations) under the same treatment, P1 = (x1, y1, z1) and P2 = (x2, y2, z2), in the 3D space of the PCA:

D = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²), (1)

where x, y and z are the three main factorial scores in the PCA corresponding to the evaluated treatment in Inw and in Cie. The distances were used to evaluate and determine in which salt treatment the greatest differences between the populations were recorded. Conclusions This work shows that cell image analysis was efficient at evaluating the anatomical response of S. europaea to salinity and can be used to identify differences between populations coming from different maternal salinities. By analysing the cell parameters of area and roundness, we can conclude that these parameters are a good indicator of both succulence and plant salinity tolerance. The biochemical analysis proved that the anatomical parameters that confer higher salinity tolerance strongly correlate with the cells' modifications, as confirmed by the Pearson correlation, which highlighted the relationships between anatomical and biochemical parameters. PCA provided evidence that the plants from the anthropogenic saline habitat (Inw), with lower maternal soil salinity (~550 mM), are more tolerant to saline stress under laboratory conditions than those from the natural site with high maternal soil salinity (~1000 mM).
Our results suggest that the higher salt tolerance of the Inw population may derive from the maternal salinity being less stressful and from the better adaptive plasticity of S. europaea. Based on our analysis as a whole, it is clear that the applied methods demonstrate that the two S. europaea populations from different maternal habitats do indeed have different mechanisms of salt adaptation at the cellular and biochemical levels at high salinities, as well as a positive salt-tolerance effect under lower salinities. The gene expression analysis suggested the important role of Na+ sequestration into the vacuoles and confirmed that the Inw population can be considered more salt-tolerant than Cie. Therefore, these results can be used in the future for the selection of resistant plants. The present correlations between anatomical and biochemical modifications and maternal soil salinity are novel in the study of salt-resistant plants, meaning that researchers can apply this correlation analysis straightforwardly in future experiments related to plant salinity-development responses. Although further studies are required, these preliminary results support the idea that maternal effects influence offspring physiology under stress environments. However, future studies should consider the ecological context in which plasticity and maternal effects are expressed, such as studies of the patterns of natural populations in terms of their environmental heterogeneity.
A PLS Approach to Measuring Investor Sentiment in Chinese Stock Market We select five objective sentiment indicators and one subjective sentiment indicator to build an investor sentiment composite index for the Chinese stock market using partial least squares (PLS). We do so to remedy the shortcomings of principal component analysis, which was used to build investor sentiment composite indexes in the pioneering research. Moreover, because individual investors account for a large proportion of the Chinese stock market and investor sentiment changes rapidly, we innovatively use weekly data, which have smaller information granularity and higher frequency. Through empirical tests of its reasonability and its capability to predict the market, we find that this index fits the data better and improves prediction. Introduction Recently, investor sentiment measurement has become one of the more widely examined areas in behavioral finance. The key to measuring investor sentiment is to find proxy indicators that express sentiment accurately. Ideally, these proxies are observable and quantifiable and objectively and comprehensively reflect investors' views on the market. Investor sentiment proxy indicators are usually divided into three types: single objective sentiment indicators, single subjective sentiment indicators, and comprehensive sentiment indexes. Single indicators are the basic components of composite index construction and are used flexibly in different studies. The composite index has theoretical advantages: if it is properly constructed, we obtain a more accurate measure of sentiment. Following the pioneering literature, the construction of comprehensive sentiment indexes has become the mainstream approach.
Baker and Wurgler [1] used the first principal component of the proxies as their measure of investor sentiment, and this approach has been extensively adopted in subsequent research; for example, Stambaugh et al. [2], Ben-Rephael et al. [3], Chen et al. (2014), Chong et al. [4], Zhigao and Ning [5], and Ma and Zhang [6], among others, basically adopted this method. However, the first principal component is the combination of the six proxies that maximally represents their total variation. Since all the proxies may contain approximation errors relative to the actual but unobservable investor sentiment, and these errors are part of their variation, the first principal component can potentially contain a substantial amount of common approximation error that is not relevant for forecasting returns. The partial least squares (PLS) method addresses this problem effectively. The principal advantage of PLS is that it extracts, as far as possible, the investor sentiment component of the proxy variables, ensuring that the extracted part is close to true investor sentiment. For example, Huang et al. [7] use the same six American investor sentiment proxies as Baker and Wurgler [1], namely the closed-end fund discount rate, share turnover, number of IPOs, first-day returns of IPOs, the dividend premium, and the equity share in new issues, to propose a new sentiment index based on the PLS method. They call the index extracted in this way the aligned investor sentiment index and find that it has greater power in predicting the aggregate stock market than the Baker and Wurgler [1] index. The PLS method has thus proved suitable for constructing an investor sentiment index in the American stock market. In this paper, for the purpose of better predicting the Chinese aggregate stock market, we develop a Chinese market sentiment index using the PLS method.
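For contrast, the PCA-based index criticized here is simply the first principal component of the standardized proxies; a toy sketch with simulated data (all names and numbers are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical standardized sentiment proxies: T weeks x N indicators,
# each loading on one latent "true sentiment" factor plus idiosyncratic noise.
T, N = 300, 6
latent = rng.standard_normal(T)
X = np.outer(latent, rng.uniform(0.5, 1.5, N)) + rng.standard_normal((T, N))
Xc = (X - X.mean(0)) / X.std(0)

# Baker-Wurgler style index: first principal component of the proxies (via SVD)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]
print(round(abs(np.corrcoef(pc1, latent)[0, 1]), 2))
```

The first PC tracks the common factor, but it weights proxies only by how much they co-move, with no reference to the forecasting target, which is exactly the weakness PLS is meant to fix.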
The rest of the paper is organized as follows. Section 2 introduces the principle of the partial least squares (PLS) method used to construct the index. Section 3 constructs the comprehensive index of investor sentiment and then tests its robustness and its power to predict the stock market. Finally, Section 4 concludes the paper. Principle Introduction of Partial Least Squares (PLS) Partial least squares (PLS) was first proposed by Wold and Albano in 1983. It can realize multivariable regression modeling in small samples. After the improvement by Kelly and Pruitt [8], it can be used to solve the problem of variable information extraction. Different from principal component analysis, partial least squares decomposes the predictor variables X and the response variable y jointly, extracts components (usually called factors) from both at the same time, and then orders the factors from large to small according to the correlation between them. In other words, the partial least squares method not only explains the information in the predictor variables well but also summarizes the response variable well and eliminates noise interference in the system. Therefore, it can effectively remedy the problem that the PCA method extracts only the information hidden in the predictor variables X, which reduces the accuracy of the regression model. We assume that the one-period-ahead expected value of the market index, explained by investor sentiment, follows the standard linear relation:

E_t(P_{t+1}) = α + β · SENT_t, (1)

where SENT_t represents the comprehensive investor sentiment index in period t and P_t represents the closing price of the China Securities Free Float Index (CSI Free Float) in period t. (The CSI Free Float index is composed of the fully circulating shares of the Shanghai and Shenzhen stock markets; it is based on December 30, 2005, adjusts the market capitalization of the stocks based on all samples, and has a base point of 1000.)
The formula shows that the expected closing price of the CSI Free Float in period t+1 is related to investor sentiment in period t. The realized closing price of the CSI Free Float in period t+1 is therefore

P_{t+1} = α + β · SENT_t + ε_{t+1}, (2)

where ε_{t+1} is a residual term that is unpredictable and has nothing to do with investor sentiment SENT_t. Let x_t = (x_{1,t}, x_{2,t}, ..., x_{N,t})′ denote the N × 1 vector of individual investor sentiment proxies in period t, and assume that each original proxy has the following structure:

x_{i,t} = η_{i,1} · SENT_{i,t} + η_{i,2} · E_t + e_{i,t}. (3)

We assume that SENT_t should be a linear combination of the SENT_{i,t}; that is, the relationship between SENT_t and the SENT_{i,t} is

SENT_t = w_1 · SENT_{1,t} + w_2 · SENT_{2,t} + · · · + w_N · SENT_{N,t}, (4)

where SENT_{i,t} represents the investor sentiment information contained in the original proxy x_{i,t}; E_t represents deviation information, which is unrelated to investor sentiment but is related to the closing price of the CSI Free Float; e_{i,t} is a noise term unique to the proxy x_{i,t}; η_{i,1} and η_{i,2} represent the sensitivities of the proxy x_{i,t} to SENT_{i,t} and E_t, respectively; and w_i represents the weight in the composite index of the investor sentiment information contained in the proxy x_{i,t}. Therefore, the core of the problem lies in how to extract the investor sentiment information SENT_{i,t} of this structure from each original proxy x_{i,t}. The partial least squares method is better suited to this than principal component analysis, since it can effectively eliminate the interference of the deviation information and the idiosyncratic noise and thus construct a comprehensive sentiment index that reflects real investor sentiment. Integrating (2), (3), and (4), we can derive the following relationship between the individual sentiment proxies x_t = (x_{1,t}, x_{2,t}, ..., x_{N,t}) and the closing price of the CSI Free Float:

x_{i,t} = φ_{i,0} + φ_i · P_{t+1} + u_{i,t}, (5)

where φ_i represents the explanatory power of the original proxy x_{i,t} for the closing price of the CSI Free Float. Combining (2), (3), and (4), we can see that each investor sentiment proxy can be expressed as a linear function of the closing price of the CSI Free Float and has nothing to do with the unpredictable deviation ε_{t+1}. Therefore, φ_i in (5) can be used to reflect the degree to which the proxy x_{i,t} contributes to the comprehensive investor sentiment index SENT_t; the contribution of each proxy can be determined by the covariance between the proxy x_{i,t} and the closing price P_{t+1}. Then, based on the PLS method, the comprehensive investor sentiment index can be expressed as

SENT_t^PLS = w_1 · x_{1,t} + w_2 · x_{2,t} + · · · + w_N · x_{N,t}, (6)

where x_t = (x_{1,t}, x_{2,t}, ..., x_{N,t}) is the sequence of original individual sentiment proxies and w = (w_1, w_2, ..., w_N) contains the weights of the proxy indicators in the comprehensive investor sentiment index. Data. When collecting the indicator data, we note that, given the large proportion of individual investors in the Chinese stock market, sentiment is extremely easily influenced by short-term market volatility, leading to irrational speculation. In order to track changes in investor sentiment on the market more accurately, in this paper we innovatively adopt weekly data, which have smaller information granularity and higher frequency and capture immediate investor sentiment, rather than the annual or monthly data used in most of the previous literature. The weekly data set from January 4, 2008, to May 30, 2014, is used as the training set for sentiment index construction.
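The two-pass construction described above, that is, a first-pass time-series regression of each proxy on the next-period index close, followed by a period-by-period cross-sectional regression of the proxies on those first-pass loadings, can be sketched on simulated data (function names and numbers are ours):

```python
import numpy as np

def aligned_sentiment(X, p_next):
    """Two-pass (Kelly-Pruitt style) PLS extraction of a composite sentiment index.

    X      : (T, N) matrix of sentiment proxies
    p_next : (T,) next-period index closing price
    Returns the (T,) standardized composite index.
    """
    Xc = X - X.mean(0)
    yc = p_next - p_next.mean()
    # First pass: time-series slope of each proxy on the next-period close
    phi = Xc.T @ yc / (yc @ yc)          # (N,)
    # Second pass: each period, cross-sectional slope of the proxies on phi
    sent = Xc @ phi / (phi @ phi)        # (T,)
    return (sent - sent.mean()) / sent.std()

# Hypothetical toy data: proxies load on one latent factor that also drives the index
rng = np.random.default_rng(1)
T, N = 400, 5
s = rng.standard_normal(T)
X = np.outer(s, rng.uniform(0.5, 2.0, N)) + 0.5 * rng.standard_normal((T, N))
p_next = 2.0 * s + rng.standard_normal(T)
sent = aligned_sentiment(X, p_next)
print(round(abs(np.corrcoef(sent, s)[0, 1]), 2))
```

Because the first pass weights each proxy by its covariance with the forecasting target, common noise that does not predict the index close is discounted, which is the point of the PLS construction.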
At the same time, in order to test the validity and robustness of the index construction method, we use the weekly data from June 6, 2014, to May 29, 2015, as the test set for index construction and use the CSI Free Float over the corresponding period to represent the overall performance of Chinese A shares. In the specific selection of proxy indicators, we select five objective indicators: the SWS Low Profit Margin Stock Index (LPM(0)), the SWS High-P/E-Ratio Index (HPEI(0)), the SWS High-P/B-Ratio Index (HPBI(0)), the one-period-lag Newly Additional Fund Accounts (NAFA(+1)), and the six-period-lag number of new IPOs (NIPO(+6)), plus one subjective indicator over the same period: the New Fortune Analyst Index (CAI(0)). Following the conclusions of Baker and Wurgler [1], we believe that investor sentiment leads investors to make decisions; at the same time, investor sentiment itself is affected by changes in macroeconomic factors; for example, the number of IPOs changes with macroeconomic cycle fluctuations. But this component is based on objective analysis of the actual macroeconomic situation: it is rational sentiment grounded in investors' assessment of fundamentals and is not within the scope of this study. Therefore, we separate out the rational component of investor sentiment through a multivariate regression model, eliminate it, and retain only the irrational component of investor sentiment:

x_{i,t} = β_{i,0} + β_i · Macro_t + ε_{i,t}, (7)

where x_{i,t} is the value of an original proxy variable in period t, that is, LPM(0), CAI(0), NAFA(+1), HPBI(0), HPEI(0), or NIPO(+6); Macro_t is a set of indicators reflecting macroeconomic fundamentals; β_i is the parameter vector to be estimated; β_{i,0} is a constant; and ε_{i,t} is the regression residual, which represents irrational sentiment purged of macroeconomic fundamentals.
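The separation of the rational, macro-driven component just described is an ordinary OLS residual; a sketch with simulated data (names are ours, and the macro series are stand-ins for CCPI and MNS):

```python
import numpy as np

def irrational_component(proxy, macro):
    """Residual from regressing a sentiment proxy on a constant plus macro variables.

    The fitted part is the 'rational' macro-driven component; the residual is
    the irrational sentiment retained for index construction.
    """
    T = len(proxy)
    Z = np.column_stack([np.ones(T), macro])           # constant + macro regressors
    beta, *_ = np.linalg.lstsq(Z, proxy, rcond=None)   # OLS estimate
    return proxy - Z @ beta

rng = np.random.default_rng(2)
T = 300
macro = rng.standard_normal((T, 2))                    # hypothetical CCPI, MNS series
proxy = macro @ np.array([0.8, -0.4]) + rng.standard_normal(T)
eps = irrational_component(proxy, macro)
# OLS residuals are orthogonal to the regressors by construction
print(np.allclose(macro.T @ eps, 0.0, atol=1e-8))
```

The orthogonality check makes the point of the procedure explicit: whatever survives in eps carries no linear macro information.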
Here, taking into account the representativeness of macroeconomic cycle variables and the availability of weekly data, we use China's Commodity Price Index (CCPI) and the central bank's weekly net monetary supply (MNS) as proxy variables for macroeconomic fundamentals. The residual series obtained from the regressions, ε_{1,t}, ε_{2,t}, ε_{3,t}, ε_{4,t}, and ε_{5,t+6}, are denoted ELPM(0), ECAI(0), EHPBI(0), EHPEI(0), and ENIPO(+6), respectively. They represent the irrational investor sentiment proxies after the elimination of macroeconomic fundamentals. Because the selected original proxy variables of investor sentiment do not follow a normal distribution, we standardize the indicators with the 0-1 (min-max) method, which subtracts the minimum value of a variable from each observed value and divides by the variable's range:

x* = (x − x_min) / (x_max − x_min). (8)

After standardization, the series ELPM(0), ECAI(0), EHPBI(0), EHPEI(0), and ENIPO(+6) are denoted sLPM(0), sCAI(0), sHPBI(0), sHPEI(0), and sNIPO(+6). The standardized observations of each variable fall between 0 and 1; the standardized data are pure numbers without units and can be used directly in the following index construction. After the above pretreatment, the descriptive statistics of the selected investor sentiment proxy indicators are shown in Table 1. Investor Sentiment Composite Index Construction. We use the pretreated investor sentiment proxy series sLPM(0), sCAI(0), sHPBI(0), sHPEI(0), and sNIPO(+6), five indicators in all. Firstly, before the model is estimated, we must determine the number of principal components to retain. If the number of selected components is too large, it is likely to lead to overfitting.
Conversely, if the number of selected principal components is too small, important information is likely to be lost. In order to find the optimal number of principal components, we follow "leave-one-out cross-validation" when choosing the number of components for the final model, selecting the number at which the error sum of squares reaches its minimum or essentially stops changing. The results are shown in Table 2, which reports the model fitting results for different numbers of principal components. Based on the error sums of squares obtained by leave-one-out cross-validation for different numbers of components in Table 2, combined with Figure 1, we can see that, when the number of principal components is two, the squared error hardly changes any further and the cumulative contribution rate is sufficient. The correlation coefficients between the comprehensive investor sentiment index SENT_PLS and each sentiment proxy are given in Table 3. The statistics show that the correlations of sLPM(0), sCAI(0), sHPBI(0), and sHPEI(0) with the sentiment index SENT_PLS are the highest: the correlation coefficients are 0.9350, 0.9439, 0.9626, and 0.9704, respectively. The correlation coefficient between sNIPO(+6) and SENT_PLS is 0.4913. From the signs of the factor loadings, we find that, except for sNIPO(+6), the loading coefficients of all the other variables are positive. This means that sLPM(0), sCAI(0), sHPBI(0), and sHPEI(0) are positive indicators of the composite index built with the PLS method, which is basically consistent with theoretical expectations. On the contrary, sNIPO(+6) is a negative indicator. Robustness Test.
In order to guarantee the stability of every proxy indicator in the investor sentiment composite index, we divide the whole study period into two "bull market" periods (spanning, respectively, 2008.11.7-2010.11.5 and 2012.12.7-2014.5.30) and two "bear market" periods (spanning, respectively, 2008.1.4-2008.11.7 and 2010.11.5-2012.12.7). We then construct the investor sentiment index in the two market states separately and observe whether the coefficients and signs of each proxy indicator change significantly relative to the index constructed above. It should be noted that although the sample period is divided into "bull market" and "bear market" periods, over the sample span from January 4, 2008, to May 30, 2014, the overall market never exceeded its previous highs, so the entire sample period can still be regarded as a bear market. The conditions for the robustness test are therefore relaxed: as long as, in the "bear market" periods, there is no significant difference between the factor structure of the sentiment composite index and the full-sample index, the investor sentiment composite index constructed by this method can be regarded as robust. Otherwise, it is not robust: it would change with market conditions, affecting the validity and accuracy of the empirical results. In the bull market and bear market periods, the partial least squares method is used to extract the investor sentiment information from the original proxy indicators, and this information is then synthesized into the investor sentiment composite index. Here, we still use the five indicators sLPM(0), sCAI(0), sHPBI(0), sHPEI(0), and sNIPO(+6) and the results of cross-validation to determine the number of principal components in the model.
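The leave-one-out choice of the number of components can be reproduced with a hand-rolled PLS1 and a PRESS (predicted residual sum of squares) loop; a self-contained sketch on synthetic data (not the paper's series):

```python
import numpy as np

def pls1_coef(X, y, k):
    """Regression coefficients of PLS1 with k components (NIPALS deflation).

    X and y are assumed centered; returns b such that y_hat = X @ b.
    """
    Xr, yr = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(k):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        tt = t @ t
        p = Xr.T @ t / tt
        q = (yr @ t) / tt
        Xr -= np.outer(t, p)
        yr -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

def loo_press(X, y, k):
    """Leave-one-out predicted residual sum of squares for a k-component PLS model."""
    n = len(y)
    press = 0.0
    for i in range(n):
        m = np.ones(n, bool); m[i] = False
        Xm, ym = X[m], y[m]
        xbar, ybar = Xm.mean(0), ym.mean()
        b = pls1_coef(Xm - xbar, ym - ybar, k)
        press += (y[i] - (ybar + (X[i] - xbar) @ b)) ** 2
    return press

rng = np.random.default_rng(3)
n, p = 40, 5
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5])          # exact linear relation, no noise
scores = [loo_press(X, y, k) for k in range(1, p + 1)]
print(int(np.argmin(scores)) + 1)                      # number of components minimizing PRESS
```

On noisy real data one would pick the smallest k at which PRESS stops improving, which is the rule described in the text.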
Table 3. Correlation matrix of the sentiment proxies (lower triangle; ***, **, and * represent significance at the 1%, 5%, and 10% levels, respectively):

            sLPM(0)    sCAI(0)    sHPBI(0)   sHPEI(0)   sNIPO(+6)
sLPM(0)     1.0000
sCAI(0)     0.8320***  1.0000
sHPBI(0)    0.8927***  0.8694***  1.0000
sHPEI(0)    0.8786***  0.8922**   0.9144***  1.0000
sNIPO(+6)   0.4894***  0.4351***  0.4133***  0.5790***  1.0000

Notes. The first row of data in the original table gives the factor loadings of the 5 proxy indicators in the sentiment composite index; the second row gives the correlation coefficients between the composite sentiment index and each proxy; rows 3-7 give the correlation coefficients among the proxy indicators.

Among them, we select the first two principal components in the "bull market" periods (the cumulative contribution rate for the investor sentiment proxy variables is 93.90%, and that for the closing price of the CSI Free Float is 97.34%) and the first two principal components in the "bear market" periods (the cumulative contribution rate for the investor sentiment proxy variables is 96.73%, and that for the closing price of the CSI Free Float is 98.86%), and we then construct the investor sentiment composite indexes as follows:

SENT_PLS,bull = 0.3320 × sLPM + 0.2319 × sCAI + · · · (10)

Combining the statistical results of Table 4, we compare the comprehensive investor sentiment indexes over three periods: the "bull market" period (Eq. (10)), the "bear market" period (Eq. (11)), and the whole sample period (Eq. (9)). We find little difference between (10), (11), and (9) in the size and sign of the factor loadings of the comprehensive sentiment index. This shows that changes in market conditions do not affect the contributions of the original sentiment proxy variables when the investor sentiment index is constructed.
This means that the comprehensive investor sentiment index constructed in the "bull market" and "bear market" periods is robust, with little difference from the factor composition of the full-sample index.

Interpretive Power to the Closing Price of CSI Free Float

In general, the more optimistic investor sentiment is, the higher the closing price of CSI Free Float will be, and vice versa. In other words, the level of investor sentiment should, in theory, track the market's fluctuations. We select the sample data of the test set (June 6, 2014-May 29, 2015) and, after the same pretreatment as applied to the training set, examine the interpretive power of the PLS-based comprehensive investor sentiment index for the closing price of CSI Free Float. First, we draw the time series comparison chart of the investor sentiment composite index and the closing price of CSI Free Float, shown in Figure 2. Judging from the trend comparison, the interpretive power of the PLS-based investor sentiment index for the closing price of CSI Free Float is relatively good. To make the conclusion more convincing, we treat the investor sentiment composite index as the predictor variable and the closing price of CSI Free Float as the response variable. We then run a linear regression and use the R² value of the regression model to represent the explanatory power of the sentiment composite index for the closing price of CSI Free Float; at the same time, we combine this with the AIC information criterion to select the optimal sentiment index. The final result is as follows: the R² of the regression of the closing price of CSI Free Float on SENT_PLS is 0.9964, and the AIC is −274.74. This shows that the fit is very good; the PLS-based investor sentiment index has a strong ability to interpret the stock market index.
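The model comparison step can be sketched as follows (our own minimal implementation; we use the Gaussian-likelihood form AIC = n ln(RSS/n) + 2k, so absolute AIC values depend on the convention and need not match the −274.74 above):

```python
import numpy as np

def r2_and_aic(x, y):
    """OLS of y on x (one predictor plus intercept, k = 2 parameters);
    returns R^2 and AIC = n*ln(RSS/n) + 2k (up to an additive constant)."""
    n = len(y)
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    b0 = y.mean() - b1 * x.mean()
    resid = y - (b0 + b1 * x)
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - rss / tss
    aic = n * np.log(rss / n) + 2 * 2
    return r2, aic

# compare two candidate sentiment indices against a synthetic closing price
rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=250))
good_index = price + 0.1 * rng.normal(size=250)   # tracks the price closely
poor_index = rng.normal(size=250)                 # unrelated noise
r2_good, aic_good = r2_and_aic(good_index, price)
r2_poor, aic_poor = r2_and_aic(poor_index, price)
```

The better index has the higher R² and the lower AIC, which is the selection rule used in the text.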
Conclusion

Investor sentiment measurement has long been one of the challenging problems in behavioral finance. Although principal component analysis (PCA) can extract the maximum amount of nonredundant information from the variables, it also has drawbacks: the principal component factors synthesized from the proxy indicators may still contain a large amount of biased information unrelated to the real sentiment of investors, reducing the accuracy of the model. To address this defect of principal component analysis, this paper uses partial least squares (PLS) to rebuild the investor sentiment composite index for the Chinese stock market and analyzes its robustness and its explanatory power for the closing price of CSI Free Float. It turns out that the PLS-based investor sentiment composite index is in better agreement with actual market conditions. What is more, it has strong predictive power for the stock market.
Reduced Fine-Tuning in Supersymmetry with R-parity violation

Both electroweak precision measurements and simple supersymmetric extensions of the standard model prefer a mass of the Higgs boson less than the experimental lower limit of 114 GeV. We show that supersymmetric models with R parity violation and baryon number violation have a significant range of parameter space in which the Higgs dominantly decays to six jets. These decays are much more weakly constrained by current LEP analyses and would allow for a Higgs mass near that of the $Z$. In general, lighter scalar quark and other superpartner masses are allowed and the fine-tuning typically required to generate the measured scale of electroweak symmetry breaking is ameliorated. The Higgs would potentially be discovered at hadron colliders via the appearance of new displaced vertices. The lightest neutralino could be discovered by a scan of vertex-less events in LEP I data.

The Standard Model of particle physics is arguably the crowning achievement of the last half-century's work towards the understanding of the laws of nature at short distances. However, two somewhat nagging features remain. The first is that while statistical fits of standard model parameters to precision measurements produce a best fit value for the Higgs scalar mass of 85^{+39}_{-28} GeV [1], LEP II places a lower bound of 114.4 GeV at 95% CL [2]. While these constraints taken together do not constitute a discrepancy, a Higgs mass measured below the current LEP bound would have improved the fit to precision data. In addition, it has been argued that the electroweak observables most sensitive to the Higgs mass are themselves not in good agreement and imply a discrepancy with the LEP II bound [3]. The second feature is that the scale of electroweak symmetry breaking is very sensitive to quantum corrections.
If the standard model is valid up to some very high energy scale M ≫ 1 TeV, the parameters of the ultraviolet theory would require an unnatural tuning of order one part in (M/1 TeV)² to maintain the hierarchy. While this fact alone does not guarantee new physics beyond a Higgs boson at the electroweak scale, it is strongly suggestive of physics at the weak scale which stabilizes scalar masses with respect to radiative corrections. A well-known solution to the naturalness problem is to impose supersymmetry on the standard model and softly break it at a scale of M ∼ 1 TeV (for a review, see [4]). Radiative corrections to scalar masses in these theories are proportional to the scale of supersymmetry breaking and therefore naturally stabilize the mass of the Higgs at around the weak scale. Remarkably, the minimal version of these theories predicts the unification of couplings at a renormalization scale near the Planck scale. A discrete symmetry, R parity, is introduced to forbid dimension-four baryon and lepton number violating operators and avoid proton decay, as we discuss below. While the MSSM (Minimal Supersymmetric Standard Model) contains over a hundred new parameters, it has become tightly constrained. A robust constraint on the MSSM is the bound on the Higgs mass. The physical mass gets contributions which depend only logarithmically on superpartner masses (for example, the scalar top quark mass) through corrections to the Higgs quartic interaction. On the other hand, the Z boson mass and the scale of electroweak symmetry breaking get corrections proportional to superpartner masses. To satisfy the current bound on the physical mass, large scalar top masses (mt ≃ 1 TeV) are required. For a large cutoff Λ, say of order the Planck scale, contributions to the (squared) Z mass will be roughly of order the superpartner masses squared, say δm²_Z ∼ mt². A cancellation would then be required among contributions, with a tuning of order one part in (mt/m_Z)².
Thus, 1 TeV scalar tops would require ∼ 1% tuning. A beautiful discussion of this tension in the MSSM is contained in [5]. One possible resolution to the paradox is that the Higgs is in fact light but was missed by experiments. The quoted lower bound on the Higgs mass comes from analyses assuming a Higgs with standard model properties, such as a standard model cross section for Z-Higgs production and standard model branching ratios into bottom quarks and tau leptons. If the branching ratios to standard model final states are uniformly suppressed by, for example, a factor of five (and the new decay modes are not picked up by any LEP searches), the 95% CL lower limit on the Higgs mass reduces to roughly 93-95 GeV (see Figure 2 of [2]). Our model exploits this weakness. Other attempts to modify Higgs decays for the purpose of naturalness have been made in the context of the next-to-minimal supersymmetric standard model [6], and in a general analysis of the MSSM with an additional singlet superfield [? ]. In this letter, we show that in the MSSM with R parity violation and non-unified gaugino masses, there is a significant amount of parameter space in which the Higgs dominantly decays to a pair of unstable neutralinos, each of which subsequently decays to three quark jets. The parameter space allows, as we detail below, Higgs masses around the Z mass even with a standard model production cross section; this is our main result. R parity is a symmetry under which all superpartners are odd. It forbids the following renormalizable operators in the superpotential:

W = λ_ijk L_i L_j E^c_k + λ'_ijk L_i Q_j D^c_k + μ_i L_i H
  + λ''_ijk U^c_i D^c_j D^c_k,

where L, E^c, H, Q, U^c, and D^c are lepton doublet, lepton singlet, up-type Higgs, quark doublet, up-type quark singlet, and down-type quark singlet superfields respectively, and i, j, k are flavor indices. These interactions violate lepton number (the first line) and baryon number (the second). Their existence would predict unacceptable levels of proton decay unless at least some of these couplings are extremely small.
However, proton stability could be provided by a symmetry that allows only the lepton-number or baryon-number violating terms [7]. In this paper, we focus on the latter. Bounds on the individual λ'' couplings are stringent only from neutron-antineutron oscillations and double nucleon decay, requiring λ''_112 ≲ 10⁻⁷ and λ''_113 ≲ 10⁻⁴ for 200 GeV scalar quark and gluino masses. The other seven couplings are less constrained. The tightest bounds are on products of two different couplings, which range over λ''_ijk λ''_i'j'k' < 10⁻²-10⁻⁴ and come dominantly from limits on rare hadronic decays of B mesons. For a broad review of R parity violation in supersymmetry, see [8]. Do LEP searches put a bound on a Higgs that decays to 6 quarks (via two neutralinos)? No analysis has been performed looking for this exclusive final state. A decay-mode-independent search for a Higgs boson was performed by the OPAL experiment (by looking for the associated Z in leptonic channels [9]) and puts a lower bound of 82 GeV when the production cross section is equal to that of the standard model. In addition, the search for a Higgs decaying to two jets of any flavor [10] could be sensitive to our Higgs decaying to six jets when the latter can be forced into a two-jet topology (there may also be sensitivity from h → 2b when each neutralino decay contains a b quark). An analysis of this type was done by DELPHI [11] in the search for a cascade decay of the Higgs to four b quarks via two pseudoscalars, a [12]. They modified the search for e⁺e⁻ → hZ → (bb)Z to be sensitive to the cascade h → aa → bbbb by forcing the latter into two jets and estimating the efficiency of the h → 2b search to pick up h → 4b.
Comparing Tables 27 and 29 in [11], one can see that DELPHI rules out a 100 GeV Higgs decaying exclusively to bb with roughly a quarter of the standard model cross section, while it requires 80% of the cross section to rule out the same-mass Higgs decaying to 4b's (about three times the signal events). In our case, the Higgs decays to six jets, so the jets will be softer and more numerous and thus harder to reconstruct into two jets. To see the effect of softer jets, the same tables show that ruling out a 65 GeV Higgs decaying to 4b's vs. 2b's requires 7-8 times more signal events, and this is even with pseudoscalars as light as 12 GeV. In addition, part of the DELPHI analysis takes advantage of the additional b quarks in the final state, whereas our final state will have at most 2b's. Assuming the loss of the extra b quarks in the final state cuts the efficiency in half, we estimate that our soft jets with 2b's in the final state would be picked up by the 2b search with an efficiency of ∼ 1/7 × 1/2 ∼ 7%. This would also be true of the flavorless search [10], which is similar to the 2b search with the b-tagging requirement removed. While we feel this estimate is conservative, it is very rough and a full analysis is warranted. What are the constraints on the mass of the lightest neutralino? The neutralino should be light enough to allow for our Higgs decay (≲ 50 GeV), while the lightest chargino must satisfy its current lower bound (∼ 103 GeV in most of parameter space, even for R-parity violating decays [13]). This constrains the MSSM parameters such that M1 < M2, µ, and the lightest neutralino is mostly bino, although it must have enough of a higgsino component for the Higgs width to be dominated by this decay, and thus the µ parameter shouldn't be too much larger than 100 GeV, as we see below (see also [14]).
The remaining question then is how such a light neutralino, with couplings strong enough to dominate the Higgs width, could avoid being detected indirectly through its effect on the Z width or directly in searches at LEP II. There are two reasons: the first is that the width of the Z in the standard model (∼ 2.5 GeV) is three orders of magnitude bigger than the standard model width of a 100 GeV Higgs. The second is that in the range of small to moderate bino-higgsino mixing, the Higgs decay rate into neutralinos is roughly proportional to the mixing angle squared while the same rate for the Z goes like the mixing angle to the fourth power. The decay width of the Z into the lightest neutralino at tree level is

Γ(Z → χ₁χ₁) = Γ_ν ∆² (1 − 4m²_χ₁/m²_Z)^{3/2},

where ∆, defined in the appendix, is roughly the bino-higgsino mixing angle squared, and Γ_ν is the standard model Z width into one species of neutrino. We require this contribution to the total and hadronic widths to be less than 0.1%, roughly 1σ as determined by the electroweak fit [1]. This requirement sets a bound of ∆ ≲ 1/10 for a very light neutralino, and a weaker one for heavier neutralinos as phase space gets reduced. We find that in most of our parameter space (where the decay to neutralinos dominates the Higgs width and the chargino bound is satisfied) we satisfy this constraint. Searches for neutralinos which decay via baryon number violation have been performed by ALEPH, DELPHI and L3 [13]. None of the three searches was able to put a bound on the neutralino mass via a direct search, but only through a search for a chargino and the theoretical assumption M1 = (5/3) tan²θ_W M2 ∼ M2/2, which relates the two masses through gaugino mass unification. The L3 experiment does present cross-section bounds of around 0.1 pb for neutralino masses between 30 GeV and roughly 100 GeV. The neutralino cross section through an s-channel Z at LEP II is

σ(e⁺e⁻ → χ₁χ₁) = σ_νν ∆² (1 − 4m²_χ₁/s)^{3/2},

where σ_νν, the neutrino pair-production cross section, is ∼ 1 pb at center-of-mass energy √s = 200 GeV.
To satisfy the L3 bound, we require ∆ < 1/3. This bound is satisfied in our entire parameter space. However, if scalar leptons are relatively light, a t-channel diagram can dominate the cross section and overwhelm the bound. Requiring the cross section to satisfy the L3 constraint places a lower bound of ∼ 300 GeV on scalar electron masses (in the case of degenerate scalars). This becomes our strongest constraint on a superpartner mass in the baryon-number violating MSSM. The lightest neutralino, due to the weakness of its couplings and the nature of its decays, has no significant collider bound within our parameter space. The points in parameter space which predict a large Higgs-to-neutralinos branching ratio and satisfy the lower bound on the chargino mass satisfy M2 > 3M1 [14], and the effect on the branching ratio becomes unimportant above M2 > 250 GeV. In Figure 1 we show a plot of different branching ratios of the Higgs to neutralinos for M2, M1, and the Higgs mass fixed at 250, 50, and 100 GeV respectively. Each point also satisfies the constraint on the contribution to the hadronic Z width. We also require m_χ0 > 12 GeV to allow for significant phase space for the decay. We see that a large branching ratio requires relatively low values of tan β and µ. In Figure 2 we scan over M1, M2, µ, and tan β and plot points which satisfy the chargino mass and Z width bounds. For these points, the branching ratio to normal standard model decays is less than 25%, thus lowering the Higgs mass bound in this part of parameter space to roughly 95-100 GeV according to Figure 2 of [2]. The scans are done in the decoupling limit (i.e., the pseudoscalar mass is fixed at 1 TeV), where the heavier CP-even Higgs boson is much more massive and thus all couplings of this lightest Higgs are standard model like. Away from this limit, the decay width to standard model channels increases while the overall production cross section goes down.
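The two constraints on ∆ quoted above can be checked numerically. The sketch below is ours: it assumes the simple forms Γ(Z → χ₁χ₁) ≈ Γ_ν ∆² β³ and σ(e⁺e⁻ → χ₁χ₁) ≈ σ_νν ∆² β³, with β the neutralino velocity, and uses standard numerical inputs rather than values from the paper:

```python
import numpy as np

# Standard inputs (our values): total and per-neutrino Z widths in GeV,
# the Z mass, and an illustrative light neutralino mass.
Gamma_Z, Gamma_nu, m_Z = 2.495, 0.167, 91.19
m_chi = 12.0                     # GeV, near the lower mass cut in the text

# Z-width constraint: Gamma_nu * Delta^2 * beta^3 < 0.1% of Gamma_Z
beta3 = (1.0 - 4.0 * m_chi**2 / m_Z**2) ** 1.5
delta_max_width = np.sqrt(1e-3 * Gamma_Z / (Gamma_nu * beta3))

# LEP II cross-section constraint: sigma_nunu * Delta^2 * beta^3 < 0.1 pb
sqrt_s, sigma_nunu, sigma_limit = 200.0, 1.0, 0.1    # GeV, pb, pb
beta3_lep = (1.0 - 4.0 * m_chi**2 / sqrt_s**2) ** 1.5
delta_max_xsec = np.sqrt(sigma_limit / (sigma_nunu * beta3_lep))

print(delta_max_width, delta_max_xsec)   # roughly 0.13 and 0.32
```

Both numbers reproduce the ∆ ≲ 1/10 and ∆ < 1/3 bounds quoted in the text.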
Moderate mixing with the heavier Higgs does not significantly change the qualitative features of these plots. The decay length of the neutralino can be long enough to leave a displaced vertex. The average decay length of the lightest neutralino is [15] L ≃ 384π² cos²θ_w …, where |U₂₁| is an element of the mixing matrix in the appendix and p_χ is the neutralino's momentum. Final-state particle masses, Yukawa couplings and QCD corrections have all been neglected. For very small couplings, light neutralinos, or heavy scalar quarks, the decay length could be quite long: such neutralinos might have been seen as anomalous events at LEP if they decay in the tracking chamber, and perhaps by searches for stable squarks and gluinos [16] if they decay in or near the hadronic calorimeter. If their decay length is longer than about a meter, the invisible Higgs search would pick up these events and rule out masses up to 114 GeV [17]. What does baryon number violation do for SUSY parameter space? In general, bounds on superpartner masses are weaker than in the R-parity conserving MSSM due to the lack of missing energy in the signal. The current bounds of 200-300 GeV on squarks and gluinos from Tevatron searches came from analyses which required significant missing transverse energy cuts [18]. With baryon-number and R-parity violating interactions, the bounds on all superpartners are below 100 GeV, except for the chargino, whose bound remains roughly the same (102.5 GeV) [19]. The bounds on the lightest neutralino quoted in the particle data book are due to chargino searches and the requirement of gaugino mass unification. The direct search at LEP for a decaying lightest neutralino is unable to put bounds on its mass. Of course, another impact of this model is that it allows a lighter Higgs mass, reducing the need for large radiative corrections to the quartic potential from the stop loop.
For the same value of tan β = 3, the allowed lighter Higgs mass (say around 96 GeV) requires an enhancement of the quartic coupling only half as large as in the MSSM with R-parity conservation. If instead we compare allowed MSSM Higgs masses at large tan β to our model's allowed Higgs masses at tan β = 3 (since we require low values for our decay to dominate), we still typically require a quartic enhancement lower by roughly 10-30%. This translates into lower required stop masses and less tuning. However, while R parity violation and non-unified gaugino masses help to relieve much of the persistent fine-tuning in the MSSM, they clearly do not eliminate it [20]. Among the strongest constraints are the chargino mass bound and the restrictions on contributions to b → sγ. In addition, avoiding the Higgs mass bound requires one to be in a non-generic part of parameter space in which the Higgs decays to neutralinos. R parity violation can allow for other non-standard Higgs decays which evade LEP searches. For example, one linear combination of the scalar bottom quarks can perhaps be as light as 7.5 GeV [21] due to suppressed couplings to the Z. With baryon number violation, and sbottom masses below half the Higgs mass, this would allow the Higgs to decay to four light jets, and this decay would dominate standard Higgs decays at moderate to large tan β [22]. On the other hand, lepton number violation through, for example, the superpotential operator λ'_i33 L_i Q_3 D^c_3 could produce a dominant Higgs decay of h → 4b + missing energy, to which the standard 2b and 4b searches should have significantly reduced sensitivity. These and other lepton-number violating decays are being explored [23]. If the above scenario is correct, searching for the Higgs at hadron colliders could pose great difficulty.
However, if the neutralinos decay at a displaced vertex with a decay length greater than about 50 microns, these events could potentially be picked up by a dedicated search at the Tevatron, LHC, or LHCb [24,25]. The vertex tagging at LHCb would be well suited for this search, and the statistics are high enough: roughly 30% of the Higgs bosons produced via gluon fusion are expected to fall in the detector's acceptance range [26]. In addition, half of these decays would be baryon violating (assuming the lightest neutralino is a Majorana particle) and this could potentially be a striking signal. Finally, the small but non-zero coupling of long-lived neutralinos to the Z may allow them to be discovered by studying the beam gas ("vertex-less") events in LEP I data [27].

Appendix: The neutralino mass matrix,

M_N = ( M2                 0                  m_Z cos β cos θ_W   −m_Z sin β cos θ_W )
      ( 0                  M1                −m_Z cos β sin θ_W    m_Z sin β sin θ_W )
      ( m_Z cos β cos θ_W  −m_Z cos β sin θ_W   0                 −µ                 )
      ( −m_Z sin β cos θ_W  m_Z sin β sin θ_W  −µ                  0                 ),

is diagonalized from the gauge basis to the mass basis by the orthogonal matrix U. The eigenvalues are in ascending order in magnitude, and thus m_χ1 is the mass of the lightest neutralino. In the gauge basis, the mass matrix above multiplies the vector {W̃, B̃, H̃_d, H̃_u} corresponding to the wino, bino, and down- and up-type higgsinos. The bino-higgsino mixing can be characterized by a parameter ∆ (used in the text) defined as

∆ = |U₁₃|² − |U₁₄|². (6)
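The diagonalization in the appendix can be carried out numerically. The sketch below is our own: it uses the standard MSSM neutralino mass matrix in the {wino, bino, down-type higgsino, up-type higgsino} basis (sign conventions vary between references), with illustrative parameter values taken from the ranges discussed in the text:

```python
import numpy as np

# Illustrative parameters (GeV): M2 = 250, M1 = 50, a moderate mu,
# and tan(beta) = 3, as in the ranges discussed in the text.
mZ, sw2 = 91.19, 0.231
sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
M1, M2, mu, tanb = 50.0, 250.0, 150.0, 3.0
sb, cb = np.sin(np.arctan(tanb)), np.cos(np.arctan(tanb))

# Standard MSSM neutralino mass matrix in the {wino, bino, H_d, H_u} basis
M_N = np.array([
    [M2,            0.0,           mZ * cb * cw, -mZ * sb * cw],
    [0.0,           M1,           -mZ * cb * sw,  mZ * sb * sw],
    [mZ * cb * cw, -mZ * cb * sw,  0.0,          -mu],
    [-mZ * sb * cw, mZ * sb * sw, -mu,            0.0],
])

vals, vecs = np.linalg.eigh(M_N)     # real symmetric -> orthogonal U
order = np.argsort(np.abs(vals))     # ascending in |mass|, as in the text
U = vecs[:, order].T                 # rows are the mass eigenstates
m_chi1 = abs(vals[order[0]])         # lightest (mostly bino) neutralino
Delta = U[0, 2] ** 2 - U[0, 3] ** 2  # Eq. (6): |U_13|^2 - |U_14|^2
print(m_chi1, Delta)
```

For these inputs the lightest state comes out mostly bino, with a mass somewhat below M1 and |∆| of order 0.1 or smaller.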
Effect of short-ranged spatial correlations on the Anderson localization of phonons in mass-disordered systems

We investigate the effect of spatially correlated disorder on the Anderson transition of phonons in three dimensions in mass-disordered systems, using a Green's-function-based approach, namely, the typical medium dynamical cluster approximation (TMDCA). We numerically demonstrate that correlated disorder with pairwise correlations mitigates the localization of the vibrational modes. A correlation-driven localization-delocalization transition can emerge in a three-dimensional disordered system with an increase in the strength of correlations.

Introduction

Anderson introduced an ideal theoretical model containing the essential ingredients for studying the nature of one-electron states in disordered systems [1]. The model assumed noninteracting electrons moving through a lattice, allowed to hop only to nearest-neighbor sites. Disorder was introduced in the local orbital energies, which were independent quenched random variables distributed according to some specified probability distribution. Anderson predicted that the wave function may become exponentially localized, with a characteristic localization length depending on the strength of disorder. Scaling theory [2] bolstered Anderson's idea of localization [1] by considering non-interacting electron systems with uncorrelated disorder. It found that all one-electron states are exponentially localized in one and two dimensions even for an infinitesimal amount of disorder, with a true metal-insulator transition occurring only in three dimensions (3D), where the single-particle states may survive as extended states for weak disorder. A series of analytical, numerical and experimental results are in strong agreement with the one-parameter scaling theory of localization. However, the characteristics of the disorder potential can have a strong impact on Anderson localization.
In particular, spatial correlations in the disorder can markedly change the conventional physics of Anderson localization. Such correlated disorder is relevant to transport properties of binary solids, DNA [3,4], graphene [5], quantum Hall wires [6], topological insulators [7] and so on. Recently, there has been growing interest in understanding the effect of spatial correlations on Anderson localization due to tremendous experimental progress. Clement et al. [8] developed an experimental technique for creating correlated disorder through the laser speckle method. In this method, one can accurately control the spatial correlation length. A spatial-correlation-induced localization-delocalization transition has been experimentally observed in GaAs-AlGaAs superlattices [9]. Very recently, a transition between algebraic localization and delocalization in a 1D disordered potential with a bias has been reported [10]. Such experimental observations call for an in-depth theoretical analysis of the effect of short-range correlations on Anderson localization. We describe briefly the theoretical investigations that have incorporated short-range as well as long-range spatial correlations in the diagonal as well as off-diagonal disorder. A series of one-dimensional versions of the Anderson model has been used to demonstrate a breakdown of Anderson localization driven by spatial correlations in the disorder distribution [11,12,13,14,15]. Also, effort has been made to demonstrate the strong effect of off-diagonal correlated disorder on Anderson localization. For example, a number of studies have employed correlated off-diagonal interactions and found delocalized states [16,17,18]. Besides short-range correlations, several investigations have been performed considering long-range correlations in the disorder distribution.

* Author for correspondence (raja@jncasr.ac.in)
Carpena et al. [19] find a long-range-correlation-induced metal-insulator transition using a one-dimensional tight-binding model. Francisco et al. [20] obtain an Anderson-like metal-insulator transition by studying a one-dimensional tight-binding model with long-range correlated disorder. All these studies suggest that localization properties are greatly renormalized when some kind of spatial correlation is introduced in the disorder distribution. However, most of the studies are limited to electronic problems, and Anderson localization of phonons in the presence of spatially correlated disorder has received scant attention, both theoretically and experimentally. Being a general wave phenomenon, Anderson localization is ubiquitous. Sajeev John et al. [21], using field theoretic techniques, investigated phonon localization in the presence of a long-range correlated random potential. However, methods like exact diagonalization (ED), the transfer matrix method (TMM), multifractal analysis, diagrammatic techniques, and the itinerant coherent-potential approximation (ICPA) have not been employed for studying phonon localization in the presence of correlated disorder. Most of the mentioned methods have been confined to simple models of lattice vibrations, where the diagonal matrix elements M(l) of the Hamiltonian are independent random variables. In our previous study [22], we provided a detailed description of a typical medium dynamical cluster approximation (TMDCA) that yields a proper description of the Anderson localization transition in 3D. It adopts the typical density of states (TDOS) as a single-particle order parameter for the Anderson localization transition (ALT), which makes it computationally less expensive than other numerical methods like ED and TMM. It satisfies all the essential requirements expected of a successful quantum cluster theory.
We have also been able to extend the formalism for studying Anderson localization of phonons in the presence of both diagonal and off-diagonal disorder [22,23]. In this work, we investigate the nature of the Anderson transition for phonons in the presence of spatially correlated disorder in 3D. This paper is organized as follows. In section II, we give a brief description of the model and method used in this work. In section III, we present results and discussions. We conclude our work in section IV.

Method

As before [22], we consider the following Hamiltonian for the ionic degrees of freedom of a disordered lattice within the harmonic approximation, in the momentum (p) and displacement (u) basis:

H = Σ_l p(l)²/2M(l) + (1/2) Σ_{l,l'} Φ(l,l') u(l) u(l'), (1)

where the symbols have their usual meaning as described in Ref. [22]. In this work, we again restrict ourselves to the single branch (α) and single basis atom (i = 1) case; hence we drop the indices α, β, i, j. The unit cell index (l) is retained. The spatial dependence of the ionic masses M(l) is incorporated through a local disorder potential V(l) as

M(l) = M [1 − V(l)]. (2)

In the previous work [22], we had considered a uniform box distribution, where the quantity 1 − M(l)/M ∈ [−V, V] can take any value in that interval with equal probability, and 0 ≤ V ≤ 1 is the disorder strength. The random V's from site to site were taken to be uncorrelated with each other. As mentioned in the introduction, the objective of this work is to investigate the effect of short-range correlations in the mass disorder. We begin with nearest-neighbour correlations. We first distribute masses randomly on the odd-indexed sites and on the even-indexed sites, exactly as was done previously, according to a uniform distribution with the same mean and variance. The disorder potential at the odd-indexed sites is denoted V₁ and that at the even-indexed sites is denoted V₂. Therefore, the following initial correlations hold:

⟨V₁⟩ = ⟨V₂⟩ = 0, ⟨V₁²⟩ = ⟨V₂²⟩ = σ², ⟨V₁V₂⟩ = 0. (4)

Now, since V₁ and V₂ are independent, ρ_{V₁V₂} = 0.
From these two uncorrelated random sequences, we want to generate correlations between consecutive sites of the odd and even sequences with a specified correlation coefficient ρ. The resulting new sequences for the odd- and even-indexed sites, denoted V_odd and V_even, should be correlated pairwise, so that sites 2n + 1 and 2n are correlated:

⟨V_odd V_even⟩ = ρ σ², (5)

where σ² is the variance. Let us construct V_odd and V_even using linear combinations of V₁ and V₂ as

V_odd = a V₁ + b V₂, V_even = c V₁ + d V₂, (6)

where the unknown coefficients a, b, c and d will be chosen so that the odd and even sequences become correlated with each other. Using Eq. (4), the covariance of the new sequences is

⟨V_odd V_even⟩ = (ac + bd) σ²,

and requiring that they retain the variance σ² imposes the normalization condition

a² + b² = 1, c² + d² = 1. (14)

A transformation that satisfies this condition and yields the desired correlations can be chosen as

a = d = cos φ, b = c = sin φ, (15)

so that ac + bd = 2 cos φ sin φ = sin 2φ. Thus the random sequences V_odd and V_even are correlated, with ρ_{V_odd V_even} equal to sin 2φ, where

φ = (1/2) sin⁻¹(ρ). (17)

We can verify that this method does induce correlations between the even and the odd sequences. For vanishing correlation, i.e., for ρ_{V_odd V_even} → 0, Eq. (17) gives φ → 0 as well. This implies, from Eqs. (6) and (15), that a, d → 1 and b, c → 0, hence V_odd = V₁ and V_even = V₂. Since V₁ and V₂ are anyway uncorrelated, the new sequences in this limit are also uncorrelated. In the other extreme, ρ_{V_odd V_even} → 1, we get φ → π/4, which implies a, b, c, d → 1/√2, and hence V_odd ≃ V_even ≃ (V₁ + V₂)/√2. Thus, in this limit, V_odd and V_even become almost equal and are hence fully correlated. We illustrate this in Fig. 1, where for four different correlation coefficients, ρ_{V_odd V_even} = 0.2, 0.5, 0.8 and 0.99, the difference of the two sequences, V_odd − V_even, is plotted as a function of the site index. It is seen that for small correlation coefficients the difference is large, and hence the odd and even sequences are uncorrelated.
While for large correlation coefficients (≳ 0.9), the difference is very small, and hence the two sequences are strongly correlated. An algorithm that implements the described formalism for creating the correlated disorder potential is stated below:

1. Start by creating the local disorder potential V_l, which we initially consider as spatially independent random variables distributed according to the uniform (box) distribution

P(V_l) = 1/(2V) for |V_l| ≤ V, and 0 otherwise,

where V_l is the disorder potential defined in Eq. (2) and V is the width of the distribution, which corresponds to the disorder strength.

2. Identify the V_l at lattice sites l labeled by even or odd numbers. We define V₁(l) as the disorder potential at the odd-indexed lattice sites and V₂(l) as the disorder potential at the even-indexed lattice sites.

3. Set the correlation strength parameter ρ, which can be varied from 0 to 1. For a given value of ρ, calculate φ using Eq. (17).

4. Calculate the coefficients a, b, c, d using Eq. (15); the normalization is maintained by imposing the condition given in Eq. (14).

5. Introduce the spatial correlations among V_odd and V_even, depending on the strength ρ, according to the relation given in Eq. (6).

The rest of the algorithm is the same as described in our previous publication [22].

Results and discussion

As we have already discussed, a true delocalization-localization transition occurs in 3D depending on the strength of disorder (V). We investigate this Anderson transition of phonons using the TMDCA in the presence of short-range order. In our previous study [22], we established that the TDOS is a valid order parameter for studying phonon localization. So, we first observe the evolution of the TDOS with increasing disorder strength V for correlation strength ρ = 0 (uncorrelated) and ρ = 0.99. It is displayed in Fig. 2.
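The construction above (uniform box sequences rotated by φ = ½ sin⁻¹ρ with a = d = cos φ, b = c = sin φ) can be sketched numerically; the function and variable names below are ours:

```python
import numpy as np

def correlated_disorder(n_pairs, V, rho, rng):
    """Pairwise-correlated disorder for odd/even sites: draw two
    independent box-distributed sequences on [-V, V], then rotate with
    a = d = cos(phi), b = c = sin(phi), phi = arcsin(rho)/2, which keeps
    the variance and gives <V_odd V_even>/sigma^2 = sin(2*phi) = rho."""
    V1 = rng.uniform(-V, V, n_pairs)   # odd-indexed sites
    V2 = rng.uniform(-V, V, n_pairs)   # even-indexed sites
    phi = 0.5 * np.arcsin(rho)
    a, b = np.cos(phi), np.sin(phi)
    return a * V1 + b * V2, b * V1 + a * V2

rng = np.random.default_rng(0)
V_odd, V_even = correlated_disorder(200_000, V=0.9, rho=0.8, rng=rng)
corr = np.corrcoef(V_odd, V_even)[0, 1]
print(round(corr, 2))   # 0.8
```

With ρ = 0.8 and 2×10⁵ site pairs the sample correlation reproduces ρ to within ∼10⁻³, and the variance of each rotated sequence remains σ² = V²/3.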
As may be expected, the TDOS for ρ = 0.99 is almost the same as the TDOS for ρ = 0 for low disorder (V ≤ 0.3). But for V > 0.3, the TDOS for ρ = 0.99 starts to deviate strongly from the TDOS for the uncorrelated disorder, and by V = 0.9 the two differ significantly. We have already shown that the vanishing of the TDOS implies the localization of vibrational modes [22], and we reproduce such behavior here for ρ = 0: the overall TDOS for ρ = 0 decreases with increasing V, which indicates that the vibrational modes get localized as disorder increases. This kind of disorder-induced delocalization-localization transition is prevented by the introduction of spatial correlations in the system, as can easily be observed by directly comparing the TDOS for ρ = 0 with that for ρ = 0.99. The mobility edges, marked by the arrows, represent the energy scale demarcating the extended states from the localized states. From Fig. 2, it is clear that the mobility edge shifts to higher energies with increasing correlation strength, implying that the latter induces delocalization of the hitherto localized states. An alternative measure of the proximity to the Anderson localization transition is the total spectral weight of the TDOS. Its variation is shown in Fig. 3, which clearly shows that the total spectral weight of the TDOS for ρ = 0.99 decreases at a much slower rate than for the uncorrelated disorder (ρ = 0). Such behavior indicates that spatial correlations impede the localization of vibrational modes. Another perspective on the spatial correlations is obtained through an investigation of the mobility edges, which can be extracted from the TDOS presented in Fig. 2. A mobility edge is defined as the energy which separates localized and extended states [24]; it has been measured for 3D Anderson localization [25].
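Reading a mobility edge off a computed TDOS amounts to locating the frequency beyond which the TDOS vanishes. The following is a schematic sketch only; the toy TDOS profile and the numerical threshold eps are our assumptions, not the paper's data or code:

```python
import numpy as np

def upper_mobility_edge(omega, tdos, eps=1e-3):
    """Return the highest frequency at which the TDOS is still finite
    (above a small numerical threshold eps); states beyond it are localized."""
    finite = np.flatnonzero(tdos > eps)
    return omega[finite[-1]] if finite.size else None

# Toy TDOS that vanishes above omega = 2.5 (all states localized there).
omega = np.linspace(0.0, 4.0, 401)
tdos = np.where(omega < 2.5, 1.0 - (omega / 2.5) ** 2, 0.0)
print(upper_mobility_edge(omega, tdos))  # close to 2.5
```

On real TMDCA output, the threshold would be set by the numerical noise floor of the TDOS rather than a fixed constant.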
The effects of spatial correlations on the mobility edges for Anderson localization of electrons have been studied extensively. However, to the best of our knowledge, they have not yet been reported for the Anderson localization of phonons with correlated disorder. We define the mobility edge by the boundary of the TDOS and denote it by the arrows indicated in Fig. 2. In Fig. 4, we show the mobility edges calculated using the TMDCA with N_c = 64 for mass disorder. The phase diagram shows that spatially correlated diagonal disorder delocalizes the vibrational modes that uncorrelated diagonal disorder localizes. In the phase diagram, we first observe the usual behavior of the mobility edges with increasing V for ρ = 0. For small disorder, V < 0.5, the trajectory of the mobility edges moves outward with increasing V, but it starts moving inward for strong disorder, V ≥ 0.5. Thus, a re-entrance transition with increasing disorder occurs at V = 0.5; we explored this behavior of the mobility edges in Ref. [22]. The spatially correlated disorder destroys this re-entrant behavior of the mobility edges. As seen in Fig. 4, the trajectory of the mobility edges for ρ = 0.99 is almost the same as that for ρ = 0 in the presence of small disorder, V ≤ 0.5. However, in contrast to the uncorrelated case, the trajectory of the mobility edges keeps moving outward with increasing disorder strength for V > 0.5. This suggests that the spatial correlations drive the system towards delocalization.

Conclusions

We have applied the TMDCA formalism to investigate the effects of short-range spatial correlations on phonon localization in 3D. We have only considered pairwise correlations between adjacent odd-indexed and even-indexed sites, with the correlation strength varied from 0 to 1.
In the weak correlation limit, all the sites have completely random masses, while in the strong correlation limit, the masses of the (2l − 1)th and (2l)th sites are the same, although, as a function of l, the odd/even sequence of masses is still random. Our main conclusion is that correlated disorder with just pairwise correlations can markedly change the localization transition of phonons. This conclusion is validated by observing the variation of the TDOS and of the mobility edges with increasing correlation strength. We show that short-range correlated disorder impedes the localization of the vibrational modes, and eventually a correlation-induced localization-delocalization transition of phonons sets in within a 3D disordered sample. It would certainly be valuable to understand the observed delocalization transition in the presence of long-range correlated disorder. To that end, an extension of the current framework incorporating long-range correlations is in progress.
Enriched environmental exposure reduces the onset of action of the serotonin-norepinephrine reuptake inhibitor venlafaxine through its effect on parvalbumin interneuron plasticity in mice

Mood disorders are associated with hypothalamic-pituitary-adrenal axis overactivity resulting from a decreased inhibitory feedback exerted by the hippocampus on this brain structure. Growing evidence suggests that antidepressants regulate the hippocampal excitatory/inhibitory balance to restore an effective inhibition of this stress axis. While these pharmacological compounds produce beneficial clinical effects, they also have limitations, including their long delay of action. Interestingly, non-pharmacological strategies such as environmental enrichment improve therapeutic outcome in depressed patients as in animal models of depression. However, whether exposure to an enriched environment also reduces the delay of action of antidepressants remains unknown. We investigated this issue using the corticosterone-induced mouse model of depression, submitted to antidepressant treatment with venlafaxine, alone or in combination with enriched housing. We found that the anxio-depressive phenotype of male mice was improved after only two weeks of venlafaxine treatment when combined with enriched housing, which is six weeks earlier than in mice treated with venlafaxine but housed in standard conditions. Furthermore, venlafaxine combined with exposure to an enriched environment is associated with a reduction in the number of parvalbumin-positive neurons surrounded by perineuronal nets (PNN) in the mouse hippocampus. We then showed that the presence of PNN in depressed mice prevented their behavioral recovery, while pharmacological degradation of hippocampal PNN accelerated the antidepressant action of venlafaxine.
Altogether, our data support the idea that non-pharmacological strategies can shorten the onset of action of antidepressants and further identify PV interneurons as relevant actors of this effect.

INTRODUCTION

Major depressive disorder (MDD) is characterized by a persistent low mood associated with other core symptoms, including suicidal ideation, loss of mental and physical energy, feelings of guilt, anhedonia, anxiety, and cognitive deficits [1]. The therapeutic activity of first-line treatments for MDD, such as the selective serotonin reuptake inhibitors (SSRIs) or the serotonin-norepinephrine reuptake inhibitors (SNRIs), relies on their ability to increase monoaminergic tone in the brain of depressed patients [2]. However, despite their indisputable efficacy, SSRIs and SNRIs suffer from several limitations, including a delayed onset of action (4-8 weeks) [3], numerous adverse effects, and a modest efficacy in almost 30% of patients [4]. Although brain levels of serotonin (5-HT) and norepinephrine (NE) increase within hours after administration of an SSRI or SNRI, it is not fully understood why it takes weeks to obtain behavioral improvement. Current hypotheses to explain this delayed clinical response involve pre- and post-synaptic adaptive mechanisms. After an initial increase in intrasynaptic concentrations of neurotransmitters at the presynaptic level, SSRIs and SNRIs produce a progressive downregulation of somatodendritic inhibitory 5-HT1A autoreceptors, increasing the firing of 5-HT neurons and neurotransmitter release at nerve terminals, notably in the hippocampus [5]. Interestingly, the time required for antidepressant drugs to desensitize these 5-HT1A autoreceptors coincides with their onset of action. Long-term changes in gene expression, protein translation, and neuroplasticity are also detected at the post-synaptic level.
For instance, it is well known that increased hippocampal 5-HT and/or NE neurotransmission in response to chronic administration of SSRIs or SNRIs promotes BDNF expression [6], which positively regulates adult hippocampal neurogenesis [7] and parvalbumin (PV) GABAergic neuron maturation [8]. In addition, PV neurons also critically regulate adult hippocampal neurogenesis [9]. Hence, a better understanding of the mechanisms underlying the therapeutic action of antidepressant drugs would help to develop more efficient treatments, including faster-acting strategies. Recent evidence suggests that the presence of perineuronal nets (PNN), an extracellular matrix located around fast-spiking GABAergic PV neurons, plays a role in mood regulation. PNNs regulate neuronal plasticity and could thereby influence the behavioral response to antidepressant treatment [10,11]. Accordingly, it has been shown in rodent models of depression that chronic stress and corticosterone exposure increase hippocampal expression of PNN [12,13]. On the contrary, the SSRI fluoxetine or the SNRI venlafaxine, administered perinatally or in adulthood, decreases the formation of PNN around the soma and proximal dendrites of fast-spiking PV GABAergic interneurons in the mouse hippocampus [14][15][16] and cortex [17]. Furthermore, PNNs are dynamically regulated by experience, and exposure to an enriched environment results in a reduced presence of PNN around hippocampal PV cells [18,19]. In the present study, we aimed to investigate whether the onset of action of the antidepressant venlafaxine can be shortened by combining this pharmacological treatment with exposure to an enriched environment. Then, to start identifying the cellular mechanisms underlying the shortened response to antidepressant treatment, we focused on the extracellular matrix surrounding hippocampal PV interneurons.
MATERIALS AND METHODS

Animals

Ten-week-old male C57Bl/6Rj mice were housed five per cage under standard conditions with a 12 h light/dark cycle (light on at 8:00 a.m.) in a temperature-controlled room. Food and water were available ad libitum. All experimental procedures were conducted in accordance with the European directive 2010/63/EU and were approved by the French Ministry of Research and the local ethics committee (APAFIS # 2018100110245946#16913). The enriched environment (EE) was implemented in Marlau™ cages (Viewpoint, France), which provide standardized environmental enrichment procedures for rodents. Living in Marlau cages increases social and sensory stimulation, which promotes brain and cognitive reserves and supports functional rehabilitation after brain injury. The cage (length: 580 mm × width: 400 mm × height: 320 mm; weight: 13 kg) consists of a first floor with two compartments (one containing food, the other drinking water), and an upper floor where a maze is placed. To obtain food, the rodents must climb from the lower compartment to the upper floor, pass through the maze, and then descend to the other compartment through a sliding tunnel. Another pathway provides access to drinking water. This ensures that all animals are frequently and equally exposed to the different features of the enrichment. According to the protocol for these cages, cognitive stimulation and curiosity are maintained over time through regular changes (three times a week) in the maze configuration, of which 12 different versions are available [20].

Venlafaxine (VLX). During the last 8 weeks of corticosterone exposure, the SNRI venlafaxine (Sigma-Aldrich, France, Cat#99300-78-4) was dissolved in the corticosterone solution and delivered in drinking water at a fixed dose of 16 mg/kg.
This dose was chosen on the basis of our electrophysiological demonstration that it is the lowest dose allowing concomitant inactivation of SERT and NET, and on its efficacy in promoting antidepressant effects after 3 weeks in CORT mice [22].

Doxycycline (DOX). Inhibition of matrix metalloproteases was achieved by feeding mice a diet containing doxycycline. Doxycycline intake was about 5 mg/day per mouse (doxycycline hyclate, Ssniff, Germany).

Chondroitinase ABC (ChABC). Chondroitinase ABC (ChABC) was used to degrade PNN. Mice were anesthetized with isoflurane (3%) (Centravet, France) and placed in a stereotaxic apparatus. Lidocaine was applied subcutaneously before surgery; then each mouse received two bilateral injections of 100 nL of a solution containing ChABC (50 U/ml, Sigma-Aldrich, France) or vehicle (PBS 0.1 M) into area CA1 of the dorsal hippocampus. The following coordinates were used (in mm from bregma): (AP) −1.34, (L) ±1 and (V) −1.5, and (AP) −2.46, (L) ±2 and (V) −1.5. After recovery in a heated chamber, mice were returned to their home cages, where they recovered for two days before behavioral testing.

Behavioral tests

The same batteries of behavioral tests were performed for all the different experimental groups. These batteries encompassed six tests measuring emotional or cognitive aspects: the elevated plus maze (EPM), the tail suspension test (TST), the splash test, the novelty suppressed feeding test (NSF), the object location test (OL) and the three-chamber test. A description of these tests and of the parameters studied is given in the Supplementary material.

Data analysis and z-scores. A z-score was calculated to integrate the performance of each animal across the comprehensive battery of behavioral tests. Details and rationale for this approach to animal behavior analysis have been described by Guilloux et al. [23]. The z-score is used to compare overall behavioral performance between the experimental and control groups.
Briefly, for each behavioral measure, an individual z-score is calculated as z = (X − µ)/σ, where X represents the individual data for the observed parameter, and µ and σ represent the mean and standard deviation of the control group, respectively. The z-score thus indicates how many standard deviations (SD) an observation (X) is above or below the mean of the control group (µ). Two separate z-scores were established to account for the emotional and cognitive aspects of the behavioral measures. In these z-scores, each test is weighted equally. The data integrated in the emotional z-score include the following parameters: percentage of time spent in open arms for the EPM, latency to feed for the NSF, immobility time in the TST, and time of grooming in the splash test. The data integrated in the cognitive z-score include social novelty during the three-chamber test and preference for the displaced object in the object location test. For the experiments shown in Figs. 1 and 2, the control group consisted of the VEH animals. Thus, an increase of z-scores indicates a deterioration in emotional and cognitive performance compared to the VEH group. For the experiment shown in Fig. 3, the control group is composed of CORT animals, so reduced z-scores represent improved emotional and cognitive performance compared to the CORT group.

Stereotaxic injection

For PNN degradation by chondroitinase ABC (ChABC), mice were anesthetized with isoflurane (3%) and placed in a stereotaxic apparatus (Kopf). Lidocaine was applied subcutaneously before surgery; then each mouse received two bilateral injections of 100 nL of a solution containing ChABC (50 U/ml, Sigma-Aldrich) or vehicle (PBS 1×) into area CA1 of the dorsal hippocampus. The following coordinates were used (in mm from bregma): (AP) −1.34, (L) ±1 and (V) −1.5, and (AP) −2.46, (L) ±2 and (V) −1.5.
After recovery in a heated chamber, mice were returned to their home cages, where they recovered for two days before behavioral testing.

Brain preparation

At the end of the behavioral experiments, mice were deeply anesthetized with Dolethal and perfused with NaCl at room temperature for 1 min (cold NaCl, 4 °C, was used in the experiment involving doxycycline). Brains were then harvested and dissected into right and left hemispheres. For immunochemistry, hemi-brains were fixed in a 4% paraformaldehyde solution for 48 h at 4 °C and then stored in a 30% sucrose solution with 0.1% sodium azide. For protein quantification by ELISA, hippocampi were dissected from the remaining hemi-brains and quickly frozen in liquid nitrogen.

ELISA of MMP9

Homogenates from hippocampal tissue were prepared by lysis in radioimmunoprecipitation assay buffer (RIPA, Thermo Scientific, 89901) with a protease inhibitor cocktail (P8340-1ML, Sigma-Aldrich). Lysates were sonicated for 10 s, placed on ice for 20 min, and centrifuged for 15 min at 14,000 rpm at 4 °C. Lysate supernatants were saved for protein analyses. Pro-MMP9 protein concentration in hippocampal lysates was measured by ELISA, performed according to the manufacturer's protocol (Mouse Pro-MMP9, R&D Systems, catalogue number MMP900B). B. Coutens et al.

Fig. 1 Exposure to enriched environment shortens the time to antidepressant action of venlafaxine. A Experimental timeline. Animals were exposed to corticosterone (CORT mice, n = 63) or vehicle (VEH mice, n = 12) in the drinking water during the whole experiment. Starting at week 0, CORT mice received a treatment of venlafaxine or vehicle for 8 weeks either in their home cage (CORT-VLX mice, n = 12 and CORT-VEH mice, n = 12) or in an enriched environment (CORT-EE mice, n = 16 and CORT-EE-VLX mice, n = 23). Behavior was evaluated after 2 (B, C) and 8 (D, E) weeks of treatment. B, D Emotional and (C, E) cognitive tests were analyzed using two-way ANOVAs with housing condition and treatment as main factors.
Emotional and cognitive z-scores were established based on VEH mice. After 2 weeks of treatment, a significant effect of housing condition (F(1;59) = 27.0, p < 0.001) and of the housing condition × treatment interaction (F(1;59) = 4.16, p < 0.05) were found for the emotional z-score. A significant effect of housing condition was detected for the cognitive z-score (F(1;59) = 32.0, p < 0.001). After 8 weeks of treatment, ANOVA revealed a significant effect of housing condition, treatment and housing condition × treatment interaction for the emotional (F(1;44) = 21.0, p < 0.001, F(1;44) = 14.9, p < 0.001, and F(1;44) = 5.57, p < 0.05, respectively) and the cognitive (F(1;44) = 8.39, p < 0.01, F(1;44) = 10.4, p < 0.01, and F(1;44) = 19.5, p < 0.001, respectively) z-scores. The gray shaded area shows mean ± SEM values for VEH mice. In all other groups, data represent mean ± SEM and dots illustrate individual values. Post-hoc analysis when appropriate: **p < 0.01, ***p < 0.001 indicate significant differences compared to the CORT-VEH group. $p < 0.05, $$$p < 0.001 indicate significant differences compared to the CORT-VLX group.

Immunochemistry of PV and PNN

Floating coronal sections (30 µm thick) were prepared with a freezing-stage microtome (Leica SM2010R) and stored in cryoprotectant at −20 °C until use. For each animal, series of 1-in-6 sections spanning the hippocampus were washed in PBS + 0.25% Triton X-100 (PBST). Sections were then placed in 3% H2O2 and 10% methanol in PBST and washed again in PBST before incubation in blocking solution for one hour (PBST containing 10% normal donkey serum). Finally, sections were incubated in the blocking solution containing biotinylated Wisteria floribunda agglutinin (WFA; Sigma L1516; 1:1000) and goat anti-PV (Swant PVG 213; 1:2500) antibodies overnight at room temperature.
The next day, sections were rinsed in PBST and incubated for 90 min at room temperature with donkey anti-goat Alexa 488 (Molecular Probes, A11055; 1:250) and streptavidin-TRITC (Vector Labs, SA-5549; 1:500). Sections were then mounted onto slides, coverslipped using Mowiol containing Hoechst (1/10,000), and stored at 4 °C. Examination of positively labeled cells was confined to the dorsal hippocampal CA1. Quantifications of PV-immunoreactive (PV+) cells and PV-PNN immunoreactive (PV+/PNN+) cells were conducted using a DM6000B fluorescence microscope (Leica, Germany) equipped with a motorized X-Y sensitive stage and a video camera connected to a computerized image analysis system (ExploraNova, France).

Quantification of PV+ cells and their associated PNN

The counting of labeled cells was conducted using Mercator v.2 software (ExploraNova), which was also used to measure the corresponding hippocampal surface. Densities of immuno-positive cells were calculated by dividing the number of positive cells by the sectional volume of the region of interest (ROI). The densities of PV+ cells and of PV+ cells enwrapped by PNN (PV+/PNN+) were calculated for each section to obtain, for each ROI, the percentage of PV+ cells co-expressing PNN around them.

Randomization and blinding

Given that treatments were given in the drinking water (e.g., CORT and/or venlafaxine) in standard or enriched environments, randomization was not possible in the majority of the experiments. However, randomization was applied for the pharmacological experiments involving the intra-hippocampal injection of ChABC or its vehicle. In the latter experiments, animals receiving the different treatments were mixed in the same cage. With respect to blinding, the experimenters remained blind to the experimental conditions until the end of data analysis.

Statistical analysis

All data were expressed as mean ± standard error of the mean (SEM) unless stated otherwise.
Analyses were performed using GraphPad Prism 8 (GraphPad Software, San Diego, CA, United States) and compared by t-test, one-way or two-way analysis of variance (ANOVA)

Fig. 2 The antidepressant action of VLX requires remodeling of the extracellular matrix surrounding hippocampal interneurons. A Experimental timeline. Throughout the experiment, animals were exposed to corticosterone (CORT mice, n = 19) or vehicle (VEH mice, n = 6) in the drinking water. From week 0, CORT mice received venlafaxine or vehicle for 2 weeks either in their home cage (CORT-VLX mice, n = 4 and CORT-VEH mice, n = 6) or in an enriched environment (CORT-EE mice, n = 5 and CORT-EE-VLX mice, n = 4). Animals were sacrificed after 2 weeks of treatment and brains were processed for histology. B Proportion of PV+/PNN+ cells among PV+ cells in CA1. One-way ANOVA reveals a significant difference (F(4;20) = 6.378, p < 0.01). C Scheme representing the hypothetical modulation of PNN (in red) by venlafaxine (VLX) and enriched environment (EE) in the hippocampus. In the depressed state (CORT), we observed that the number of PV+ cells surrounded by PNN increases compared to the basal state (VEH), while the combination of VLX and EE abolishes this effect, allowing neuronal plasticity and behavioral recovery. In contrast, doxycycline (DOX) inhibits the matrix metalloproteases known to degrade PNN. This enables the stabilization of PNN networks around PV+ cells and prevents neuronal plasticity. D Experimental protocol used to study the impact of the pharmacological blockade of PNN remodeling. Starting at week 0, CORT mice (n = 29) received venlafaxine for 2 weeks in EE. Mice were either fed a normal diet (CORT-VLX-EE-DOX(−), n = 11) or a diet containing DOX (CORT-VLX-EE-DOX(+), n = 10). Emotional and cognitive z-scores were established from mice exposed to CORT alone (CORT-VEH mice, n = 8).
E Photomicrographs depicting PV+/PNN+ cells in the CA1 region of CORT-depressed mice housed in EE and treated with VLX (VLX-EE) or with the combination of VLX and DOX (VLX-EE-DOX). F Proportion of PV+/PNN+ cells among PV+ cells in CA1. A significant increase is observed in DOX-fed mice (unpaired Student t-test). G Emotional and (H) cognitive z-scores showing a significant effect of DOX (unpaired Student t-test). The gray shaded area shows mean ± SEM values for CORT mice. Data represent mean ± SEM and dots illustrate individual values. Post-hoc and t-test analysis when appropriate: *p < 0.05 indicates significant differences compared to the CORT-VEH group. $$p < 0.01 indicates significant differences compared to the VEH group. ###p < 0.001 indicates significant differences between groups. Scale bars = 50 µm.

followed by Tukey's post hoc tests when appropriate; the statistical differences are indicated in the figure legends. The statistical power and the required sample size were based on our previous analyses using the same tests. No animals were excluded from the study, and the number of animals per group can be found in the statistics tables (see supplemental data) as well as on the graphs, where dots represent the individual values. The null hypothesis was rejected when P < 0.05.

RESULTS

Chronic CORT exposure elicited depressive-like symptoms, which included anxiety, resignation, and impairments in self-care, spatial memory and social recognition compared to non-depressed control mice (Supplemental Fig. S1 and Supplemental Table S1). Altogether these findings are in line with previous reports and support that chronic exposure to CORT elicits a robust depressive-like phenotype in mice [21,24].

Long-term treatment with venlafaxine is necessary to induce antidepressant effects

We next sought to determine the impact of the duration of treatment with the antidepressant venlafaxine (VLX) in the CORT mouse model of depression.
To do so, we evaluated the behavioral effects of a short (2 weeks) or a long (8 weeks) period of antidepressant treatment (Fig. S1). After 2 weeks of venlafaxine, the performances of CORT-VLX mice did not differ from those of CORT mice in any of the emotional or cognitive tests (Fig. S1A, B, C, E, G, H, I and K). Specifically, venlafaxine had no effect on anxiety, evaluated by the time spent in the anxiogenic open arms of the elevated plus-maze (Fig. S1A). Venlafaxine also did not reverse the action of CORT in the novelty suppressed feeding test assessing hyponeophagia, another symptom of anxiety (Fig. S1C). Furthermore, venlafaxine did not show any effect on self-care, assessed by the time of grooming in the splash test (Fig. S1E). Resignation was evaluated in the tail suspension test, and 2 weeks of venlafaxine failed to reverse the depressive traits of CORT mice, as reflected by their high immobility (Fig. S1G). At the cognitive level, the deficits of the CORT mice were also not improved by 2 weeks of venlafaxine, as shown for spatial memory in the object location test (Fig. S1I) and for social recognition in the three-chamber test (Fig. S1K). Overall, a 2-week venlafaxine treatment does not abolish the depressive-like symptoms in this mouse model of depression, indicating that a longer treatment with venlafaxine is necessary to elicit antidepressant-like effects [25]. To test this idea, the behavioral performances of CORT-VLX mice were assessed after 8 weeks of venlafaxine. We found that this long-term treatment abolished the anxiety (Fig. S1B), hyponeophagia (Fig. S1D), resignation and self-care deficits (Fig. S1F, H) of depressed mice. Moreover, 8 weeks of venlafaxine also restored the spatial (Fig. S1J) and social (Fig. S1L) memory of depressed mice to control levels. Altogether, our data indicate that the antidepressant-like effects of venlafaxine are observed after 8, but not 2, weeks of treatment.
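The integrated z-scores used in these behavioral comparisons follow the scheme described in Methods: z = (X − µ)/σ per measure relative to the control group, with each test weighted equally and higher z indicating worse performance. The sketch below is a purely hypothetical illustration; the test names, values, and the per-measure sign conventions are our assumptions, not the study's data:

```python
import numpy as np

def integrated_z(data, control, signs):
    """Per-animal integrated z-score: z = (X - mu) / sigma for each test,
    sign-oriented so that a higher z means a stronger deficit, then
    averaged over tests with equal weight."""
    per_test = []
    for test, values in data.items():
        mu = np.mean(control[test])
        sigma = np.std(control[test])
        per_test.append(signs[test] * (np.asarray(values, float) - mu) / sigma)
    return np.mean(per_test, axis=0)

# Hypothetical measures: less time in open arms and more immobility both
# indicate a deficit, hence the opposite signs.
signs = {"EPM_open_arms_pct": -1, "TST_immobility_s": +1}
control = {"EPM_open_arms_pct": [30, 28, 32, 31],
           "TST_immobility_s": [100, 110, 95, 105]}
cort = {"EPM_open_arms_pct": [15, 12, 18, 16],
        "TST_immobility_s": [180, 170, 190, 175]}

print(integrated_z(cort, control, signs))  # positive values: worse than controls
```

By construction, a group scored against itself averages to zero, so deviations from zero directly measure how far a treatment group sits from the chosen reference group.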
Exposure to enriched environment greatly shortens the time to antidepressant action of venlafaxine

In an attempt to solve this issue, we investigated whether combining venlafaxine with non-pharmacological approaches would shorten its onset of action in the CORT mouse model of depression. Indeed, the cognitive and physical stimulation provided by housing mice in an enriched environment (EE) has long been shown to exert a positive impact on behavior and brain plasticity [26]. We thus evaluated the impact of 2 and 8 weeks of venlafaxine and EE, alone or in combination, on the depressive symptoms of CORT mice (Fig. 1A). Using the same battery of behavioral tasks as in Fig. S1, emotional and cognitive z-scores integrating all related parameters into a single value (see Methods) were calculated for each mouse and standardized to the values of non-depressed control mice. Housing condition and the housing condition × treatment interaction revealed statistical differences for the emotional z-score (Fig. 1B, Table S2). After 2 weeks of treatment, post-hoc analysis revealed that the combination of VLX with EE significantly decreased the emotional z-score of CORT mice, while each strategy applied separately failed to do so. At the cognitive level, only the housing condition factor showed a significant effect. While 2 weeks of venlafaxine alone failed to improve the cognitive z-score of depressed mice, EE reduced the cognitive deficits observed in CORT-depressed mice treated or not with venlafaxine (Fig. 1C). After 8 weeks of venlafaxine, a significant effect of both factors (treatment and housing condition) and of their interaction was unveiled for the behavioral z-scores (Fig. 1D, E). The emotional and cognitive impairments of depressed mice were overcome by VLX, by EE alone, or by their combination. It is noteworthy that all three strategies (VLX, EE, their combination) were equally effective in decreasing the emotional and cognitive z-scores, with no statistical difference found between them.
Together, our data demonstrate that exposure to EE shortens the response time to venlafaxine from 8 to 2 weeks, particularly with regard to the emotional alterations associated with depression. The neurobiological substrate underlying these beneficial effects remains to be identified.

The antidepressant action requires the remodeling of the extracellular matrix surrounding hippocampal interneurons

Growing evidence suggests that the extracellular matrix PNN enwrapping PV cells participates in the antidepressant response by reinstating hippocampal plasticity (for review, see [27]). We thus sought to determine whether manipulating this form of neuronal plasticity could influence the delay of action of venlafaxine in the CORT mouse model of depression. First, we examined whether PNN presence was affected by 2 weeks of venlafaxine treatment, alone or combined with EE, in the CORT mouse model of depression (Fig. 2A). We found that CORT exposure enhanced the percentage of PV-labeled (PV+) cells enveloped by PNN (PV+/PNN+ cells) in the dorsal part of the CA1 hippocampal region (Fig. 2B, Table S3), while the density of PV+ interneurons did not vary in CA1 (Fig. S2). Of all treatments, only the 2-week combination of VLX and EE significantly reduced the proportion of PV+/PNN+ cells in CA1 compared to depressed mice (Fig. 2B). Although venlafaxine or EE alone tended to lower the proportion of PV+ interneurons enveloped by PNN, these effects did not reach significance (Fig. 2B). Work by Kwok et al. [28] suggests that the PNN around neurons prevents the formation of new synapses on PV interneurons, and thereby participates in the regulation of hippocampal plasticity. In this context, our data suggest that the combination of VLX and EE, by allowing the remodeling of the PV-dependent network, could contribute to the behavioral recovery of depressed mice.
To investigate this possibility, we asked whether the plasticity mediated by the presence of PNN around PV+ neurons is required for the rapid antidepressant action of venlafaxine combined with EE in the CORT mouse model. We hypothesized that preventing the degradation of PNN around PV+ interneurons in depressed mice treated with VLX and exposed to EE would hinder hippocampal plasticity and block the antidepressant action of this combined treatment. To prevent PNN degradation, concomitantly with venlafaxine administration, animals housed in EE were fed a diet containing doxycycline, an inhibitor of matrix metalloprotease 9 (MMP9), the PNN degradation enzyme [29,30] (Fig. 2C, D). In line with this idea, ELISA quantification showed that doxycycline reduced the hippocampal expression of MMP9, although this effect did not reach significance (Fig. S3). Immunohistological analysis of the CA1 region confirmed that the proportion of PV+ cells harboring PNN in mice receiving the VLX + EE combination was similar to the proportion observed in the previous experiment (around 60%; Fig. 2A, B vs. Fig. 2E, F). Remarkably, and confirming our hypothesis, we observed that the action of VLX + EE on PV+/PNN+ cell numbers was blocked when PNN degradation was prevented by doxycycline (Fig. 2E, F). Altogether, these findings suggest that in CORT-depressed mice, the VLX + EE treatment reduces PNN expression around PV+ cells, allowing experience-dependent remodeling within the hippocampus. We then evaluated the behavioral effects of the VLX + EE treatment in CORT-depressed mice fed a doxycycline-containing diet. In these mice, the emotional and cognitive z-scores remained robustly impaired compared to animals fed a standard diet (Fig. 2G, H). These findings demonstrate that inhibiting PNN degradation with doxycycline prevents behavioral recovery in depressed mice.
Collectively, these data show that restricting synaptic plasticity onto hippocampal PV+ interneurons blocks the rapid antidepressant action of the VLX + EE combination. They also further suggest that the rapid antidepressant action of VLX + EE tightly depends on the regulation of the extracellular matrix around hippocampal PV neurons and/or of the subsequent PV cell activity.

Pharmacological degradation of hippocampal PNN replicates the rapid antidepressant action of VLX + EE treatment

To assess to what extent hippocampal PNN remodeling is crucial to the rapid action of VLX + EE, we performed intra-hippocampal administrations of chondroitinase ABC (ChABC), a bacterial enzyme that degrades PNN [31]. After only 2 weeks of VLX treatment and just before behavioral testing, ChABC was infused into the CA1 region of CORT-depressed mice (Fig. 3A, B). As expected, ChABC massively degraded the PNN at the injection sites, resulting in a significant reduction in the proportion of PV+ neurons embedded in PNN, compared to depressed mice receiving vehicle (Fig. 3C, Table S4). Of note, the combination of ChABC and venlafaxine treatment did not produce a more pronounced effect on the proportion of PV+/PNN+ neurons in CA1 than ChABC alone (Fig. 3C). At the behavioral level, emotional and cognitive z-scores were calculated for each mouse and normalized to the values of CORT-depressed control mice that received intra-hippocampal injection of vehicle. While the emotional z-score showed a significant effect of the two main factors (pretreatment and treatment), no significant effect of their interaction was unveiled. With respect to the cognitive z-score, only a significant effect of treatment was detected. Based on these analyses, we found that ChABC induced an early antidepressant-like effect in the absence of venlafaxine treatment, as evidenced by significantly lower emotional and cognitive z-scores of CORT-ChABC mice compared to CORT-VEH animals (Fig. 3D, E).
The same statistical observations were found among venlafaxine-treated mice. Remarkably, among ChABC-injected mice, venlafaxine improved the emotional state of depressed mice (Fig. 3D). These results strongly suggest that the restoration of PV cell remodeling, mediated by the absence of PNN, is one of the neurobiological mechanisms by which EE shortens the time to antidepressant action of venlafaxine. Collectively our results demonstrate that venlafaxine antidepressant action is faster when combined with exposure to a stimulating environment. Furthermore, they reveal that regulation of the hippocampal extracellular matrix could be one of the molecular actors involved in the delayed onset of action of this antidepressant drug.

DISCUSSION

Our study shows that the antidepressant-like effects of the SNRI venlafaxine in depressed mice are achieved earlier when this compound is combined with the animals' exposure to an enriched environment (EE) providing social, physical, and cognitive stimulations. We propose that the remodeling of the extracellular matrix PNN located around hippocampal parvalbumin interneurons may be a pivotal cellular mechanism underlying the rapid antidepressant effects of venlafaxine combined with EE. Indeed, we have gathered evidence that maintaining hippocampal PNN integrity hinders the antidepressant-like effects of the venlafaxine and EE combination; in contrast, disrupting hippocampal PNN allows rapid beneficial effects on emotional and cognitive hallmarks of the depressive state. The chronic mouse model of CORT exposure was used to induce a robust and persistent depressive-like phenotype associated with cognitive deficits. One limitation of this model is that C57BL6J female mice are insensitive to long-term administration of corticosterone [32], which led us to test the impact of different antidepressant strategies on the time course of behavioral recovery in males only.
Using emotional and cognitive z-scores capturing the heterogeneity of depressive symptoms [23], we show that venlafaxine, at the lowest dose (16 mg/kg) that enhances serotonergic and noradrenergic neurotransmission in mice [22], abolishes CORT-induced behavioral deficits after 8 weeks of treatment. In these animals, however, venlafaxine has no effect after only 2 weeks. The dose used herein is an important factor in this lack of effect, as it has been reported that 30 mg/kg of venlafaxine was sufficient to induce antidepressant-like effects after 2 weeks of treatment [13]. However, our results are consistent with previous data showing that 3-5 weeks are required after initiation of treatment with the SSRI fluoxetine or the SNRI venlafaxine to elicit improvement in behavioral parameters [22,33,34]. Mechanistically, this delay of action of antidepressant drugs coincides with the desensitization of the inhibitory somatodendritic 5-HT1A autoreceptors and the enhancement of hippocampal neurotransmission that occurs unambiguously after more than 2 weeks of venlafaxine administration [35]. Although non-pharmacological interventions such as exposure to EE were shown to exert antidepressant-like effects [36], their ability to improve the behavioral response to antidepressant drugs has been much less studied. A study in rats reported that the SSRI sertraline exerts anxiolytic-like effects when administered during EE but not in standard or isolated housing conditions [37]. Similar findings were reported in different mouse or rat models of depression using the combination of the SSRI fluoxetine and EE [38][39][40][41]. Consistent with these findings, our results show that 2 weeks of venlafaxine administration combined with EE exposure produces beneficial effects on the emotional z-scores of depressed mice, whereas venlafaxine or EE applied separately failed to do so.
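The integrated z-scoring referenced above (ref. [23]) standardizes each behavioral readout against the control group and averages the sign-aligned values per animal. A minimal sketch of that idea follows; all test names, sign conventions and numbers here are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

def integrated_zscore(test_scores, control_scores, higher_is_impaired):
    # test_scores, control_scores: dicts mapping test name -> 1D array of raw
    # scores (one per animal). higher_is_impaired flags align the sign of each
    # test so that a larger z always means a stronger deficit.
    z_per_test = []
    for name, scores in test_scores.items():
        mu = control_scores[name].mean()
        sd = control_scores[name].std(ddof=1)
        z = (scores - mu) / sd
        z_per_test.append(z if higher_is_impaired[name] else -z)
    # Integrated z-score: per-animal average of the aligned single-test z-scores.
    return np.mean(z_per_test, axis=0)

# Hypothetical example: two readouts for 12 control and 12 CORT-exposed mice.
rng = np.random.default_rng(0)
ctrl = {"immobility_s": rng.normal(60, 10, 12), "latency_s": rng.normal(30, 5, 12)}
cort = {"immobility_s": rng.normal(90, 10, 12), "latency_s": rng.normal(50, 5, 12)}
flags = {"immobility_s": True, "latency_s": True}
z = integrated_zscore(cort, ctrl, flags)
assert z.mean() > 1.0  # CORT group shifted toward impairment in this toy data
```

One animal's integrated score is thus directly comparable across treatment groups, which is what makes the group-level "emotional" and "cognitive" comparisons above possible.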
Also in line with a previous report, the effect of EE alone on the cognitive z-score is not enhanced when combined with venlafaxine, likely due to its own robust impact on this parameter [42]. PNN are a complex of extracellular matrix molecules that mostly surround the soma and dendrites of fast-spiking GABAergic neurons in various brain regions. PNN are functionally involved in the stabilization of excitatory synapses onto PV cells [43] and they have been reported to play a crucial role in hippocampal plasticity and thus, in memory processes [44]. Emerging evidence suggests that PNN also control stress response and emotional state. Specifically, increased PNN formation is observed in the hippocampus of rodents chronically exposed to CORT [13] or to social defeat [12]. Such an increase may contribute to reducing hippocampal plasticity and to impairing the emotional and cognitive abilities of stressed mice [45][46][47]. In agreement with this hypothesis, it has been shown that a 2-week administration of the SNRI venlafaxine reduces PNN immunoreactivity in the mouse hippocampus [15]. Other studies have demonstrated that chronic fluoxetine treatment also reduces the density of PNNs and PV cells in the hippocampus [16,48]. In the present study, while 2 weeks of venlafaxine or 2 weeks of EE do not modify the proportion of PV+ interneurons enveloped by PNN in CA1, the combination of venlafaxine and EE decreases the population of PV+/PNN+ cells in CORT-depressed mice. From these data, it is tempting to speculate that PNN attenuation in CA1 may contribute to the rapid antidepressant action of venlafaxine combined with EE. It is not yet clear by what mechanisms PNN changes may translate into beneficial effects on depression symptoms.
Although research on hippocampal function in depressed patients (or relevant animal models) is very scarce, studies have reported increased hippocampal activity in major depression, whereas antidepressants would counteract this effect by attenuating hippocampal activity [49,50]. Since PNNs facilitate the firing of PV-expressing interneurons and probably the extracellular accumulation of GABA [51], their inactivation in response to the combination of venlafaxine and EE should, on the contrary, limit the tonic inhibition of hippocampal activity [12]. It is thus difficult to reconcile our data with this theory. Nevertheless, CORT can be considered to cause non-experience-dependent, or aberrant, PNN formation, resulting in strong inhibitory control of PV cells on the hippocampal circuit. In this context, degradation of PNN would overcome these constraints, and provide a time window during which appropriate experience-dependent plasticity on the PV cell network is possible. To determine the mechanism by which changes in PNN can translate into beneficial effects on depression symptoms, we tested whether preventing PNN degradation through the inhibition of metalloproteases (MMPs) could interfere with the rapid beneficial effects of venlafaxine associated with EE. Therefore, we used the tetracycline antibiotic doxycycline, a pharmacological agent crossing the blood-brain barrier [52], to inhibit MMP activity [53]. We found that doxycycline prevents the ability of venlafaxine combined with EE to abolish emotional and cognitive symptoms of depression after 2 weeks of treatment. Our results concur with observations that genetic or pharmacological inactivation of MMP9 decreases basal anxiety [54], despair and sociability in stressed animals [55], but also dampens the behavioral response to venlafaxine [15]. They are also consistent with data underlining the involvement of MMP polymorphisms in the development of depression [56,57].
However, limitations related to the utilization of doxycycline in the diet have to be considered. Indeed, we cannot exclude that other molecular or cellular targets of the antibiotic doxycycline are involved in these behavioral effects. Since the gut microbiota is known to influence emotional and cognitive processes [58], it is possible that slowing down the growth of specific bacteria played a negative role in emotionality. Other mechanisms may be involved, including the ability of doxycycline to reduce microglia activation [59,60]. However, it seems unlikely that our data result from this property, because the inactivation of microglia and neuroinflammation positively influences emotionality [61] and cognitive performances [62]. To better understand the link between MMP, PNNs and treatment response, it would now be interesting to examine the effects of a selective and potent inhibitor of MMP, notably an MMP9 inhibitor directly injected into the hippocampus, on PNN levels in our different experimental groups. Given the limitations of the use of doxycycline, we have implemented another approach. Indeed, since our results suggest that the delayed antidepressant response might result from the formation of PNN in the hippocampus, we speculated that PNN digestion by chondroitinase (ChABC) might, instead, mimic the behavioral effects of venlafaxine combined with EE. We therefore sought to determine whether injecting ChABC into CA1 would elicit behavioral recovery. After showing that intra-hippocampal ChABC effectively reduced the presence of PNN locally, which likely results in a reduction of the inhibitory GABAergic tone [18,51], leaving the hippocampus more excitable, we evaluated its behavioral outcomes. Remarkably, the sole administration of ChABC induced rapid neurobehavioral effects, as evidenced by its ability to improve the emotional and cognitive profiles of CORT-depressed mice.
Interestingly, ChABC infusion into CA1 also exerted a beneficial effect on emotional state additive to that of venlafaxine. This echoes recent data showing that the fast-acting antidepressant ketamine also reduces the density of PNN [63]. This reinforces the idea that the modulation of the hippocampal excitatory/inhibitory (E/I) balance by this extracellular matrix is crucial to reduce the delay of action of antidepressant drugs. Regarding the cognitive dimension, we found that the combination of ChABC and venlafaxine had no greater effect than ChABC alone. Although the reasons for the different effects of ChABC on the emotional and cognitive z-scores remain unknown, it seems unlikely that the injection of ChABC in CA1 affected PNNs in CA2, since the social behavior of CORT-exposed mice was not altered [64] (Supplemental Fig. S4). Interestingly, a reduction of chondroitin sulfate proteoglycan, a major component of PNN, has been reported early after stress (i.e., 72 h), whereas an increase can be observed after 8 weeks. The latter finding underscores a pivotal role of the integrity of PV interneurons and their surrounding PNNs in mediating experience-dependent plasticity in the adult hippocampus [65][66][67][68]. In the search for the cellular and molecular mechanisms by which EE shortens the onset of action of antidepressants, our study did not explore hippocampal adult neurogenesis, although there is mounting evidence that manipulating this process impacts hippocampal neuronal activity. In particular, stimulation of neurogenesis in the dentate gyrus, a process triggered by pharmacological [7,21] and non-pharmacological antidepressant strategies [42,69], has been shown to reduce CA1 neuronal activity [70]. In the present study we demonstrate that the combination of EE and venlafaxine reduces the PV+/PNN+ cell population in the CA1 region of CORT mice, while the number of PV+ cells in CA1 remains unchanged (Supplemental Fig. S2).
From these data, we expect a decrease in inhibitory tone and thus an increase in neuronal activity in CA1, which is compatible with the results reported herein. This reinforces the interest in further exploring the relationship between PV+/PNN+ cell activity and adult hippocampal neurogenesis as an integrated mechanism that could underpin the onset of action of antidepressant drugs. More specifically, since the dorsal and ventral hippocampus are functionally distinct structures (learning being associated with the dorsal, and emotions with the ventral, hippocampus) [71], it would be interesting in the future to study the ventral hippocampus. It is expected that the manipulation of PNN in this region will result in more pronounced effects on emotion and less robust effects on cognition.
Iterated ${\phi}^4$ Kinks

A first order equation for a static ${\phi}^4$ kink in the presence of an impurity is extended into an iterative scheme. At the first iteration, the solution is the standard kink, but at the second iteration the kink impurity generates a kink-antikink solution or a bump solution, depending on a constant of integration. The third iterate can be a kink-antikink-kink solution or a single kink modified by a variant of the kink's shape mode. All equations are first order ODEs, so the nth iterate has n moduli, and it is proposed that the moduli space could be used to model the dynamics of n kinks and antikinks. Curiously, fixed points of the iteration are ${\phi}^6$ kinks.

φ^4 kinks and impurities

The φ^4 scalar field theory in one spatial dimension has Lagrangian

L = ∫ ( (1/2)(∂φ/∂t)^2 − (1/2)(∂φ/∂x)^2 − (1/2)(1 − φ^2)^2 ) dx    (1.1)

and dynamical field equation

∂^2φ/∂t^2 − ∂^2φ/∂x^2 + 2φ(φ^2 − 1) = 0 .    (1.2)

The vacuum solutions are φ = ±1 and kinks and antikinks are solutions interpolating between these vacua [1,2]. The kink satisfies boundary conditions φ → −1 as x → −∞ and φ → 1 as x → ∞, and for the antikink the boundary conditions are reversed. Small and moderate amplitude field oscillations around either vacuum are interpreted as radiation, and tend to disperse. As is well known, a static kink obeys the first order differential equation

dφ/dx = 1 − φ^2 ,    (1.3)

with solution tanh(x − a), whose centre a is the kink's modulus. The reversed equation

dφ/dx = −(1 − φ^2)    (1.4)

has the antikink solution − tanh(x − b), and its centre b is the antikink's modulus. In kink-antikink dynamics one studies the time-evolution of a field that is initially close to a kink centred at a joined to an antikink centred at b, where b ≫ a. For this configuration, φ → −1 as x → ±∞, but between a and b, φ is initially close to 1. Even at rest, the kink and antikink attract, but the force is exponentially small in b − a. If the kink and antikink are given initial velocities toward each other, they approach more rapidly. The evolution is complicated during the collision. The kink and antikink can completely annihilate into radiation (a rather slow process), or they can quasielastically scatter, emitting less radiation.
What happens depends sensitively on the initial velocities [3,4,5,6]. Ideally, one would like to model kink-antikink dynamics in terms of a finite number of degrees of freedom, coupled to radiation. To do this it is helpful to have a moduli space of field configurations with at least two moduli: one representing the kink-antikink separation, and the other the centre of mass. Further to these moduli one can consider oscillations of the shapes of the kink and antikink. But there is no obvious moduli space available within the original φ^4 theory. There are no static fields representing kink and antikink together, because of the attractive force between them. One idea is to use the gradient flow curve connecting a well separated kink-antikink to the vacuum φ = −1. This consists of the instantaneous field configurations obtained by replacing ∂^2φ/∂t^2 by ∂φ/∂t in the dynamical field equation, and evolving from a well separated kink-antikink configuration to the vacuum [7]. These field configurations form a moduli space which is fairly closely followed in the true, second order dynamics, but the vacuum configuration is an endpoint of this moduli space, whereas the true dynamics conserves energy and smoothly passes through the vacuum, or close by it, into field configurations where φ is everywhere less than −1. The field then continues to evolve, oscillating and emitting some radiation in the process. Gradient flow therefore fails to produce a satisfactory moduli space in this case. A promising resolution of this difficulty has recently been identified [8], based on consideration of the modified static, first order equation

dφ/dx = −χ(x)(1 − φ^2) .    (1.5)

χ is referred to as an impurity field, and eq.(1.5) as the kink equation in the presence of an impurity [9,10]. We need to analyse eq.(1.5) in some detail. Throughout, we assume that χ → −1 as x → −∞, with the approach sufficiently rapid that the integral ∫_{−∞}^{x} (χ(x′) + 1) dx′ converges.
We also assume that χ → ±1 as x → ∞, and that if χ → −1 then the integral also converges. Only impurities satisfying these conditions occur in the context of the iterated kinks that will be introduced in section 2. Linearising eq.(1.5), we see that φ = −1 is an attractor as x → −∞, and φ = 1 a repeller. We can therefore impose the boundary condition φ → −1 as x → −∞, which excludes the vacuum solution φ(x) = 1. As x → ∞, φ = 1 is an attractor and φ = −1 a repeller in the case that χ → −1, so for generic solutions, φ → 1 as x → ∞. Similarly, φ = 1 is a repeller and φ = −1 an attractor in the case that χ → 1, so φ → −1 as x → ∞. Solutions cannot cross φ = ±1 so, apart from the vacuum solution φ(x) = −1, either φ is trapped between −1 and 1, or φ is everywhere less than −1. A general impurity field χ that oscillates between −1 and 1 can make eq.(1.5) resemble the original equations (1.3) and (1.4) in different regions, thus allowing for solutions having several kinks and antikinks. We stress that these are static solutions of a first order equation. We can make some more precise statements about the solutions trapped between −1 and 1 by exploiting Rolle's theorem. Let us define kink and antikink locations to be precisely the points x where φ(x) = 0, with dφ/dx positive for a kink, and negative for an antikink. The non-generic situation where zeros of φ coalesce and dφ/dx = 0 is where a kink-antikink pair is about to be produced or annihilated. Let us focus on the generic case where χ and φ have simple zeros. By Rolle's theorem, between any pair of distinct zeros of φ there is a point where dφ/dx is zero. Suppose, then, that the impurity χ has N zeros. These zeros split the real line into N + 1 intervals (two of which extend to ±∞), and there can be at most one kink or antikink in each of these intervals. φ therefore has at most N + 1 kinks and antikinks.
There can be fewer, by a multiple of 2, and the number varies as the constant of integration in the solution of eq.(1.5) varies. For our choice of boundary condition they must alternate as kink-antikink-kink-... . A simple impurity is the φ^4 kink itself, χ(x) = tanh x, with its zero at the origin. The precise solution of eq.(1.5) for this impurity is given below, but let us describe a subset of the solutions more heuristically here. As tanh x is close to −1 in the region x ≪ 0 and close to 1 in the region x ≫ 0, eq.(1.5) there resembles the kink equation (1.3) and the antikink equation (1.4) respectively, so for any A ≫ 0 there is an approximate solution with a kink near −A and an antikink near A. We see that the impurity χ(x) = tanh x acts as a mirror. The kink part of the solution, around −A, is reflected in the impurity as an antikink around A. If the impurity is χ(x) = tanh(x − a), then there is a solution with a kink at a − A and antikink at a + A. There are now two moduli: one is the centre of the impurity, and the other the distance of the kink and antikink from the impurity. We propose that the moduli space of these solutions could be used to model the kink-antikink fields that occur in the original φ^4 theory dynamics. The metric on the moduli space has been calculated [8], but there is also a potential energy, that has not yet been worked out. Both are needed to define a dynamics on moduli space. The exact solutions of eq.(1.5), for χ(x) = tanh x, are

φ(x) = (c − cosh^2 x)/(c + cosh^2 x) .    (1.8)

The allowed range of the modulus, the constant of integration c, is c > −1. Outside this range, φ has singularities. All solutions satisfy the boundary conditions φ → −1 as x → ±∞. The kink-antikink configurations, described earlier heuristically, occur for c considerably greater than 1. Then the zeros of φ are approximately where e^{2x} = 4c and e^{−2x} = 4c, that is, at x = ±(1/2) log(4c). These are the locations ±A of the antikink and kink. We can check the field profile near x = −(1/2) log(4c). Just keeping the dominant exponential term in cosh x, we find the kink φ(x) ≈ tanh(x + (1/2) log(4c)). When c = 1 the kink and antikink annihilate, and for c < 1 there is no kink or antikink, as φ is nowhere zero.
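Taking eq.(1.5) in the form dφ/dx = −tanh(x)(1 − φ^2) and the family (1.8) as φ(x) = (c − cosh^2 x)/(c + cosh^2 x), both our reading of the text, a short numerical sketch confirms that the family solves the impurity equation and that a kink-antikink pair (two zeros) only appears for c > 1:

```python
import numpy as np

def phi2(x, c):
    # Candidate one-modulus family (1.8) for the impurity chi(x) = tanh(x)
    ch2 = np.cosh(x) ** 2
    return (c - ch2) / (c + ch2)

def residual(x, c, h=1e-6):
    # Residual of the assumed impurity equation dphi/dx = -tanh(x) (1 - phi^2),
    # with the derivative taken by central finite differences
    dphi = (phi2(x + h, c) - phi2(x - h, c)) / (2 * h)
    return dphi + np.tanh(x) * (1 - phi2(x, c) ** 2)

x = np.linspace(-5, 5, 1001)
for c in (0.25, 50.0):
    assert np.max(np.abs(residual(x, c))) < 1e-6

# Count sign changes of phi: two zeros (kink-antikink) only when c > 1
n_zeros = lambda c: int(np.sum(np.diff(np.sign(phi2(x, c))) != 0))
assert n_zeros(50.0) == 2 and n_zeros(0.25) == 0
```

The zero count dropping from two to none as c passes below 1 matches the annihilation of the pair described above.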
The solution that remains we call a bump. For c small, it is a small positive or negative bump around φ = −1 of the form

φ(x) ≈ −1 + 2c/cosh^2 x ,    (1.9)

and for c = 0, it reduces to the vacuum φ(x) = −1. For c near −1 the bump is large and negative, with φ ≪ −1 near the origin. This set of solutions, over the whole allowed range of c, forms a good moduli space for kink-antikink annihilation (with centre of mass at the origin), better than what is obtained using gradient flow, because it interpolates from well separated kink and antikink, through the vacuum, to a large negative bump. See Fig. 1. All these configurations occur in kink-antikink dynamics. Another impurity that has been considered in [11] is of the bump shape (1.9), with c not necessarily small,

χ(x) = −1 + 2c/cosh^2 x .    (1.10)

For c = 0, one solution of eq.(1.5) is the standard kink centred at the origin, but for c small and non-zero, the kink becomes deformed by a variant of the shape mode [12]. For c > 1/2 the impurity (1.10) has two zeros. This allows the kink to be sufficiently deformed that it becomes a kink-antikink-kink configuration. Recall that the shape mode is a small, normalisable deformation of the kink with frequency of oscillation ω = √3 according to the linearised dynamical equation (1.2) for φ. The continuum of radiation modes have frequencies ω ≥ 2, and the kink's translation zero mode has frequency ω = 0. A kink distorted by both a zero mode of amplitude α and a shape mode of amplitude β has the form

φ(x) = tanh x + α/cosh^2 x + β sinh x/cosh^2 x .    (1.11)

For the impurity (1.10), with c small, there are solutions of eq.(1.5) close to the standard kink that are similar to (1.11). To see this, set φ(x) = tanh x + η(x) and work to linear order in both η and c. Eq.(1.5) then reduces to

dη/dx + 2 tanh x η = −2c/cosh^4 x ,    (1.12)

and this linear inhomogeneous equation has the general solution

η(x) = α/cosh^2 x − 2c sinh x/cosh^3 x ,    (1.13)

combining a zero mode of arbitrary amplitude with a modified shape mode where the power of cosh x in the denominator is 3 not 2. This is interesting.
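Assuming the linearized equation takes the form dη/dx + 2 tanh(x) η = −2c/cosh^4(x), our reconstruction of eq.(1.12), the claimed general solution (a zero mode plus a modified shape mode with cosh^3 in the denominator) can be checked numerically:

```python
import numpy as np

def eta(x, alpha, c):
    # Assumed general solution (1.13): zero mode alpha/cosh^2(x) plus the
    # modified shape mode -2c sinh(x)/cosh^3(x)
    return alpha / np.cosh(x)**2 - 2*c*np.sinh(x) / np.cosh(x)**3

def lin_residual(x, alpha, c, h=1e-6):
    # Residual of the assumed linearized equation
    # eta' + 2 tanh(x) eta = -2c / cosh^4(x)
    d = (eta(x + h, alpha, c) - eta(x - h, alpha, c)) / (2 * h)
    return d + 2*np.tanh(x)*eta(x, alpha, c) + 2*c/np.cosh(x)**4

x = np.linspace(-6, 6, 801)
assert np.max(np.abs(lin_residual(x, 0.3, 0.1))) < 1e-6
```

The residual vanishes for any amplitudes α and c, consistent with α being a free modulus while the modified shape mode amplitude is fixed by the impurity strength c.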
The shape mode usually arises through oscillations of the kink, but here a variant arises independently through the effect of a small-amplitude bump impurity which itself arises (approximately) as a solution of the kink equation with kink impurity (1.5). This hints at an exact iterative scheme that could capture more of the degrees of freedom needed to model kink-antikink dynamics using a finite-dimensional moduli space. It has long been recognised that an effective model for kink-antikink dynamics should allow not only for the kink-antikink separation, but also for the shapes of the kink and antikink to be deformed [3,5,6,13]. The shape mode also plays a role in (symmetric) kink-antikink-kink dynamics [14]. A kink-antikink-kink configuration can annihilate into a single kink, emitting radiation, and the approach towards annihilation is approximately tangent to the shape mode of the surviving kink. All this suggests that useful moduli spaces of multiple kink-antikink configurations can be found as exact solutions of an iterated kink equation with impurity. We describe this next.

Iterated kinks

Our proposed iterated kink equation is

dφ_n/dx = −φ_{n−1}(x)(1 − φ_n^2) ,  n ≥ 1 ,    (2.1)

starting with φ_0(x) = −1. We impose the boundary condition φ_n → −1 as x → −∞, for all n, and also require that φ_n has no singularities. This allows the vacuum solution φ_n(x) = −1 for any n, but excludes φ_n(x) = 1. The boundary condition appears to be consistent, by the following inductive argument. Obviously φ_0 satisfies the boundary condition, and linearisation of eq.(2.1) about φ_n = −1 shows that if φ_{n−1} satisfies the boundary condition, then φ_n(x) ∼ −1 + μe^{2x} for x ≪ 0 and some constant μ, and hence φ_n satisfies the boundary condition. The iteration can go on indefinitely, introducing one extra modulus each time. On the other hand, for each n there is always the vacuum solution φ_n(x) = −1, whatever the form of φ_{n−1}, and one can iterate this repeatedly and get the vacuum for all larger n.
The iteration has then effectively stopped at the (n − 1)th step. Iterating the argument in section 1 concerning the attractive and repulsive natures of φ = −1 and φ = 1, we deduce that for generic solutions of eq.(2.1), φ_n → 1 (−1) as x → ∞ for n odd (even). Exceptionally, the sign may be reversed if one or more fields φ_k in the solution sequence is the vacuum, φ_k(x) = −1. Equation (2.1) for φ_n is simply the kink equation (1.5) with impurity φ_{n−1}, and as each equation in the sequence is first order, its solution has one constant of integration. Iterating, and allowing these constants to be free, we may interpret φ_n as having n moduli. The arguments in section 1 concerning zeros of φ imply that φ_n has at most n zeros, and if it has the maximal number, it is interpreted as a solution with n kinks and antikinks, whose locations are a choice for the moduli. In this case, the iterated kink equation adds one new kink or antikink to the solution at each step. This is reminiscent of a Bäcklund transformation in sine-Gordon theory, although the details seem quite different. The first few iterates are field configurations we have previously discussed. φ_1 obeys the standard kink equation (1.3), having solution φ_1(x) = tanh(x − a) with arbitrary centre a. Notice that the equation also has the solutions φ_1(x) = −1 and φ_1(x) = coth(x − a) satisfying the boundary condition, but the latter is excluded because it is singular at x = a. For the second iteration, let us take φ_1 to be the kink centred at the origin, as the effect of a translation is rather trivial. Equation (2.1) for φ_2 is the same as equation (1.5) with impurity tanh x, having the solutions (1.8) illustrated in Fig. 1. For all x and c, φ_2(x) < 1. As before, the solutions include kink-antikink pairs, and also positive and negative bumps on the background of the vacuum φ_2(x) = −1. Also acceptable at the second iteration is for the impurity to be the vacuum, φ_1(x) = −1.
This gives solutions for φ_2 that are either simple kinks or again the vacuum. The family of solutions φ_2 therefore incorporates all acceptable solutions in the φ_1 family, including the starting, vacuum solution φ_0. An interpretation is that the family of generic φ_2 kink-antikink solutions is completed by sending the antikink to infinity, and then both the kink and antikink to infinity. The third iteration is algebraically more complicated. We need to solve eq.(2.1) for φ_3 with impurity φ_2 given by eq.(1.8). The explicit solution is given in section 3. The three moduli of the solution are the constant of integration x_3, the parameter c in φ_2, and the centre of the original kink φ_1. Particularly interesting are the solutions with the reflection symmetry φ_3(−x) = −φ_3(x), which arise when φ_1 is a kink at the origin, φ_2 has arbitrary parameter c > −1, and the constant of integration is chosen to preserve the symmetry. These solutions are shown in Fig. 2. Use of their 1-dimensional moduli space could resolve some difficulties in modelling kink-antikink-kink dynamics that arose in ref. [14]. Note the appearance of a shape deformation when c is close to zero, as we anticipated in the approximate solutions (1.13), and the occurrence of kink-antikink-kink solutions for c > 1. Fig. 3 shows a class of solutions φ_3 without reflection symmetry, with fixed c = 10^5 and various constants of integration x_3 (see eqs.(3.10) and (3.11)). We have not systematically attempted a fourth iteration but can make some general observations. A class of solutions φ_4 consists of kink-antikink-kink-antikink configurations. If these are well separated we can denote their locations, where φ_4(x) = 0, by a_1, a_2, a_3, a_4. The kink-antikink pair at a_1 and a_2 arises from a kink impurity at their midpoint (a_1 + a_2)/2. Similarly the antikink-kink pair at a_2 and a_3 arises from an antikink impurity at (a_2 + a_3)/2, and so on.
So φ_3 is a solution with kink, antikink and kink locations (a_1 + a_2)/2, (a_2 + a_3)/2, (a_3 + a_4)/2. In turn, φ_3 arises from a kink-antikink solution φ_2 with kink and antikink locations (a_1 + 2a_2 + a_3)/4, (a_2 + 2a_3 + a_4)/4, and finally φ_2 arises from a single kink impurity φ_1 centred at (a_1 + 3a_2 + 3a_3 + a_4)/8. Not all solutions φ_4 are well separated kink-antikink-kink-antikink configurations. Some such solutions, and some alternative types of solution involving bumps, are shown in Fig. 4. There could also be an interesting class of solutions φ_6 with two moduli. These would be configurations with a reflection symmetry, where a kink on the left is deformed, and an antikink on the right is similarly deformed. The moduli space could be similar to that proposed in [3] and further discussed in [6,13].

Space-deformed kinks

The equation for a kink with impurity (1.5) can be formally integrated [8], and this solution method gives considerable geometrical insight. The method can be applied iteratively to solve the entire set of equations (2.1), but the result involves multiple integrations, and appears algebraically intractable. Recall that the right hand side of (1.5) vanishes for φ = ±1, so solutions cannot cross these values. A solution φ(x) that approaches −1 as x → −∞ is either (i) trapped between −1 and 1, or (ii) is everywhere less than −1. We ignore here the vacuum solution φ(x) = −1. Let us first rewrite eq.(1.5) as

dφ/(1 − φ^2) = −χ(x) dx .    (3.1)

In case (i), the solution is

φ(x) = tanh( −∫_{x_0}^{x} χ(x′) dx′ ) ,    (3.2)

where the lower limit of the integral provides a constant of integration. In case (ii), the solution is

φ(x) = coth( x − ∫_{−∞}^{x} (χ(x′) + 1) dx′ − a ) ,    (3.3)

with a arbitrary. (Recall that we are assuming that the last integral converges.) We refer to

y(x) = x − ∫_{−∞}^{x} (χ(x′) + 1) dx′    (3.4)

as a deformed spatial coordinate. The interpretation of the solution tanh(y(x) − a) depends on the behaviour of y as x increases. If χ is everywhere negative, which means that χ → −1 as x → ∞, then y increases to ∞ monotonically with x, and the solution is a spatially deformed single kink.
If χ < −1 everywhere, then y increases more rapidly than x. The effect is to produce a solution φ(x) that is a steepened kink. If χ crosses zero at x = X, then dy/dx changes sign and part of the profile of φ is reflected about X. Equivalently, there is a spatial fold at X. If χ crosses zero again, there is another reflection, or fold.

When χ → −1 as x → ∞, we can define the overall stretching or compression of the deformed kink,

  s = ∫ (1 + χ(x′)) dx′,

integrated over the whole line. The asymptotic form of φ(x) is tanh(x − a) for x ≪ 0 and tanh(x − a − s) for x ≫ 0. The kink has been stretched by distance s if s > 0 and compressed by |s| if s < 0. Stretching by more than a small distance can introduce kink-antikink pairs.

An analogy for the spatial folding in the relation between y and x is to imagine walking the length of a corridor, where it is uncomfortable to walk very slowly, but comfortable to sit for a while. One can walk the length in one go (a kink), and sit the rest of the time, or walk backwards and forwards a few times (kinks and antikinks), sitting less. With more time available one can walk more often backwards and forwards. If the time available is short, one must walk quickly (a steepened kink).

All this analysis applies to the iterated kink equation. Consider a generic sequence of solutions φ_n(x). For n odd, φ_n must be of the tanh type, to avoid singularities, but for n even, φ_n can be of tanh or coth type. For n odd, φ_n is a spatially deformed kink, whose deformed spatial coordinate is

  y_n(x) = −∫_{x_n}^{x} φ_{n−1}(x′) dx′,

and whose overall stretching/compression is

  s_n = ∫ (1 + φ_{n−1}(x′)) dx′;

x_n is the arbitrary constant of integration.

An explicit solution for φ_3 can be found using this approach. Let us assume that φ_1 is a kink centred at the origin; φ_2 is then given by eq.(1.8). Using the deformed spatial coordinate y_3 given by the integral (3.8), we obtain eq.(3.10) for c ≥ 0, and eq.(3.11) for −1 < c ≤ 0. The solutions are shown in Fig. 2 and Fig. 3. Specifically, in Fig. 2 we plot φ_3 for x_3 = 0. These are the solutions with reflection symmetry.
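The stretching can also be checked numerically, again under the assumed first-order form dφ/dx = −χ(x)(1 − φ²) and taking s to be the integral of 1 + χ over the whole line (a reconstruction from the asymptotics quoted above); the impurity and the constant a are sample choices.

```python
import math

# Sample impurity with chi -> -1 at both ends (assumed equation form as above).
chi = lambda x: -1.0 + 2.0 / math.cosh(x) ** 2

# Stretching s = integral of (1 + chi) = integral of 2 sech^2 = 4,
# by composite trapezoidal quadrature on [-20, 20].
n, lo, hi = 4000, -20.0, 20.0
h = (hi - lo) / n
s = 0.5 * ((1 + chi(lo)) + (1 + chi(hi))) + sum(1 + chi(lo + k * h) for k in range(1, n))
s *= h
assert abs(s - 4.0) < 1e-9

# The deformed kink phi = tanh(y(x) - a), y = x - 2 tanh x, matches
# tanh(x - a') far on the left and tanh(x - a' - s) far on the right.
a = 0.3
y = lambda x: x - 2.0 * math.tanh(x)
phi = lambda x: math.tanh(y(x) - a)
aL = a - 2.0                         # effective centre seen from the left
assert abs(phi(-15) - math.tanh(-15 - aL)) < 1e-9
assert abs(phi(15) - math.tanh(15 - aL - s)) < 1e-9
```

Here s > 0, so the kink is stretched; an everywhere-sub-(−1) impurity would make the integrand negative and compress it instead.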
It is clear that the modulus c, which measures the strength of φ_2, controls the emergence of an antikink-kink pair. For large c such a pair is easily visible in φ_2, and the whole solution φ_3 represents a kink-antikink-kink configuration. When c approaches zero, φ_2 tends to the constant −1, which leads to a single kink for φ_3. This single kink solution becomes steeper and steeper as c → −1.

In Fig. 3 we show the impact of x_3 on φ_3 for fixed c. We choose c = 10^5 to better visualise the observed behaviour. Here, φ_2 represents a well separated kink-antikink pair. For large x_3 the solution describes a single kink monotonically interpolating between the vacua. The impact of φ_2 is negligible, except on part of the kink tail. When x_3 approaches zero, the single kink interacts strongly with φ_2 and the kink-antikink pair hidden in φ_2 has a pronounced effect. Finally, for large negative x_3, the single kink reappears but on the opposite side of the origin. This variation with x_3 represents a flow on the moduli space, where an incoming kink creates an antikink-kink pair (due to the interaction with φ_2), and later on annihilates this pair leaving an outgoing kink.

φ^6 kink as fixed point

The iterated kink equation has a curious fixed point. We find this by setting φ_n = φ_{n−1}. Then eq.(2.1) becomes the φ^6 kink equation

  dφ/dx = −φ(1 − φ²).

The generic non-singular solutions, satisfying the boundary condition φ → −1 as x → −∞, are of the form

  φ(x) = −(1 + c e^{2x})^{−1/2},

with c arbitrary. These all have the property φ → 0 as x → ∞. We have not constructed an iterated sequence of solutions φ_n with limiting form (4.2). The approach to the limit cannot be uniform in x.

There is also an interesting 2-cycle of the iteration, a solution of the pair of equations

  dφ/dx = −ψ(1 − φ²),  dψ/dx = −φ(1 − ψ²).

We assume that φ → −1 and ψ → −1 as x → −∞. Setting ψ = φΩ, we find that these equations reduce to

  dφ/dx = −Ω φ(1 − φ²),  dΩ/dx = −(1 − Ω²).

The equation (4.6) for Ω is the usual first order equation for a φ^4 antikink.
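The reduction to the Ω equation can be verified as a pointwise algebraic identity. The forms of eqs.(4.5) and (4.6) used below, dφ/dx = −Ωφ(1 − φ²) and dΩ/dx = −(1 − Ω²), are reconstructions consistent with the surrounding text; the check confirms that with these substitutions, ψ = φΩ automatically satisfies the second 2-cycle equation dψ/dx = −φ(1 − ψ²).

```python
import random

random.seed(0)
for _ in range(100):
    p = random.uniform(-2, 2)            # value of phi at some point
    O = random.uniform(-2, 2)            # value of Omega at the same point
    dp = -O * p * (1 - p ** 2)           # eq.(4.5), reconstructed form
    dO = -(1 - O ** 2)                   # eq.(4.6), the phi^4 antikink equation
    dpsi = dp * O + p * dO               # product rule for psi = phi * Omega
    # dpsi must equal -phi (1 - psi^2), the second 2-cycle equation.
    assert abs(dpsi + p * (1 - (p * O) ** 2)) < 1e-9
```

Because the identity holds for arbitrary values of φ and Ω, the reduction is purely algebraic and independent of boundary conditions.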
It has trivial solutions Ω(x) = ±1, and non-trivial solutions Ω(x) = −tanh x and Ω(x) = −coth x, or translates of these. If Ω(x) = 1 then we recover the fixed point solution (4.2), the φ^6 kink. There is no solution with Ω(x) = −1 satisfying the boundary conditions. When Ω(x) = −tanh x, then ψ(x) = −φ(x) tanh x, so φ(x) = −ψ(x) coth x. Therefore, multiplication by −tanh x and −coth x automatically alternate during iteration of the 2-cycle, so we need only consider the case Ω(x) = −tanh x.

The remaining equation (4.5) can be expressed as

  dφ/dy = φ(1 − φ²).

This is the standard equation for a φ^6 kink, but in terms of a deformed spatial coordinate y, defined by dy = tanh x dx. Integrating, we find that φ(y) = ±(1 + 2e^{−2(y−c)})^{−1/2}, where y = log cosh x. Choosing the appropriate sign, and rearranging, we obtain the solution

  φ(x) = −(1 + a sech²x)^{−1/2},

with a > −1, and this is paired in the 2-cycle with

  ψ(x) = sinh x (b + sinh²x)^{−1/2},

where b = a + 1. See Fig. 5.

For large positive a the field φ describes a well separated kink-antikink pair of φ^6 theory (interpolating between −1 and 0). When a decreases the kink and antikink approach each other and finally, for a = 0, annihilate to the vacuum φ = −1. For negative a the field φ forms a negative bump whose strength becomes arbitrarily large as a approaches −1. Simultaneously, the field ψ represents a kink (interpolating between −1 and 0) and a second kink (interpolating between 0 and 1) of φ^6 theory. They separate completely as a → ∞. When a = 0 these kinks merge into the kink of φ^4 theory, and this becomes steeper and steeper as a → −1.

Energy function for iterated kinks

Here we present an energy function whose stationary points include the solutions of the iterated kink sequence of equations (2.1). Let us start with eq.(1.5) for a kink with given impurity χ, rewritten as

  dφ/dx + χ(x)(1 − φ²) = 0.

We shall suppose that the zeros of χ (if any) are a discrete set of points, and require that dφ/dx is zero at these points. Recall that φ(−∞) = −1, and φ(∞) = ±1.
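The quoted profile φ(y) = ±(1 + 2e^{−2(y−c)})^{−1/2} can be checked against the first-order φ^6 equation, here taken to be dφ/dy = φ(1 − φ²) (the sign convention is inferred from the boundary behaviour described in the text), along with the relation dy = tanh x dx for y = log cosh x.

```python
import math

# phi(y) = (1 + 2 e^{-2(y-c)})^{-1/2} should satisfy dphi/dy = phi (1 - phi^2),
# the first-order phi^6 kink equation in the deformed coordinate
# (equation form inferred from the surrounding text).
c = 0.7                                   # sample modulus
phi = lambda yv: (1 + 2 * math.exp(-2 * (yv - c))) ** -0.5

h, worst = 1e-5, 0.0
for i in range(-500, 501):
    yv = i * 0.01
    d = (phi(yv + h) - phi(yv - h)) / (2 * h)   # central finite difference
    worst = max(worst, abs(d - phi(yv) * (1 - phi(yv) ** 2)))
assert worst < 1e-8

# The deformed coordinate y = log cosh x indeed satisfies dy = tanh x dx.
x0 = 1.3
dy = (math.log(math.cosh(x0 + h)) - math.log(math.cosh(x0 - h))) / (2 * h)
assert abs(dy - math.tanh(x0)) < 1e-8
```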
Consider the energy

  E_χ = ∫ [ −(1/2χ)(dφ/dx)² − (χ/2)(1 − φ²)² ] dx.

Formally, this is the standard static energy of φ^4 theory in terms of the deformed spatial coordinate y, because dy = −χ(x) dx, except that the endpoints of the y-integration may be non-standard. This energy differs from previous self-dual impurity models which have eq.(1.5) as the corresponding Bogomolny equation [8]. In the usual way, we can complete the square in the integrand, and obtain

  E_χ = ∫ −(1/2χ)[ dφ/dx + χ(1 − φ²) ]² dx + [ φ − φ³/3 ] evaluated between −∞ and ∞.

The last term depends only on the field topology, the boundary data of φ. The energy E_χ is stationary for solutions of eq.(5.1), because a change in φ of order ε changes E_χ at order ε², though it is not guaranteed to be a minimum unless χ is everywhere negative. The energy value is E_χ = 4/3 if φ(∞) = 1 and E_χ = 0 if φ(∞) = −1; it can be zero because the energy density is negative in any region where χ is positive.

It is straightforward to extend the energy function (5.2) to deal with iterated kinks. Define

  E = Σ_n µ_n ∫ [ −(1/2φ_{n−1})(dφ_n/dx)² − (φ_{n−1}/2)(1 − φ_n²)² ] dx,

where µ_n are fairly arbitrary positive numbers whose sum is finite. We require that φ_n has zero derivative at all locations where φ_{n−1} is zero. Completing the square in each term of the sum, we see that E is stationary when the sequence of iterated kink equations is satisfied.

Summary

We have introduced a new, iterated equation for kinks in φ^4 theory. This was motivated by examples of how impurities can affect a kink. Each equation in the iterated sequence is a first order, static ODE, whose solution includes one new modulus, the constant of integration. In the iterated scheme, the first iteration generates a kink from the vacuum. At the second iteration, this kink is an impurity which acts like a mirror. It generates a kink-antikink configuration, or a positive or negative bump solution around the vacuum, depending on the value of the constant of integration. The third iteration can produce a kink deformed by a variant of the kink's shape mode, and also kink-antikink-kink configurations.
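For the trivial impurity χ = −1 the energy E_χ reduces to the standard φ^4 static energy, and the quoted value 4/3 for a kink with φ(∞) = 1 can be reproduced by quadrature (the normalisation of E_χ is as reconstructed above):

```python
import math

# For chi = -1, E_chi reduces to the standard phi^4 static energy;
# for the kink phi = tanh x it should equal 4/3.
def density(x):
    phi = math.tanh(x)
    dphi = 1 - phi ** 2                      # exact derivative of tanh
    return 0.5 * dphi ** 2 + 0.5 * (1 - phi ** 2) ** 2

# Composite trapezoidal rule on [-12, 12]; the integrand decays like
# exp(-4|x|), so truncation error is negligible.
n, a, b = 4000, -12.0, 12.0
step = (b - a) / n
E = 0.5 * (density(a) + density(b)) + sum(density(a + k * step) for k in range(1, n))
E *= step
assert abs(E - 4.0 / 3.0) < 1e-9
```

The same quadrature with φ(∞) = −1 boundary data (e.g. a solution satisfying the Bogomolny equation with a bump impurity) would give E_χ = 0, the other topological value quoted in the text.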
More generally, the nth iterate generates an n-dimensional moduli space of solutions which we propose could be useful for modelling the dynamics of n kinks and antikinks. The bump-like configurations capture the type of fields that occur dynamically when kinks and antikinks annihilate, and that are missed in some existing collective coordinate schemes. It would be interesting to use the standard φ^4 theory Lagrangian to calculate the metric (equivalently, the kinetic energy for time-varying moduli) and potential energy on these moduli spaces, and to study in detail the classical and quantized dynamics of kinks using these novel collective coordinates.

The linear equation du_n/dx = −u_{n−1} is also the linearisation of eq.(2.1) for φ_n ≈ 0. We fix u_{−1}(x) = 0. Then, generically, u_0 is a constant, u_1 is linear in x, u_2 is quadratic, and so on. u_n(x) is a polynomial of degree n, so it has at most n real zeros. A zero can be regarded as analogous to the location of a kink or antikink, depending on whether du_n/dx is positive or negative at the zero. Exceptionally, u_n is a polynomial of degree n − k if the first non-zero function in the sequence is u_k, a non-zero constant.
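The degree count for the sequence u_n can be illustrated with a short iteration. The linearised equation is not shown explicitly in this excerpt; the sketch below assumes du_n/dx = −u_{n−1} (the overall sign does not affect the degrees), representing each polynomial by its coefficient list.

```python
# Assumed linearised iteration du_n/dx = -u_{n-1}, with u_{-1} = 0 and
# u_0 a non-zero constant; each step integrates and adds a constant.
def integrate_neg(poly, const):
    """Integrate -poly (coefficients, lowest degree first) and add a
    constant of integration."""
    return [const] + [-ck / (k + 1) for k, ck in enumerate(poly)]

u = [1.0]                                   # u_0 = 1, a non-zero constant
for n in range(1, 7):
    u = integrate_neg(u, const=1.0)         # arbitrary constants of integration
    # u_n is a polynomial of exact degree n (leading coefficient +-1/n!).
    assert len(u) == n + 1 and u[-1] != 0.0
```

Choosing the first few constants of integration to be zero reproduces the exceptional case: the degree drops to n − k when u_k is the first non-zero function.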
Cataract Surgery with a New Fluidics Control Phacoemulsification System in Nanophthalmic Eyes

Chang et al.

Purpose: To report visual outcomes and complications after cataract surgery in nanophthalmic eyes with a phacoemulsification system using the active fluidics control strategy.

Methods: This is a retrospective case series. All eyes with an axial length of less than 20 mm that underwent cataract surgery or refractive lens exchange using the Centurion Vision System (Alcon Laboratories Inc.) in Hong Kong Sanatorium and Hospital were evaluated. The visual acuity and intraoperative and postoperative complications were reported. Prior approval from the Hospital Research Committee has been granted.

Results: Five eyes of 3 patients were included. The mean follow-up period was 10.2 ± 5.3 months (range, 4–18). Two eyes (40%) had a one-line loss of corrected distance visual acuity. No uveal effusion or posterior capsular tear developed. An optic crack and haptic breakage in the intraocular lens developed in 1 eye (20%) and 2 eyes (40%), respectively. Additional surgeries to treat high postoperative intraocular pressure were required in 1 eye (20%).

Conclusion: The use of a new phacoemulsification system, which actively monitors and maintains the intraoperative pressure, facilitated anterior chamber stability during cataract surgery in nanophthalmic eyes. This minimized the risk of major complications related to unstable anterior chambers, such as uveal effusion and posterior capsular tear. Development of intraoperative crack/breakage in a high-power intraocular lens was common.

The Centurion Vision System (Alcon Laboratories Inc.) is a new phacoemulsification system that actively monitors and maintains a constant pressure in the eye during cataract surgery.
A recent laboratory study reported that this system effectively reduces postocclusion surge [10]. In the current study, we evaluated the safety of performing cataract surgery in nanophthalmic eyes with this system, followed by implantation of a single high-power intraocular lens (IOL).

Methods

This retrospective, observational case series included nanophthalmic eyes, defined as having an AL of <20 mm [4,7] without ocular malformation [2,6], that underwent cataract surgery or refractive lens exchange with the Centurion Vision System and implantation of the Aspira-aA IOL (HumanOptics AG) between July 2013 and December 2014 at the Hong Kong Sanatorium and Hospital. Ethical approval was obtained from the hospital. All eyes underwent a comprehensive eye examination preoperatively. The keratometry was obtained with a manual keratometer. The IOLMaster (Carl Zeiss Meditec AG) was used to acquire the AL and AC depth for IOL power calculation using the Hoffer-Q formula (personalized A-constant, 118.7). All patients were informed about the increased risk of refractive surprise and complications due to the small eyes.

The same surgeon (J.S.M.C.) performed all surgeries. Preoperatively, intravenous mannitol was administered to lower the intraocular pressure (IOP) in 3 eyes (both eyes, patient 1; and right eye, patient 3). A 2.2-/2.75-mm temporal/superior clear corneal incision was created with a keratome. Healon GV Ophthalmic Viscosurgical Device (OVD) (Abbott Medical Optics Inc.) was injected into the AC. A continuous curvilinear capsulorhexis was created. After hydrodissection and nucleus splitting, coaxial phacoemulsification was performed using the Centurion Vision System. The Active Fluidics strategy was selected to improve the stability of the surgical environment. A target IOP of 60 mm Hg, equivalent to a bottle height of 72 cm of water, was chosen to balance patient comfort and irrigation pressure.
An irrigation factor of 1.2 (an increase of the pressure on the balanced salt solution [BSS] bag to 20% more than the intended IOP) was used intraoperatively to compensate for the increased posterior pressure in the nanophthalmic eyes. Irrigation and aspiration of the residual cortex and posterior capsular (PC) polishing were performed with a coaxial technique using the same phacoemulsification system. The IOL was loaded and injected into the capsular bag. At the end of surgery, a surgical peripheral iridotomy was performed in 2 eyes (left eye, patient 1; and right eye, patient 3). All incisions were hydrated and intracameral moxifloxacin (Vigamox; Alcon Laboratories Inc.) was administered.

The postoperative medications for the operated eye included topical neomycin and polymyxin B sulfates and dexamethasone ophthalmic ointment (Maxitrol; Alcon Laboratories Inc.), 0.1% nepafenac ophthalmic suspension (Nevanac; Alcon Laboratories Inc.), 1% prednisolone acetate ophthalmic suspension (Econopred Plus; Alcon Laboratories Inc.), and 0.5% moxifloxacin hydrochloride ophthalmic solution (Vigamox; Alcon Laboratories Inc.). Postoperatively, prophylactic pressure-lowering topical medication was not used.

In 1 patient (patient 1), the small pupil of the left eye was stretched first with injection of OVD and then manually with a Sinskey Hook (Bausch & Lomb) to release the posterior synechiae before the continuous curvilinear capsulorhexis was created. Five 1-mm self-sealing paracenteses were created, followed by insertion of 5 iris retractors (Alcon Laboratories Inc.) to mechanically dilate the pupil. The leakage compensation was set at 10 ml/min to allow for extra loss of fluid at the 5 leaking incisions intraoperatively. At the end of surgery, the iris retractors were withdrawn. In the right eye of the same patient, a Malyugin ring (Microsurgical Technology) was implanted first to mechanically dilate the small pupil.
Manipulation of the Malyugin ring was difficult due to the shallow AC and bulging anterior lens capsule. Trypan blue stain was applied to aid visualization of the anterior lens capsule while implanting the Malyugin ring and creating the continuous curvilinear capsulorhexis. The Malyugin ring was withdrawn at the end of surgery.

Results

Table 1 shows the demographics of the 3 patients (5 eyes) and the preoperative ocular findings. Table 2 shows the postoperative uncorrected and corrected distance visual acuities. Two eyes (40%) had a one-line loss in corrected distance visual acuity. Table 2 shows the postoperative manifest refractions. Table 2 also shows the IOL powers and the predictability of the postoperative refraction using different IOL formulas. Back-calculation showed that 3 (60%), 2 (40%), and 0 eyes (0%) achieved a manifest refraction spherical equivalent within 0.50 D of the target refraction using the Hoffer-Q, Haigis, and SRK/T formulas, respectively.

Intraoperative Complications

The AC remained stable, and hardly any posterior pressure was experienced in any eye intraoperatively. No uveal effusion or PC tear developed. An optic crack was found in 1 eye (left eye, patient 1) (20%) when unfolding the IOL in the eye. No intervention was required because the crack occurred at the periphery of the IOL optic and did not affect the implantation. The patient was asymptomatic postoperatively. Haptic breakage at the junction between the optic and haptic was found in 2 eyes (right eyes of patients 1 and 3) (40%) when unfolding the IOL in the eye. In the former case, the haptic broke partially but did not come off the IOL. However, the IOL (+55 D) was too thick to be folded and cut for removal, and therefore it was explanted through an enlarged 6-mm incision and a new IOL was implanted in the capsular bag. In this eye, a capsular tension ring (type-14C; Morcher GmbH) was implanted in the capsular bag after IOL implantation.
During incision suturing, iris prolapse occurred and the iris was pushed back into the AC by injecting OVD. In the latter case, one haptic was missing and was found in the cartridge injection system. The optic of the IOL (+41 D) was cut partially, and the IOL was folded and removed through an enlarged 3-mm incision. No suturing was required.

Table 2 shows the postoperative complications. No uveal effusion or retinal detachment developed postoperatively in any eye. The IOP values in all eyes were 21 mm Hg or lower at all follow-up visits after 1 month. All ACs and IOLs were unremarkable at the last visit.

Postoperative Complications

In 1 eye (right eye, patient 1) (20%), additional surgeries were performed because of an IOP of 27.3 mm Hg at postoperative day 1. After fluid release from the AC, the IOP remained high (24 mm Hg). Since the AC was very shallow, aqueous misdirection was suspected. Intravenous mannitol and topical antiglaucoma medications were administered, but this was unsuccessful. Therefore, vitreous tapping with a 30-gauge needle through the pars plana with injection of air bubbles into the AC was performed to deepen the AC. At postoperative day 3, iris repositioning and re-suturing of incisions were performed. Diffuse macular edema was present in both eyes of this patient at the last visit. The IOP was 10 mm Hg in both eyes.

Discussion

Cataract surgery in nanophthalmic eyes differs from that in normal eyes because of the decreased intraocular space, increased risk of intraoperative complications, and decreased accuracy in IOL power prediction. Implantation of a single high-power IOL in the capsular bag [2][3][4][5][6][7][8][9]11] and piggybacking IOLs in the capsular bag [12] or in both the capsular bag and ciliary sulcus [1,12] have been commonly attempted in small eyes. In the current study, we implanted a single high-power IOL in the capsular bag in nanophthalmic eyes with an AL of <20 mm without major complications.
The current patients had a few characteristics that increased the difficulties during cataract surgery. First, the very shallow AC (range, 1.94-2.27 mm) increased the risk of damaging the corneal endothelium intraoperatively. Healon GV, a cohesive, highly viscous OVD, was therefore injected to create and maintain the AC during capsulorhexis and IOL insertion. A recent meta-analysis [13] that compared the protective effects of different OVDs on the corneal endothelium intraoperatively reported that viscoadaptives and a soft-shell technique performed the best; however, superviscous cohesive OVDs, e.g., Healon GV, are still superior to dispersive OVDs.

Second, the pupils of patient 1 were small and could not be dilated because of posterior synechiae. A Malyugin ring can be used for mechanical pupillary dilation during cataract surgery in eyes with a small pupil and deep AC. However, during the first-eye surgery, the injected Healon GV was forced out of the eye by the high posterior pressure [7,8]. There was insufficient space to place the Malyugin ring in the eye. The excessive anterior iris bulging also made the peripheral AC insufficiently deep for implantation. To avoid touching the corneal endothelium in eyes with a shallow AC, more downward force than usual had to be applied to the Malyugin ring to capture the iris, but this can increase the risk of anterior capsular tear and subsequent development of a radial tear. We instead used iris retractors, but this resulted in 5 partially leaking wounds. This potentially made the already shallow AC even more difficult to maintain during phacoemulsification. The Centurion Vision System that we used allows surgeons to customize the estimated leakage compensation for different incision sizes and numbers. In the current case, we set it at 10 ml/min to compensate for the extra fluid loss from the 5 paracenteses.
In the second-eye surgery of the same patient, although the AC was shallow, it had less posterior pressure and a Malyugin ring was used.

Maintaining a stable IOP intraoperatively is critical to prevent major complications such as uveal effusion [9] and PC tear [7] in nanophthalmic eyes. The reported rates of uveal effusion and PC tear in nanophthalmic eyes were up to 9.3% [2, 3, 5, 7, 8] and 11.7% [2,3,5,7,8], respectively. Intravenous mannitol is often administered preoperatively to lower the vitreous pressure prophylactically and thereby decrease the posterior pressure [1,3,[5][6][7]9]. When the phacoemulsification tip becomes occluded by a nuclear fragment, the vacuum increases. Once the occlusion breaks, the surge can lead to a sudden decrease in IOP and ocular collapse. Maintaining a higher bottle height increases the infusion pressure, therefore making the AC more stable [7]. During surgeries performed in small eyes, we used to employ the Infiniti Vision System (Alcon Laboratories Inc.) and raise the infusion bottle to the highest level (110 cm), corresponding to an IOP of about 80 mm Hg, to control surge. The downside is that patients are less comfortable. The AC is maintained passively and the response to pressure change is slow. The IOP fluctuates considerably, and the frequent drops in IOP increase the risk of uveal effusion and PC tear. In the current patients with shallow ACs, the Centurion responded to AC changes more rapidly. The Active Fluidics strategy of the system actively monitors pressure changes with the built-in pressure sensors in the phacoemulsification cassette and responds by compressing/decompressing the BSS bag with 2 metallic plates. Hence, the system compensates for any sudden IOP changes in a timely manner (100 ms) (e-mail communication with Alcon Laboratories Inc., 2014), minimizes the surge volume, and ensures that the IOP returns to preocclusion status.
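The gravity-fed figures quoted above (a 110 cm bottle corresponding to an IOP of about 80 mm Hg) follow from the hydrostatic conversion 1 mm Hg ≈ 1.36 cm of water; the function name below is illustrative, not part of any device software.

```python
# Hydrostatic conversion between gravity-bottle height and infusion
# pressure: 1 mm Hg supports about 1.36 cm of water column.
CM_H2O_PER_MMHG = 1.36

def bottle_height_to_mmhg(height_cm: float) -> float:
    """Infusion pressure (mm Hg) delivered by a gravity bottle raised
    height_cm above the eye, ignoring tubing resistance and flow."""
    return height_cm / CM_H2O_PER_MMHG

# The maximum Infiniti bottle height of 110 cm corresponds to ~81 mm Hg,
# consistent with the "about 80 mm Hg" stated in the text.
assert abs(bottle_height_to_mmhg(110) - 80.9) < 0.1
```

An active-fluidics system sidesteps this conversion entirely: the surgeon sets a target IOP directly and the machine pressurizes the BSS bag to deliver it.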
Therefore, a lower target pressure of 60 mm Hg, which is predetermined by the surgeon, can be maintained to increase patient comfort and decrease the risk of postocclusion surge. A recent laboratory study [10] that compared the surge performance between the Infiniti Vision System (gravity-based fluidics) and the Centurion (Active Fluidics) found that the latter had less surge in the vacuum limit range of 200-600 mm Hg than the former.

The Active Fluidics strategy has an extra feature, the irrigation factor, in the infusion setting to increase the pressure on the BSS bag and maintain AC stability. Since the infusion resistance is inversely proportional to the size of the phacoemulsification sleeve and incision size, higher resistance leads to a greater pressure decrease when switching the aspiration pump from off to on, because the outflow is greater than the inflow. The irrigation factor can be adjusted intraoperatively until no fluctuation in the posterior capsule is seen. In the current cases, we used a value of 1.2 and found a very stable AC, and the posterior capsule did not move forward. With the use of all these features of this system, the ACs in the current study were much more stable using a target IOP of 50-60 mm Hg, and there was no evidence of posterior pressure at any time, despite the 5 leaking wounds at the sites of the iris retractors. This differs from our previous experience with the passive, gravity-based system. The current surgeries were essentially performed as if the eyes had an AL of 24 mm.

A complication associated with implanting a single high-power IOL is the higher incidence of IOL breakage. The current study had a much higher rate of breakage of the IOL haptics compared to previous studies (range, 1.0-8.3%) [2,3,6]. Another IOL had a crack at the optic edge that possibly occurred during IOL injection.
We postulated that this occurred because of the substantially thicker optic and the increased effort expended to push the IOL through the cartridge injection system. When a haptic breaks, it is not always possible to refold and cut the IOL to remove it. Incision enlargement may be required to remove the IOL, thus increasing the risk of uveal effusion and iris prolapse. We suggest using a thicker OVD, e.g., Healon GV or Healon 5 (Abbott Medical Optics Inc.), with slow, small, intermittent pushing of the IOL in the cartridge, allowing time for the bulky haptic and optic to adapt to the tight cartridge tunnel.

A single high-power IOL was implanted in all current eyes, which is advantageous over piggybacking IOLs in short eyes in that it prevents interlenticular opacification [1,12]. In extremely short eyes, the capsular bag may be too small to accommodate 2 IOLs [2]. However, placing 1 IOL in the capsular bag and the second IOL in the ciliary sulcus [1,12] can result in iris chafing, recurrent iritis, and pigment dispersion syndrome [14]. This was particularly important in 1 of the current patients with posterior synechiae, which may indicate previous iritic episodes.

Predicting the IOL power in nanophthalmic eyes is more difficult than in normal eyes. Although the assumption of proportionality between the AC depth and AL still holds in nanophthalmic eyes [1,3], when a high-power IOL is used, even a small axial malposition can lead to a substantial refractive error [8,11]. The Hoffer-Q formula has generally been recommended for IOL power calculations in short eyes to better predict the postoperative refraction [1]. Two studies [11,15] have reported that the performances of IOL formulas (Haigis, Hoffer-Q, Holladay-1, Holladay-2, and SRK/T) in short eyes were similar after optimization of the IOL constant.
Nevertheless, we found that the Hoffer-Q formula generally provided the least hyperopic error, except in patient 1, who had a very short AL of approximately 16 mm and an average keratometry of >50 D bilaterally. Since we did not have a sufficiently large sample size for IOL constant optimization in short eyes, we warned the patients about the uncertainty of predicting the postoperative refraction and explained that monovision (targeting -2 D in the first-eye surgery) might be helpful.

In conclusion, we report the results of cataract surgery in nanophthalmic eyes using the Active Fluidics technology of a new phacoemulsification system, which is designed to avoid the commonly reported unstable AC-related complications. One high-power IOL was implanted in the capsular bag, but there is an increased risk of intraoperative IOL cracks/breakage. Monovision can be offered to patients by targeting myopia in the first eye to avoid excessive postoperative hyperopia.
Vitamin B12 status in type 2 diabetic patients treated with metformin compared to those without metformin: a cross-sectional study in Tunisia

ABSTRACT

Introduction: Recent studies suggest that long-term metformin intake may lower plasma Vitamin B12 levels.

Objective: To assess Vitamin B12 status in a Tunisian population of type 2 diabetic (T2D) patients treated with metformin, and to study the association of their vitamin status with the dose and duration of metformin intake, as well as with various clinical and biological parameters.

Methods: This cross-sectional, comparative study included 200 T2D patients. Vitamin B12 was assayed in all patients, together with a neurological examination and a complete blood count.

Results: The mean Vitamin B12 level in our population was 398.5 ± 188.3 pg/ml. The mean Vitamin B12 level was significantly lower in patients treated with metformin (356.9 pg/ml versus 460.9 pg/ml, p<0.01). Vitamin B12 deficiency was significantly more frequent in the metformin group, as was Vitamin B12 insufficiency. Vitamin B12 status was significantly associated with the duration, daily dose and cumulative dose of metformin. Vitamin B12 deficiency was associated with anemia, macrocytosis and diabetic neuropathy. In multivariate analysis, Vitamin B12 deficiency in our population was associated with the duration of intake, the daily and cumulative doses of metformin, diabetic neuropathy, anemia and macrocytosis.

Conclusion: Our work showed an association of Vitamin B12 with the dose and duration of metformin intake in T2D patients, with hematological and neurological repercussions.
INTRODUCTION

Metformin, discovered in 1922, is one of the first antihyperglycemic molecules used in the treatment of type 2 diabetes (1). Several studies, such as the UKPDS, have shown the benefits of metformin on improving glycemic control and reducing the micro- and macrovascular complications of diabetes, with a low risk of hypoglycemic accidents (2). Consequently, metformin currently represents the first-line treatment for T2D in several international recommendations (2-4). In addition to its antidiabetic role, metformin has beneficial effects on improving the metabolic profile of patients, reducing cardiovascular morbidity and mortality from all causes (5-7). However, some recent studies have shown that long-term use of metformin by diabetic patients may decrease the absorption of Vitamin B12 and increase the risk of Vitamin B12 deficiency.

Unfortunately, there is a lack of studies in Tunisia on the prevalence of metformin-related Vitamin B12 deficiency in T2D individuals. Additionally, there are no guidelines addressing how often T2D patients treated with metformin should be screened for Vitamin B12 deficiency and, if appropriate, prescribed Vitamin B12 supplements. This study was done to assess the Vitamin B12 status of Tunisian T2D patients treated with metformin, compared to a control group, and to specify the correlation of Vitamin B12 deficiency with the dose and duration of metformin intake, as well as with various clinical and biological parameters.

METHODS

To meet these objectives, we conducted a cross-sectional, comparative and analytical study in our department from September 2019 to June 2020 among type 2 diabetes patients. Patients with severe renal impairment (chronic kidney disease stage 4 and above), hepatic impairment, untreated thyroid disease, myeloproliferative syndrome, iron deficiency anemia, gastric surgery, a vegetarian diet or vitamin supplementation were excluded from the study.
Finally, 200 patients were divided into two groups: a first group of 120 diabetic patients whose treatment had included metformin for at least 12 months, and a second group of 80 patients not treated with metformin. All patients signed an informed consent. The study was approved by the local Institute Ethical Committee.

For each patient, the following data were collected: age and sex, physical activity, smoking, alcoholism, the presence of other pathologies associated with diabetes, duration of diabetes, degenerative complications, and the antidiabetic treatment (for metformin: duration of treatment in years, daily dose in mg, and cumulative dose, defined as the daily dose (g) x duration (months)). We also collected clinical parameters: weight, height, body mass index, waist circumference, blood pressure, and evaluation of superficial sensitivity by monofilament testing. Clinical neuropathy was defined as a reduction or absence of light-touch sensation to the monofilament in either foot (< 8 of 10 applications detected). The DN4 (Douleur Neuropathique 4) score was used to define painful neuropathy.

Data entry and statistical analysis were performed using SPSS 20.0 software. Comparisons between qualitative variables were performed with Pearson's chi-square test or, when this test was not valid, the two-sided Fisher exact test. Comparisons between quantitative variables were performed using Student's t test or, when invalid, the nonparametric Mann-Whitney test. The correlations between the dose/duration of metformin intake, Vitamin B12 levels and the various clinical and biological parameters were studied using the bivariate correlation test, and multivariate analysis was performed by logistic regression.
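The cumulative-dose definition above can be written as a small helper (the function name is illustrative); with the group means reported in the Results it reproduces the stated cumulative doses up to rounding.

```python
def cumulative_dose_g(daily_dose_mg: float, duration_years: float) -> float:
    """Cumulative metformin dose as defined in the Methods:
    daily dose (g) x duration of treatment (months)."""
    return (daily_dose_mg / 1000.0) * (duration_years * 12.0)

# Deficient group in the Results: mean daily dose 2422 mg for 17.7 years
# gives ~514 g, close to the reported ~520 g (group means of a product
# need not equal the product of group means).
assert abs(cumulative_dose_g(2422, 17.7) - 514.4) < 0.5

# Normal-B12 group: 1610 mg for 4.4 years gives ~85 g (reported 93 g).
print(round(cumulative_dose_g(1610, 4.4), 1))
```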
RESULTS Our study population comprised 100 women and 100 men (sex ratio of 1). The mean age of the study population was 59.01 ± 9.40 years, with a median of 59.5 years and extremes between 40 and 81 years. The percentage of smoking patients was 30.8% and that of alcohol use was 6.1%. Table 1 shows the baseline characteristics of the metformin users compared to the non-metformin users. The mean level of vitamin B12 assayed in our population was 398.5 ± 188.3 pg/ml, with a significant difference between the two groups (356.9 vs 460.9 pg/ml; p < 0.01). Vitamin B12 deficiency was found in 12 patients (6%) of our population, of whom 11 were on metformin and only one was not. Metformin intake was associated with an increase in the prevalence of vitamin B12 deficiency (9.2% vs. 1.2%; p < 0.01) as well as of borderline levels (30% vs. 7.5%; p < 0.01). In the first group, the mean duration of metformin intake was 7.37 ± 5.91 years (range: 1-31 years), with an average daily dose of 1816 ± 681.4 mg/day. An inverse statistical correlation was found between the plasma concentration of vitamin B12 and the duration of metformin use (r = -0.606, p < 0.01) as well as the cumulative dose (r = -0.609, p < 0.01) (Figure 1). The vitamin B12 status of our patients on metformin was strongly associated with the duration and dose of treatment: comparing patients with a vitamin B12 deficiency to patients with a normal level, the duration of metformin treatment was significantly higher (17.7 years vs 4.4 years; p < 0.01), as were the daily dose of metformin (2422 mg vs 1610 mg; p < 0.01) and the cumulative dose (520 g vs 93 g; p < 0.01). The mean level of serum vitamin B12 was significantly lower in patients with clinical neuropathy (257.8 vs 383 pg/ml; p < 0.01) and diabetic painful neuropathy (252 vs 433.6 pg/ml; p < 0.01).
In our study we found an inverse correlation between the plasma concentration of vitamin B12 and the duration of metformin use (r = -0.606, p < 0.01) as well as the metformin dose (r = -0.609, p < 0.01). In multivariate analysis, after adjustment for age, sex, weight, and the duration and control of diabetes, we found a statistically significant correlation between the prevalence of vitamin B12 deficiency and the daily dose and duration of metformin use. Additionally, vitamin B12 deficiency was observed in our population from a minimum daily dose of 1700 mg/day and a minimum duration of 8 years of taking metformin, while borderline vitamin B12 levels were noted as early as 3 years after treatment. Our study showed that the level of vitamin B12 was correlated with various hematological parameters. This correlation was statistically positive with hemoglobin (r = 0.46; p < 0.01), leukocytes (r = 0.19; p = 0.03), and platelets (r = 0.2; p < 0.01), and statistically negative with the MCV (r = -0.59; p < 0.01). However, vitamin B12 deficiency was associated only with anemia (OR = 5; p = 0.04) and macrocytosis (OR = 15.4; p = 0.4). The first case of megaloblastic anemia induced by vitamin B12 deficiency secondary to metformin intake was reported in 1980; in that case, the duration of metformin use was 8 years (23). Since then, a few cases have been reported in the same direction (10,11,24). However, to date the number of studies on the association of vitamin B12 with hematologic parameters in T2D treated with metformin remains low, and more research is needed to substantiate this link. Our study has certain limitations: the relatively small number of patients included and the cross-sectional design of the study make it impossible to draw conclusions on cause and consequence.
CONCLUSIONS Despite the limitations of this study (relatively small sample, lack of follow-up), it highlights the importance of looking for vitamin B12 deficiency in type 2 diabetic patients treated with metformin. Measuring vitamin B12 in these patients is all the more relevant with increasing duration and dose of metformin, as well as in the presence of diabetic neuropathy, anemia, or macrocytosis. More large-scale studies are needed to clarify this link and to specify the methods of screening and vitamin B12 supplementation in diabetic patients treated with metformin. To our knowledge, our study is the first study in our country. The mechanisms responsible for the vitamin B12 deficiency induced by the intake of metformin are not yet well established and remain a subject of controversy. Indeed, several mechanisms are proposed in the literature (15,18,19): competitive inhibition of vitamin B12 absorption, alteration of the structural morphology of enterocytes, alteration of the function of the intrinsic factor, and alteration of the bacterial flora. However, another mechanism was recently evoked by certain studies: metformin can inhibit the calcium-dependent binding of the vitamin B12-intrinsic factor complex.
Figure 1. Correlation of vitamin B12 level with duration and cumulative dose of metformin in metformin users.
Table 1. Comparison of demographic, anthropometric, and clinical characteristics between non-metformin users and metformin users.
Table 2. Univariate associations of vitamin B12 status and the clinical and biological parameters in the metformin group.
Table 3. Multivariate analysis of vitamin B12 deficiency.
Table 4. Summary table of the various international studies.
Molecular eco-epidemiology of Paracoccidioides brasiliensis in road-killed mammals reveals Cerdocyon thous and Cuniculus paca as new hosts harboring this fungal pathogen Wild animals infected with Paracoccidioides brasiliensis represent important indicators of this fungal agent's presence in the environment. The detection of this pathogen in road-killed wild animals has shown to be a key strategy for eco-epidemiological surveillance of paracoccidioidomycosis (PCM), helping to map hot spots for human infection. Molecular detection of P. brasiliensis in wild animals from PCM outbreak areas has not been performed so far. The authors investigated the presence of P. brasiliensis through nested-PCR in tissue samples obtained from road-killed animals collected nearby a human PCM outbreak spot, Rio de Janeiro state, Brazil and border areas. Eighteen specimens of mammals were analyzed: Dasypus novemcinctus (nine-banded armadillo, n = 6), Cerdocyon thous (crab-eating fox, n = 4), Coendou spinosus (hairy dwarf porcupine, n = 2), Lontra longicaudis (Neotropical river otter, n = 1), Procyon cancrivorus (crab-eating raccoon, n = 1), Galictis cuja (lesser grison, n = 1), Tamandua tetradactyla (collared anteater, n = 1), Cuniculus paca (paca, n = 1), and Bradypus variegatus (brown-throated three-toed sloth, n = 1). Specific P. brasiliensis sequences were detected in the liver, spleen, and lymph node samples from 4/6 (66.7%) D. novemcinctus, reinforcing the importance of these animals on Paracoccidioides ecology. Moreover, lymph node samples from two C. thous, as well as lung samples from the C. paca, were also positive. A literature review of Paracoccidioides spp. in vertebrates in Brazil indicates C. thous and C. paca as new hosts for the fungal pathogen P. brasiliensis. Introduction Paracoccidioidomycosis (PCM) is a systemic fungal infection occurring in Latin America caused by dimorphic fungi of the genus Paracoccidioides whose major known hosts are humans and armadillos.
The infection occurs through the inhalation of fungal propagules dispersed in the air after activities involving soil disturbance. The human disease can manifest acutely, which is rare but usually more severe, or chronically, after a long latency period of years or even decades. Its epidemiology has been changing over the last three decades, mainly in relation to changes in the interaction between humans and the environment, e.g. migration, deforestation, expansion of agricultural frontiers, and climate change [1]. In 2010, a cluster of acute PCM cases was described in the hyperendemic area of Botucatu, São Paulo state, Brazil, which was associated with climate changes related to a high-intensity El Niño Southern Oscillation [2]. According to an extensive review of PCM epidemiology, no outbreaks of this fungal disease were described up to 2017 [1]. Recently, acute PCM outbreaks have been reported in Rio de Janeiro state, Brazil, and in Northeast Argentina, associated with the construction of a new road (Raphael de Almeida Magalhães highway, BR-493) and one of the biggest hydroelectric dams of South America, respectively [3,4]. Considering the traditional predominance of PCM chronic forms (around 90% of the cases) and the great mobility observed in Brazilian populations throughout the country, investigation of environmental sources of PCM infection has been challenging. Despite being a worrisome public health problem, the emergence of acute PCM cases seems to be an opportunity to better understand the process of infection and to identify risk areas, thus helping to promote prevention policies. The recent detection of specific DNA sequences of Paracoccidioides brasiliensis in shallow soil samples from the roadside of the acute PCM outbreak area in the state of Rio de Janeiro reinforced the association between the highway construction and the occurrence of these severe acute PCM cases [5]. Moreover, this study discusses that Paracoccidioides spp.
might be highly associated with nine-banded armadillo (Dasypus novemcinctus) burrows. These results supported local public health actions in targeted areas, focused on early clinical recognition and laboratory diagnosis of these rare and severe acute PCM clinical forms, intending to prevent complications, sequelae, and eventually deaths. Wild animals infected with Paracoccidioides spp., especially naturally infected D. novemcinctus, represent important indicators of the presence of these fungal agents in the environment, particularly considering the great difficulty of isolating the fungus from soil samples and identifying it [6]. Thus, the detection of these pathogens in road-killed wild animals has shown to be a key strategy for eco-epidemiological studies of PCM, helping to map risk areas for human infection [7]. As PCM outbreaks had never been identified up to 2017 [3], molecular detection of P. brasiliensis in wild animals belonging to these hot spot areas had not been performed so far. The original description of the outbreak in the metropolitan area of Rio de Janeiro involved eight patients diagnosed during a one-year period (December 2015-December 2016) [3]. Since then, until December 2020, 20 additional acute PCM cases from this area were diagnosed at the Evandro Chagas National Institute of Infectious Diseases (INI/Fiocruz), a reference center for clinical assistance and research on PCM in this state (unpublished data). Therefore, besides investigating soil samples, it is important to explore other environmental sources, such as animals harboring this fungal pathogen, which could help identify additional high-risk areas for PCM infection. This study aims to investigate the occurrence of P. brasiliensis in road-killed wild mammals nearby a PCM outbreak spot in Rio de Janeiro state, Brazil, and border areas. In addition, we intend to contribute to the knowledge of Paracoccidioides spp.
eco-epidemiology, identifying the distribution areas of the animal sources from which P. brasiliensis has been identified in this study. A literature review of Paracoccidioides spp. identification from these animals in the Brazilian territory is also provided. Study area and animals The road-killed animals were collected from January to December 2020 on the roadsides of the BR-040, RJ-116, and RJ-122, three routinely monitored highways. The first road connects Rio de Janeiro and Minas Gerais states, both traditional endemic areas for PCM in Brazil. All three highways cross some municipalities of Rio de Janeiro state where the outbreak of acute PCM has occurred. The Núcleo de Estudos de Vertebrados Silvestres performed the collection and transport of the dead animals, under authorization of the Brazilian Institute of the Environment and Renewable Natural Resources (IBAMA). These animals were placed into plastic bags identified by a code, with date, hour, and geographic position of collection, and sent to the Universidade Veiga de Almeida, where they were maintained in freezers (-20˚C) until necropsies were performed, usually a month after collection. For the purposes of this study, we only included recently killed (1-10 hours) animals that were not entirely disfigured. These animals were defrosted 24 hours before necropsies, and their organs (lungs, liver, spleen, and mesenteric lymph nodes) were collected and processed for DNA extraction. Maturity stage was based on dental aging, size, and weight of the animals [8][9][10][11][12][13][14][15]. Ethical statements. The Animal Ethics Committee (Comissão de Ética em Uso de Animais, CEUA/Fiocruz) granted a formal waiver of ethical approval. The Brazilian Institute of the Environment and Renewable Natural Resources (IBAMA) authorized the collection and transport of the biological materials (Abio number 514/2014). The carcasses of the animals used in this study are in accordance with the Operating License number 1187/2013.
Molecular detection and identification. The initial step of DNA extraction was performed by grinding the tissue samples, frozen in liquid nitrogen, using a mortar and pestle [16]. Further steps were carried out in accordance with the manufacturer's instructions using the QIAamp DNA Mini Kit (Qiagen, Germany). The DNA pellet was suspended in 200 μl of ultra-pure water and its quality verified through 1% agarose gel electrophoresis. Molecular analyses followed a nested polymerase chain reaction (nested-PCR) previously described [17], with minor modifications. Briefly, the PCR targeted the universal fungal rRNA region ITS1-5.8S-ITS2 (Internal Transcribed Spacer) using the primers ITS4 (5'-TCCTCCGCTTATTGATATGC-3') and ITS5 (5'-GGAAGTAAAAGTCGTAACAAGG-3') for the first amplification, and PbITSE (5'-GAGCTTTGACGTCTGAGACC-3') and PbITSR (5'-AAGGGTGTCGATCGAGAGAG-3'), annealing in the ITS-1 and ITS-2 regions, for the second, generating 634 and 387 base pair (bp) amplicons, respectively. The reactions consisted of 5 μl of DNA in a total volume of 50 μl with final concentrations of 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 2.5 mM MgCl2, 1.5 U of JumpStart Taq polymerase (Sigma-Aldrich, St Louis, MO, USA), 10 μM of each outer primer, and 100 μM of each deoxynucleoside triphosphate (Invitrogen, Thermo Fisher Scientific, Carlsbad, CA, USA). The first reaction conditions were an initial denaturation step at 95˚C for 5 min; 35 cycles at 95˚C for 30 s, 60˚C for 30 s, and 72˚C for 1 min; and a final extension step at 72˚C for 7 min. The conditions of the second reaction were similar, except for the annealing temperature of 62˚C. The PCR products were submitted to 1.5% agarose gel electrophoresis. DNA was stained with ethidium bromide (0.5 mg/L) and observed using a UV transilluminator. DNA extraction and nested-PCR experiments were performed in triplicate for each organ of all included animals.
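As a small sanity check on the primer sequences quoted above (an illustrative standard-library sketch, not part of the published protocol; the helper name is ours), primer lengths and GC content can be verified programmatically:

```python
# Primer sequences copied from the nested-PCR protocol described above.
PRIMERS = {
    "ITS4":   "TCCTCCGCTTATTGATATGC",
    "ITS5":   "GGAAGTAAAAGTCGTAACAAGG",
    "PbITSE": "GAGCTTTGACGTCTGAGACC",
    "PbITSR": "AAGGGTGTCGATCGAGAGAG",
}

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a primer sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

for name, seq in PRIMERS.items():
    print(f"{name}: {len(seq)} nt, GC = {gc_content(seq):.0%}")
```

Such checks are a quick way to catch transcription errors when primer sequences are copied between protocols and manuscripts.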
DNA bands around 387 bp were excised from the gel and purified with the Illustra™ GFX™ PCR DNA and Gel Band Purification Kit (GE Healthcare, Buckinghamshire, UK). The nucleotide sequences were determined via automatic capillary Sanger sequencing at the sequencing platform of the Fundação Oswaldo Cruz-PDTIS/Fiocruz, using the ABI 3730xl Applied Biosystems machine and the BigDye Terminator v3.1 cycle sequencing kit (Thermo Fisher Scientific, Waltham, MA, USA). Sequences from both DNA strands were generated and edited with the Sequencher software version 4.6 (Gene Codes Corporation, United States). Contiguous sequences were assembled and their consensus extracted. They were then aligned using the ClustalW algorithm [18] in the MEGA software version 6.0 and compared with sequences deposited in the NCBI database (http://www.ncbi.nlm.nih.gov/BLAST) through a BLAST search [19]. The sequences generated in this study were deposited in the same database. Cartographic analysis. Thematic maps including the geographic positions of the animals evaluated in this study and the distribution of records of the same animal species from which Paracoccidioides spp. was identified in the literature were created using the QGIS 3.14.15 software. Base maps were obtained from the Brazilian Institute of Geography and Statistics (IBGE). Results Eighteen animals were included in this study. Data obtained from these animals, including their identification, sex, maturity stage, species, and the cities and GPS coordinates of the collection sites, are detailed in Table 1. Specific P. brasiliensis sequences were detected in organs from four D. novemcinctus (CB1504, CB1506, CB1547, CB1584), two C. thous (CBRJ 116-59, CB1550), and the C. paca (CB1554) (sequence numbers MZ233470-MZ233476). The sequences generated in this study presented 99.68 to 100% identity with the ITS sequence of P. brasiliensis (MN519724.1). The positivity of the organs evaluated from each animal is also depicted in Table 1.
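The identity figures above (99.68 to 100% against the reference ITS sequence) come from BLAST; as an illustration only, percent identity over an ungapped alignment of equal-length sequences can be computed as below (the toy sequences are hypothetical, not the deposited MZ233470-MZ233476 data, and the function name is ours):

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity between two aligned, equal-length sequences
    (case-insensitive, position-by-position match count)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(x.upper() == y.upper() for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# One mismatch in four positions -> 75% identity.
print(percent_identity("ACGT", "ACGA"))
```

Real BLAST percent identity is computed over the aligned (possibly gapped) region, so this gap-free sketch only conveys the basic idea.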
A thematic map illustrating the sites of animal collection in this study, together with the nested-PCR results, is shown in Fig 1. The literature search on Paracoccidioides spp. in vertebrates in Brazil up to July 2021 revealed 20 papers demonstrating visceral detection of these fungal pathogens and 21 papers reporting serological reactiveness or delayed hypersensitivity against Paracoccidioides spp. (S1 Table). Among the manuscripts reporting fungal detection, the order Xenarthra prevailed, followed by the orders Carnivora and Rodentia. Regarding the order Xenarthra, D. novemcinctus was the species in which Paracoccidioides spp. was most often detected, counting 48 positive nine-banded armadillos. Among the species of the order Carnivora, two presented Paracoccidioides spp. detection: four Canis lupus familiaris and two Procyon cancrivorus, while none of the fourteen C. thous previously investigated had detectable levels of Paracoccidioides DNA. However, serological reactiveness against Paracoccidioides spp. was reported in two crab-eating foxes (C. thous). Concerning the order Rodentia, Paracoccidioides spp. was detected in three members of the family Cricetidae and two of the family Caviidae. No papers reporting attempts at Paracoccidioides spp. isolation or detection, nor serological investigation, in C. paca were retrieved using our search strategy. Fig 2 depicts the Brazilian states reporting the detection of Paracoccidioides spp. from the three species found positive in this study. Discussion The identification of D. novemcinctus naturally infected with P. brasiliensis marked a turning point in the ecological study of this fungal pathogen, contributing to a better comprehension of its environmental and geographical distribution [20]. Although Paracoccidioides spp.
ecological niche has not been precisely identified yet, it has been associated with armadillos and their burrows [21], which is also supported by the high frequency of this fungal infection in these animals [22]. Paracoccidioides spp. belong to the order Onygenales, which originated 150 million years ago, while armadillos belong to the order Xenarthra, which arose in South America during the Paleocene, between 65 and 80 million years ago. This long coexistence over millions of years, notably the intense contact of armadillos with soil and their digging behavior, may explain the intimate ecological relationship between them, as well as the fact that armadillos are naturally infected, eventually presenting disease [23,24]. Among the seven animals presenting specific P. brasiliensis ITS sequences in the present study, four (57%) were nine-banded armadillos, reinforcing the relevance of these animals in Paracoccidioides spp. ecology. In addition, PCM infection has also been reported in other wild and domestic animals. For this purpose, many studies have been conducted, mostly in areas of high PCM endemicity (South and Southeast Brazil), employing different methodologies such as intradermal and serological surveys, histologic and molecular analyses, and, less frequently, isolation in culture [20]. Among wild animals, a higher rate of reactiveness to intradermal tests with paracoccidioidin was observed in terrestrial (83%) than in arboreal animals (22%) [25]. This is expected considering that the soil is the potential habitat of Paracoccidioides spp. and, consequently, the natural source of these fungal pathogens in higher aerial concentrations. In the present study, we observed P. brasiliensis DNA detection exclusively in terrestrial wild animals (4 armadillos, 2 crab-eating foxes, and 1 paca). However, among all animals analyzed in this work, only one had arboreal habits (Bradypus variegatus), probably because terrestrial mammals are more exposed to the risk of a road accident.
Concerning the animals' scientific orders, our results are in accordance with the literature data, showing a higher rate of positivity for identification of Paracoccidioides spp. in Xenarthra members, precisely nine-banded armadillos, from which P. brasiliensis has been detected in several organs [7,22,26]. On the other hand, there are few cases of P. brasiliensis identification in Carnivora members other than C. thous (S1 Table) and no reports of this fungal identification in herbivorous rodent pacas. Armadillos present lower body temperatures, ranging from 30-35˚C, which may facilitate fungal survival, whereas crab-eating foxes and pacas present higher temperature levels, between 37-39˚C [7,27]. Surprisingly, we had positive results in lymph node samples from two crab-eating foxes and lung positivity in one paca. This may reflect hot spot PCM areas, where these animals were possibly more exposed to this fungal pathogen. Paracoccidioides brasiliensis detection in the two species herein described (C. thous and C. paca) may be underestimated. Remarkably, habitat use may drive key ecological opportunities for infection in unrelated hosts, especially those that have co-existed for long periods with the pathogen. Canids, including the lineage leading to C. thous, arrived in South America during the Great American Biotic Interchange in the late Pliocene [28], while C. paca is a member of the caviomorph lineage of rodents that arrived in South America in the Eocene [29]; both species are widely distributed in the Brazilian territory [30]. Cerdocyon thous and C. paca have terrestrial habitats and behaviors that may lead to a greater exposure to environmental sources of Paracoccidioides spp. For instance, although both present digging behaviors, being capable of tunneling, they usually spend time in dens dug by other animals, notably armadillo burrows, in order to feed, rest, and protect themselves. Therefore, Paracoccidioides spp.
infection has perhaps been underrated in these hosts. A previous study demonstrated serological reactiveness against Paracoccidioides spp. in two crab-eating foxes (C. thous), which reveals that this species had already been exposed to this fungal pathogen [31]. Serological tools cannot confirm the presence of the pathogen in the host, only a previous contact. On the other hand, molecular methods can detect the pathogen in biological samples [32]. Up to now, there was no evidence in the literature that this host could harbor this fungus. We propose a few hypotheses to explain this matter. First, as previously mentioned, Paracoccidioides spp. infection is indeed less expected in warmer-blooded mammals. Second, the well-known relationship between Paracoccidioides spp. and armadillos may justify a tendency to preferentially include these animals in PCM eco-epidemiology studies. Third, studies investigating Paracoccidioides spp. in road-killed wild animals are limited in the number and variety of animals included, due to the anatomical conditions required to prevent environmental cross-contamination. In this study, the authors initially intended to investigate only nine-banded armadillos. However, our investigation was expanded to include wild animals that forage on the ground as well as those with burrowing and aquatic habits, considering the preferential occurrence of Paracoccidioides spp. in areas of higher soil moisture [7]. Concerning the geographical origins of the positive animals in this study, most of them inhabited traditional Brazilian PCM endemic areas in the states of Rio de Janeiro and Minas Gerais [33,34]. Although the results herein presented do not establish a direct association between the geographic areas of positive animals and human patients presenting acute PCM related to the previously reported outbreak, we highlight the high level of endemicity of the studied area.
In addition, the studied and outbreak areas share the same climatic conditions and biomes. It is expected that environmental disruptions in these locations may have a significantly higher potential to provoke PCM infection and eventually outbreaks. The high aerial dispersion of fungal infective propagules can possibly reach distant areas, exposing populations of neighboring human settlements or even neighboring cities to the risk of infection and disease. The closer individuals live or work to disturbed areas, the higher the inhaled fungal burden, enhancing the risk of infection. Some limitations of this study are worth mentioning: the major representativeness of animals from certain municipalities may be related to a higher frequency of monitored highways in these places, as well as an elevated incidence of accidents involving wildlife at some points of these roads. In addition, the limited number of animals included in the study is justified by the anatomical conditions required to perform the molecular evaluation while avoiding cross-contamination from environmental sources. Despite all precautions, it is worth mentioning that negative nested-PCR results might be due to long transportation periods, decomposition, or inadequate freezing conditions, which degrade fungal cells and DNA. Moreover, it is noteworthy that although sequencing of the ITS region can identify both recognized species of the genus Paracoccidioides, P. brasiliensis and P. lutzii, the authors herein investigated only P. brasiliensis due to its high endemicity in the study area, whereas P. lutzii mostly occurs in the Midwest region and around the Amazon region of Brazil [34]. Lastly, it is possible that papers describing Paracoccidioides detection in D. novemcinctus, C. thous, or C. paca exist in other databases not included in the literature search herein described, or that similar findings of other groups were not published.
Even so, this work contributes to improving the knowledge of Paracoccidioides spp. ecology, revealing two new potential animal hosts, and also warns about the threat of anthropogenic actions on nature without an environmental protection plan. Supporting information: S1 Table.
Status of Linguatula serrata infection in livestock: A systematic review with meta-analysis in Iran Objectives The present systematic review attempted to determine the prevalence of Linguatula serrata (L. serrata) infection among Iranian livestock. L. serrata, known as the tongue worm, belongs to the phylum Pentastomida and lives in the upper respiratory system and nasal airways of carnivores. Herbivores and other ruminants are intermediate hosts. Methods MEDLINE, Embase, Web of Science, Google Scholar, and the Cochrane Library were searched from Nov 1996 to 22 Apr 2019 using search terms including "Linguatula serrata", "linguatulosis", "pentastomida", "bovine", "cattle", "cow", "buffalo", "sheep", "ovine", "goat", "camel", "Iran", and "prevalence", alone or in combination. The search was also conducted in the Persian databases Magiran, Irandoc, Barakatkns (Iranmedex), and the Scientific Information Database (SID) with the same keywords. After reviewing the full texts of 133 published studies, 50 studies met the eligibility criteria for inclusion in our review. Results By random-effects model analysis, the pooled prevalence of linguatulosis was 25% (95%CI: 18.0–33.0, I² = 98.67%, P < 0.001) in goats; 15.0% (95%CI: 10.0–20.0, I² = 97.95%, P < 0.001) in sheep; 12.0% (95%CI: 7.0–18.0, I² = 98.05%, P < 0.001) in cattle; 7% (95%CI: 2.0–16.0, I² = 97.52%) in buffalos; and 11.0% (95%CI: 6.0–16.0%, I² = 96.26%, P < 0.001) in camels. The overall prevalence in livestock was estimated to be 25%. The highest infection rate was recorded in West Azerbaijan Province (68%) and the lowest in Khuzestan Province (0.23%) (P < 0.05). Conclusions We concluded that the high prevalence of L. serrata infection in livestock (mainly ovine linguatulosis) shows the endemic status of linguatulosis in several parts of Iran and poses a risk for inhabitants. Control strategies to reduce the parasite burden among these animals are needed. Introduction Linguatula serrata (L.
serrata) is one of the cosmopolitan zoonotic food-borne parasites, belonging to the class Pentastomida. The shape of this parasite resembles a tongue, which is the reason it is called the "tongue worm". The lifecycle of this parasite includes four stages: eggs, larvae, nymphs, and adults. The adults live in the upper respiratory system, nasal airways, and frontal sinuses of carnivores, especially dogs, as final hosts. Eggs discharged with the nasopharyngeal secretions of the definitive host can be swallowed by herbivores (as intermediate hosts) such as cattle, buffalo, sheep, goats, etc. Then, the larvae hatch from the eggs and migrate mainly to the mesenteric lymph nodes (MLNs) and other visceral organs (such as the liver, lungs, spleen, heart, etc.). The parasite can be transferred to the final host through consumption of meat or viscera of an infected intermediate host (Soulsby, 1982; Oryan et al., 2008; Akhondzadeh Basti and Hajimohammadi, 2011; Hajipour and Tavassoli, 2019). Parasites entering the intermediate host cause pathological lesions and signs, which depend on the infected organ (Tavassoli et al., 2007a; Tavassoli et al., 2017; Shakerian et al., 2008; Dehkordi et al., 2014). Infection with this parasite causes symptoms in intermediate hosts including emaciation, pale mucosal membranes, ascites, serous accumulation in the abdominal cavity, peritoneal inflammation, and intestinal adhesion. Important symptoms caused by the disease in sheep include hyperplasia of pulmonary lymphatic tissue and pneumonia (Oryan et al., 2008; NourollahiFard et al., 2011). Humans can act as both intermediate and accidental final hosts for L. serrata, meaning that both larval and adult stages can infect humans (Koehsler et al., 2011). In humans, as in other intermediate hosts, parasites mainly live in MLNs, but other organs such as the liver, intestine, and, rarely, the brain, eye, and prostate gland may also be affected (Islam et al., 2018).
In some cases, migratory nymphs have been recovered from the anterior chamber of the eye, and other ocular involvements such as iritis and secondary glaucoma have been reported (Ryan and Durand, 2011). Human infection occurs via accidental ingestion of eggs passed from an infected dog or through consumption of raw or undercooked viscera of infected sheep, goats, and cattle. The most common form of human linguatulosis, known as Halzoun syndrome (Marrara syndrome), is transmitted by ingestion of L. serrata nymphs found in the organs of intermediate hosts; the resulting nasopharyngeal linguatulosis presents with pharyngitis, salivation, dysphagia, and cough, which together constitute a type I hypersensitivity reaction. In the case of visceral linguatulosis, the disease remains asymptomatic (Hajipour and Tavassoli, 2019; Shakerian et al., 2008; Meshgi and Asgarian, 2003). Detection of parasite nymphs in the intermediate host is performed by biopsy, exploratory laparotomy, and postmortem examination with subsequent histopathology (Hendrix, 1998). In asymptomatic cases, there is no need for treatment, as the parasite degenerates after two years; in symptomatic cases with a high parasite burden, surgical procedures can be useful (Hajipour and Tavassoli, 2019). Visceral linguatulosis in areas endemic for L. serrata, such as the Middle East region, appears to be more common than the diagnosed cases suggest (Oluwasina et al., 2014; Ravindran et al., 2008). In a study carried out in India, the prevalence of linguatulosis among examined animals was estimated to be about 18% (Sudan et al., 2014). Likewise, researchers in Bangladesh reported a prevalence of 19% in cattle (Ravindran et al., 2008), revealing a similar prevalence in the two neighboring countries. Other researchers in Bangladesh reported that 50.7% of cattle and 31.0% of goats were infected and declared that the human population of the country is at high risk of linguatulosis (Islam et al., 2018).
The results of a study conducted in Egypt in 2017 revealed that the total prevalence of linguatulosis in herbivorous animals was 22.8%, with the highest infection in goats (30%) and the lowest in donkeys (8%) (Attia et al., 2017). Human cases have been detected in Asian countries including Turkey, Malaysia, China, India, and Bangladesh (Hajipour and Tavassoli, 2019). In Malaysia, a prevalence of 45.4% in adults has been reported (Prathap and Prathap, 1969). Countries in the Middle East and Africa, including Egypt, Tunisia, and Sudan, have also reported human infection cases (Hajipour and Tavassoli, 2019). Although some human cases have been reported from Iran, there is no clear estimate of the prevalence of the infection in the Iranian population. Numerous studies have been carried out on linguatulosis among ruminants in Iran. Nonetheless, there is no exact estimate of the burden of this parasite in animals, which is critical for evaluating the economic burden and establishing control strategies. Given the numerous impacts of linguatulosis on animal welfare, the economy, and public health, further research on its epidemiological features and on monitoring programs in Iran is needed. Considering the widespread distribution of linguatulosis in Asian countries such as India and Bangladesh and the trade between countries, the importance of the infection is even more remarkable nowadays. To the best of our knowledge, there is no documented review of the exact prevalence of linguatulosis in livestock in Iran. Therefore, the current study attempts to fill this gap.

A. Bibliography

We performed the bibliographic search according to the following criteria. Articles: complete articles, congress summaries, and unpublished data were considered. Type of studies: all original descriptive (cross-sectional) studies on animal linguatulosis were considered.
Epidemiological parameters of interest: the prevalence of L. serrata infection in animals.

B. Data collection

We searched all the mentioned databases as well as unpublished data. The collected bibliographic references were screened carefully in order to eliminate duplicates, case reports, case series, studies on carnivores, studies conducted outside Iran, and human-based studies. Finally, papers reporting the epidemiological parameters of interest were selected, and 50 articles met the inclusion criteria. Articles reporting the prevalence of linguatulosis in herbivores were included in the study (Table 1). The following data were extracted from the literature: first author, year of publication, animal's sex, prevalence rate, geographical region of study, sample size (the number of examined animals), and the year in which the study was carried out (Tables 1, 2). The references of the retrieved publications were also surveyed to extend the search and to avoid missing valuable data. Eligible data were recorded in a selection sheet (Appendix).

C. Quality of studies

The quality of the included studies was evaluated with the STROBE checklist, which comprises 22 items covering an article's title, abstract, introduction, methods, results, and discussion sections. A score under 7.75 was considered poor quality, 7.76-15.5 low, 15.6-23.5 moderate, and above 23.6 high (Von Elm et al., 2007).

D. Statistical analysis

In this meta-analysis, the number of examined animals and the number of positive cases were extracted from each study, and the standard error (SE) of each prevalence estimate was calculated as SE_p = √(p(1 − p)/n), where n and p are the sample size and the prevalence of the study, respectively.
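The per-study standard error above can be computed directly. The following Python sketch (with illustrative numbers, not data from the review) also adds the normal-approximation confidence interval that is typically paired with it:

```python
import math

def prevalence_se(positives, n):
    """Prevalence p and its standard error SE = sqrt(p * (1 - p) / n)."""
    p = positives / n
    return p, math.sqrt(p * (1 - p) / n)

def wald_ci(p, se, z=1.96):
    """95% normal-approximation confidence interval, clipped to [0, 1]."""
    return max(0.0, p - z * se), min(1.0, p + z * se)
```

For example, a hypothetical study reporting 25 infected animals out of 100 examined gives p = 0.25 with SE ≈ 0.043; the clipping keeps the interval inside [0, 1] for prevalences near the boundaries.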
Cochran's heterogeneity statistic (p < 0.1) and the I-squared index (25%: low; 50%: medium; 75%: high) were used to evaluate heterogeneity across effect sizes (ESs). The prevalence for each study and the pooled estimate of prevalence were presented in a forest plot, in which results are reported as ESs with 95% confidence intervals (CIs). When heterogeneity was present, we used a random-effects model (DerSimonian-Laird method); otherwise, we applied a fixed-effects model (Mantel-Haenszel method) to estimate the pooled effect size. Subgroup analysis was used to evaluate sources of heterogeneity among studies. Potential publication bias was explored using Egger's test (p < 0.1 considered significant). The meta-analysis was performed with the trial version of Stata MP Version 14.

Results

Among all databases searched from 1996 to 2019 (~24 years), a total of 50 articles were appropriate for inclusion in this systematic review and meta-analysis. All the articles were cross-sectional studies designed to evaluate the prevalence of L. serrata in herbivores, including sheep, goats, cattle, buffaloes, and camels, in Iran. In total, 11,807 sheep, 14,084 goats, 8037 cattle, 2188 buffaloes, and 3791 camels were examined (Table 1). The forest plot diagrams of this review are shown in Figs. 2-6. The highest infection rate was in goats (25%), followed by sheep (15%), cattle (12%), and camels (11%), and the lowest infection rate was in buffaloes (7%). Most of the studies about goats … (Table 2). The subgroup analysis showed that the infection rate in male animals was significantly higher than in females (p = 0.00), except for sheep (Table 3). Moreover, the highest prevalence of 56% was seen in the mediastinal lymph nodes of goats, the maximum prevalence in the mesenteric lymph nodes (MLNs) was 23% in sheep, and the lowest prevalence was about 0.01% in the liver of cattle (Table 3).
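As a concrete illustration of the pooling and bias checks described above, here is a minimal pure-Python sketch of DerSimonian-Laird random-effects pooling (with Cochran's Q, tau², and Higgins' I²) and the Egger regression intercept. Function names and inputs are illustrative; a real analysis would use Stata or a dedicated meta-analysis package, as the authors did.

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Takes per-study effect sizes (e.g. prevalences) and their
    within-study variances; returns (pooled, se, tau2, i2), where
    tau2 is the between-study variance and i2 is Higgins' I^2 in %.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, se, tau2, i2

def egger_intercept(effects, ses):
    """Egger regression asymmetry test (sketch).

    Regresses the standardized effect z_i = y_i / se_i on the
    precision x_i = 1 / se_i; an intercept far from zero suggests
    small-study (publication) bias.  Returns (intercept, se_intercept,
    t_stat); compare t_stat against a t distribution with
    len(effects) - 2 degrees of freedom for the p-value.
    """
    k = len(effects)
    z = [y / s for y, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    xbar, zbar = sum(x) / k, sum(z) / k
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxz = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
    slope = sxz / sxx
    intercept = zbar - slope * xbar
    sse = sum((zi - intercept - slope * xi) ** 2 for xi, zi in zip(x, z))
    s2 = sse / (k - 2)
    se_int = math.sqrt(s2 * (1.0 / k + xbar ** 2 / sxx))
    t_stat = intercept / se_int if se_int > 0 else float("inf")
    return intercept, se_int, t_stat
```

When all studies report the same effect, Q collapses to zero, tau² and I² vanish, and the random-effects estimate coincides with the fixed-effects one, which is the behavior described in the methods above.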
There was publication bias according to Egger's test, which revealed significant bias in the studies related to buffaloes (p = 0.009) (Table 4). This result might be due to the small number of publications about buffaloes (8 studies).

Discussion

Several studies have been carried out to determine the prevalence of L. serrata among herbivores in Iran, but no well-documented overall estimate exists. As the parasite involves ruminants, the rate of infection is high in regions with animal farming activities. The highest infection rate was reported in goats; among all provinces, Mazandaran, with a 69.15% infection rate, had the highest rate in Iran. This may be related to the climatic conditions and humidity, the different forage habitats of goats, or greater exposure to dogs (Hajipour and Tavassoli, 2019). Overall, Tabriz in East Azerbaijan (68%) and Urmia in West Azerbaijan (60%) had the highest prevalence rates of infection. The high prevalence in these regions may be related to climatic parameters and high humidity, which create optimal conditions for the survival of parasite eggs in the environment. Infection with L. serrata also seems to be higher in the mediastinal and mesenteric lymph nodes because the mesenteric lymph nodes lie in the path of the portal circulation earlier than other organs. NourollahiFard et al. (2010a) examined the mesenteric and mediastinal lymph nodes of 450 cattle of different sexes and age groups. They found that 16.22% of mesenteric lymph nodes were infected with this parasite, and the infection rate increased with age, with the highest prevalence observed in animals aged above four years. The prevalence of L. serrata nymphs also differed significantly among seasons (p < 0.05), being higher in autumn, which may be due to humidity or climatic variations (NourollahiFard et al., 2010a). Rezaei et al.
(2011) found a significant correlation between prevalence rate and both age and sex in all animals (P ≤ 0.05), with the highest prevalence rate in goats (P ≤ 0.05) (Rezaei et al., 2011). In a study carried out in Urmia in 2006 by Tajik … (Mirzaei et al., 2012; Alborzi et al., 2013). Only 8 studies are available on buffalo linguatulosis in Iran; the highest prevalence, 26.6% in MLNs, was reported by Rezaei et al. (2011) in Urmia, and a rate of 18.75% was reported by Tajik et al. (Tajik et al., 2008). Sheep were the most studied herbivores in Iran, and the highest rate of infection with L. serrata (52.5%) was reported in 2007 by Tavassoli et al. in Urmia (Tavassoli et al., 2007b), followed by 42.69% reported by Rezaei et al. in 2011 in the same region (Rezaei et al., 2011). The high prevalence of linguatulosis in dogs and domestic ruminants indicates a high risk of this infection as an endemic disease in the northwestern region of Iran. In addition, a serological study carried out by Yektaseresht et al. in 2017 in Fars Province revealed a seropositivity of 46.66% in sheep (Yektaseresht et al., 2017). Since this province is one of the most important foci of animal husbandry, preventive measures and control of L. serrata infection should be seriously considered there. Prevalence studies of L. serrata in domestic animals from different regions have shown that the infection has a global distribution, with reported prevalences of 43% in Beirut (Khalil and Schacher, 1965), 72% in certain areas of Britain (Sinclair, 1954), 50.7% in Bangladeshi cattle (Islam et al., 2018), 13.8% in Talca, Chile (Parraguez et al., 2017), 14.47% in Iraqi cattle (Al-Sadi and Ridha, 1994), and 25% in Egyptian camels (Khalil, 1973). These data show the wide range of infection among animals in the world.
In a study carried out in 2017 in Australia, Shamsi et al. examined a number of definitive hosts (red foxes, feral cats, and wild dogs) and intermediate hosts (cattle, sheep, feral pigs, rabbits, goats, and a European hare) from the hilltops of south-eastern Australia for L. serrata. Their results showed that 14.5% of red foxes (n = 55), 67.6% of wild dogs (n = 37), and 4.3% of cattle (n = 164) were infected. They concluded that the common occurrence of the parasite in wild dogs, and less frequently in foxes, suggests that these wild canids can act as a potential reservoir for infection of livestock, wildlife, domestic dogs, and possibly humans. The high rate of linguatulosis in wild dogs and foxes in south-eastern Australia suggests that this parasite is more common than was previously estimated. Among all potential intermediate hosts in the area, only 4.3% of cattle were infected with the parasite's nymphs, which suggests that the search for the host(s) acting as the main intermediate host in the region should continue (Shamsi et al., 2017). There is a correlation between animal husbandry and canine linguatulosis: eating raw offal, especially the liver of farm animals, is the main source of canine infection. In the cited study, stray dogs were more infected than owned dogs, which can be explained by the better veterinary care and feeding of the latter group (Oluwasina et al., 2014). Reports from Asian countries, and especially from the Middle East region and Iran, confirm that linguatulosis is of veterinary and public health importance. In addition, in the Middle East, Halzoun syndrome occurs after the consumption of uncooked sheep or goat viscera during some religious feasts. Also, a new intermediate host, the crested porcupine (Hystrix indica), which consumes meat and viscera, has been reported from southwest Iran (Rajabloo et al., 2014).
Several human nasopharyngeal involvement cases have been reported from Iran following the consumption of barbecued liver (Tabibian et al., 2012; Maleky, 2001; Siavashi et al., 2002; Mohammadi et al., 2008). It is believed that unhealthy beliefs, such as the notion that eating raw liver is nutritionally more beneficial, play an important role in human linguatulosis in Iran. In this regard, Montazeri et al. reported two cases of linguatulosis in the nose and mouth of a 28-year-old woman and her 11-year-old daughter, who had a history of eating raw sheep gut and complained of coughing, headache, and oral and nasal discharge (Montazeri et al., 1997). In addition, Sadjjadi et al. reported a case of pharyngeal linguatulosis in a 35-year-old woman in Shiraz (Sadjjadi et al., 1998). In a study carried out in Kerman, Yazdani et al. reported a woman with a history of eating raw liver who complained of upper respiratory symptoms (Yazdani et al., 2014). Maleky et al. in Tehran reported a 25-year-old woman with throat pentastomiasis (Maleky, 2001). Also, two cases of Halzoun syndrome were reported in 2012 from Isfahan by Tabibian et al., involving an Afghan mother and daughter (aged 34 and 23) with a history of eating raw goat liver (Hamid et al., 2012). Siavoshi et al. reported three cases of Halzoun syndrome, a man and two women, with a history of consuming raw liver (Siavoshi et al., 2002). The latest report of pharyngeal linguatulosis was released by Jahanbakhsh et al. in Kermanshah and concerned a 34-year-old man with a history of consuming raw goat liver (Janbakhsh et al., 2015). In Turkey, human infestation with L. serrata has also been reported (Yilmaz …). A case from Sabah, East Malaysia, was reported in 2011 with a one-month history of upper abdominal discomfort, weight loss, anorexia, jaundice, and dark urine.
After a Whipple procedure and histopathological examination, the parasite was diagnosed as a nymph stage of Armillifer moniliformis (Latif et al., 2011). Overall, these results show that special attention should be paid to public health and animal care in order to prevent the infection in Asian and African countries. In conclusion, the high prevalence of L. serrata infection in Iranian livestock (mainly ovine linguatulosis) shows the endemic status of linguatulosis in Iran and poses a risk for the inhabitants. In developing countries, the main route of infection among individuals with low income is the consumption of offal, especially raw offal such as tongue, brain, liver, kidney, intestine, and heart. Therefore, careful inspection of visceral organs, and particularly of lymph nodes, is needed in slaughterhouses to prevent human linguatulosis. Accordingly, people should be made aware of the risks of eating raw or undercooked liver and other internal organs of herbivores. Meanwhile, physicians should also be aware of the illness and consider L. serrata infestation in patients complaining of upper respiratory tract symptoms in endemic areas. Altogether, our data provide valuable information regarding the epidemiology of linguatulosis in domestic ruminants in Iran, which should be useful for management and control programs for this disease. Accordingly, feeding dogs with the offal of infected animals should be prevented in order to control the infection in ruminants.
Simplex cerebral cavernous malformations with MAP3K3 mutation have distinct clinical characteristics

Objectives: To investigate the clinical characteristics of cerebral cavernous malformations (CCMs) with MAP3K3 somatic mutation.

Methods: We performed a retrospective review of our CCMs database between May 2017 and December 2019. Patients with simplex CCMs identified to harbor a MAP3K3 or CCM gene somatic mutation were included, and their clinical characteristics were recorded. Univariate and multivariate logistic analyses were used to assess the risk factors associated with hemorrhage events of CCMs. To explore the underlying mechanism, we transfected MEKK3-I441M-overexpressing and CCM2-knockdown lentiviruses into human umbilical vein endothelial cells (HUVECs) and investigated thrombomodulin (TM) and tight junction (TJ) protein expression by western blotting and immunofluorescence. Finally, immunohistochemistry was used to validate TM and TJ protein expression in surgical samples.

Results: Fifty simplex CCMs patients were included, comprising 38 with MAP3K3 mutations and 12 with CCM gene mutations. Nine (23.7%) patients with MAP3K3 mutations and 11 (91.7%) patients with CCM gene mutations exhibited overt hemorrhage, respectively. Multivariate logistic analyses revealed that MAP3K3 mutation was associated with a lower risk of hemorrhage events. In in vitro experiments, ZO-1 expression was not reduced in MEKK3-I441M-overexpressing HUVECs compared with wild type, whereas it was significantly decreased in CCM2-knockdown HUVECs compared with control. In MEKK3-I441M-overexpressing HUVECs, TM expression was increased, and the NF-κB pathway was significantly activated; after treatment with an NF-κB signaling inhibitor, TM expression was further upregulated. Meanwhile, TM expression was increased, but the NF-κB pathway was not activated, in CCM2-knockdown HUVECs.
Accordingly, immunohistochemistry showed that ZO-1 expression in the MAP3K3-mutant samples was significantly higher than that in the CCM-mutant samples, while TM expression in the MAP3K3-mutant lesions was significantly lower than that in the CCM-mutant samples.

Conclusion: Simplex CCMs with MAP3K3 mutation occasionally present with overt hemorrhage, which is associated with the biological function of the MAP3K3 mutation.

Introduction

Cerebral cavernous malformations (CCMs) of the central nervous system are vascular anomalies affecting 0.16 to 0.5% of the general population (1, 2). These lesions show cerebral venous capillary dysplasia with endothelial clusters filled with blood and susceptible to hemorrhage (3, 4). Several studies have reported that the hemorrhage rates of CCMs vary from 1.6 to 4.5% per patient-year (3, 5, 6). Because of lesion bleeding, CCM lesions frequently lead to epileptic seizures, headaches, focal neurological deficits, or life-threatening strokes (7, 8). In this study, we retrospectively analyzed the clinical data of 50 patients with simplex CCMs identified to harbor a CCM gene or MAP3K3 somatic mutation. We also investigated the expression of thrombomodulin (TM) and tight junction (TJ) proteins in HUVECs in vitro and in surgical samples with MAP3K3 and CCM gene mutations. Our study indicates that, compared with CCM gene mutations, MAP3K3-mutant simplex CCMs have different clinical characteristics.

Study design and patients

We performed a retrospective review of our CCMs database between May 2017 and December 2019. This study was performed according to an institutional review board-approved protocol in compliance with local and institutional regulations for the study of human subjects. Written informed consent was obtained from all participating patients (or guardians of patients). Patients with simplex CCMs identified to harbor a MAP3K3 or CCM gene mutation were included consecutively.
Patients with poor-quality presurgical MRI or with a history of gamma knife radiosurgery were excluded.

Data collection

The demographic and clinical information of patients with simplex CCMs, including age, sex, main complaint, and lesion location, size, Zabramski type, concurrence with developmental venous anomaly, and overt hemorrhage, was recorded and analyzed. Based on the 1994 Zabramski classification, all CCM lesions were classified as Type I-IV (17). According to previous studies, a hemorrhage event was defined as a symptomatic event with radiographic evidence of overt intracerebral hemorrhage (5, 11).

Western blotting

Proteins from cultured cells were extracted using radioimmunoprecipitation assay lysis buffer and quantified using a bicinchoninic acid protein assay kit. Equal amounts of protein were then separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to a polyvinylidene fluoride membrane. After blocking, the membranes were incubated with primary antibodies, including anti-CCM2.

Immunohistochemistry

Histological sections were obtained from our sample bank as described in our previous study (13, 18). Sections were incubated with primary antibodies overnight at 4 °C and then with a biotinylated secondary antibody at room temperature for 1 h, followed by horseradish peroxidase-labeled streptavidin for 30 min. After the sections were washed with Tris buffer, they were stained with 3,3′-diaminobenzidine, and the nuclei were counterstained with hematoxylin. Images were acquired using a Zeiss Axio Scope A1 microscope. Two authors (R. H. and J. W.), blinded to the mutation status, analyzed the immunohistochemical staining for ZO-1, TM, Claudin-5, and VE-cadherin in the recruited patients. Three randomly selected fields in nonadjacent tissue sections per tissue specimen were analyzed as described previously (18, 19). A positive reaction was indicated by a brown color using 3,3′-diaminobenzidine.
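As an aside on the univariate risk-factor step described in the statistical analysis that follows, the study's headline 2×2 comparison (overt hemorrhage in 9 of 38 MAP3K3-mutant vs 11 of 12 CCM-gene-mutant patients, per the abstract) can be approximated by an odds ratio with a Woolf confidence interval, which is equivalent to a univariate logistic regression on a binary exposure. The sketch below is an illustrative approximation, not the authors' SPSS multivariate model.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-normal) confidence interval.

    2x2 table:  exposed    a (event)  b (no event)
                unexposed  c (event)  d (no event)
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

With the reported counts (a = 9, b = 29 for MAP3K3; c = 11, d = 1 for CCM gene mutations), the odds ratio is well below 1 and its interval excludes 1, consistent with MAP3K3 mutation being associated with lower odds of overt hemorrhage.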
ImageJ (NIH Image, Bethesda, MD) was used to calculate the integrated optical density.

Statistical analysis

Continuous variables are presented as medians and interquartile ranges (IQR) or as means ± SD, and categorical variables are expressed as percentages. The Wilcoxon rank-sum test, t-test, chi-squared (χ²) test, and Fisher's exact test were used as appropriate. Univariate and multivariate logistic regression analyses were used to assess the risk factors for hemorrhage events of CCMs; variables with p < 0.20 in the univariate analysis were entered into the multivariate analysis. Analyses were performed using SPSS 24.0 (IBM Corp, Armonk, NY, USA) and PRISM (GraphPad, version 8.0). A 2-tailed p < 0.05 was considered statistically significant.

Results

Representative lesions are shown in Figure 1A (MAP3K3 mutation lesion) and Figure 1B (CCM gene mutation lesion), respectively. These findings implied that MAP3K3 mutations were associated with a lower risk of hemorrhage events, in contrast to CCM gene mutations. Additionally, we explored the differences between MAP3K3 and CCM gene mutations in clinicopathological features (hemorrhagic episodes, lesion size, and symptoms) between brainstem and supratentorial CCM lesions. In our study, 9 lesions with CCM gene mutation and 37 lesions with MAP3K3 mutation were located in the brainstem or supratentorial regions. In the CCM gene mutation group, no differences were observed in hemorrhagic episodes and symptoms between brainstem and supratentorial CCM lesions; the difference in lesion size between brainstem and supratentorial CCMs could not be assessed because of limited data. In the MAP3K3-mutant group, patients with lesions located in the
brainstem were more likely to present with focal neurological deficits than those with supratentorial lesions, and no difference was observed in hemorrhagic episodes or lesion size between brainstem and supratentorial CCM lesions (Supplementary Table 2).

MAP3K3 mutation has different effects on ZO-1 expression compared with CCM2 knockdown in HUVECs

Recurrent hemorrhage is the major presentation of CCMs, and the mechanisms underlying CCM hemorrhage include loss of cell-cell junctions and a local increase in antithrombotic molecules: the CCM endothelium is associated with unstable endothelial cell-cell contacts and locally elevated expression of the anticoagulant endothelial receptor TM (20-24). We hypothesized that the association of MAP3K3 mutations with a lower risk of hemorrhage events might result from the biological effects of the MAP3K3 mutation on the expression of TM and TJ proteins. To investigate the expression of cell-cell junction proteins in MAP3K3 c.1323C>G-mutant and CCM2-knockdown endothelial cells, we infected HUVECs with lentiviruses overexpressing MEKK3-I441M (MAP3K3 encodes MEKK3) or wild-type MEKK3 (WT), and with CCM2-knockdown (shCCM2) or negative control (shNC) lentiviruses, respectively. Compared with the WT group, the expression of ZO-1, an essential cell-cell junction protein that plays a vital role in TJ formation, was not reduced after MEKK3-I441M overexpression, as shown by Western blotting and immunofluorescence staining (Figures 2A,C and Supplementary Figures 1A,C); after MEKK3-I441M overexpression, Occludin expression was reduced, whereas Claudin-5 and VE-cadherin expression levels were not (Figure 2A and Supplementary Figure 1A). Previous studies have indicated that high KLF2 and KLF4 expression levels may result in decreased ZO-1 expression and that p38 activation can increase ZO-1 expression (20, 25, 26).
Our previous findings suggested that MEKK3-I441M enhances both ERK5-KLF2/4 and p38 signaling, while CCM2 knockdown only activates ERK5-KLF2/4 signaling (13). Therefore, we hypothesized that MEKK3-I441M could downregulate ZO-1 expression by increasing KLF2/4 expression while upregulating ZO-1 expression by activating p38 signaling. Western blotting showed that the KLF2, KLF4, and phospho-p38 levels were significantly increased after MEKK3-I441M overexpression compared with those in the wild-type group (Figure 2A and Supplementary Figure 1A). Consistent with previous studies (20, 27), after CCM2 was inactivated in HUVECs by lentivirus, Western blotting showed that ZO-1 expression was significantly decreased compared with that in the controls (Figure 2D and Supplementary Figure 1D), and immunofluorescence staining also showed that ZO-1 expression was obviously decreased after CCM2 knockdown (Supplementary Figure 3A). In addition to ZO-1, Claudin-5, Occludin, and VE-cadherin also showed decreased expression levels after CCM2 knockdown (Figure 2D and Supplementary Figure 1D). These findings suggest that MAP3K3 mutation has different effects on ZO-1 expression than CCM2 knockdown.

MAP3K3 mutation has distinct effects on TM expression compared with CCM2 knockdown in HUVECs

Thrombomodulin (TM) is a 557-amino acid protein with a broad cell and tissue distribution consistent with its wide-ranging physiological roles. TM is expressed on the luminal surface of vascular endothelial cells in both large vessels and capillaries, and its primary function is to mediate endothelial thromboresistance (28). Previous studies have shown that TM levels are increased in human CCM lesions, as well as in the plasma of patients with CCMs. In mice, endothelial-specific genetic inactivation of KRIT1 or PDCD10, which causes CCM formation, results in increased levels of vascular TM.
Increased TM expression occurs because of the upregulation of the transcription factors KLF2 and KLF4 consequent to the loss of KRIT1 or PDCD10, and increased TM expression contributes to CCM hemorrhage (24). To investigate TM expression in MAP3K3 c.1323C>G-mutant and CCM2-knockdown endothelial cells, we infected HUVECs with MEKK3-I441M, WT, shCCM2, and shNC lentiviruses, respectively. Previous studies have shown that KLF2 and KLF4 can increase TM expression and that activation of the NF-κB pathway can decrease TM expression (24, 29). In our study, after overexpression of MEKK3-I441M, the expression of KLF2, KLF4, and phospho-NF-κB was significantly increased compared with that in wild-type cells, and Western blotting and immunofluorescence staining showed that TM expression was elevated in MEKK3-I441M-overexpressing HUVECs (Figures 3A,C and Supplementary Figures 2A,C). Interestingly, after treatment with pyrrolidinedithiocarbamate ammonium, an NF-κB signaling inhibitor, TM expression was further upregulated in MEKK3-I441M-overexpressing HUVECs (Figures 3B,C and Supplementary Figures 2B,C). Consistent with a previous study (24), Western blotting showed that, compared with the control, TM expression was significantly increased and phospho-NF-κB was not activated in CCM2-knockdown HUVECs (Figure 3D and Supplementary Figure 2D). Immunofluorescence staining confirmed that TM expression was increased after CCM2 knockdown (Supplementary Figure 3B). These findings suggest that MAP3K3 mutation has distinct effects on TM expression compared with CCM2 knockdown.

Figure 2 legend: (A) The expression of phospho-p38 (p-p38), phospho-ERK5 (p-ERK5), KLF2, and KLF4 was increased, but the expression of ZO-1, Claudin-5, and VE-cadherin was not decreased, in MEKK3-I441M-overexpressing HUVECs, as shown by Western blotting. (B, C) After treatment with doramapimod (Dora), an inhibitor of p38 signaling, ZO-1 expression was significantly downregulated. (D) Western blotting showed that, compared with shNC, the expression of ZO-1, Claudin-5, Occludin, and VE-cadherin was significantly decreased in CCM2-knockdown HUVECs. HUVECs were infected with CCM2-knockdown lentivirus (shCCM2) or a negative control (shNC). One representative experiment is shown. Scale bar, µm.

Figure 3 legend: (A) The expression of phospho-NF-κB (p-NF-κB) and TM was highly increased compared with that in WT. (B, C) After treatment with pyrrolidinedithiocarbamate ammonium (PDTC), an NF-κB signaling inhibitor, TM expression was further upregulated. (D) Western blotting showed that, compared with shNC, TM expression was highly increased in CCM2-knockdown HUVECs. HUVECs were infected with CCM2-knockdown lentivirus (shCCM2) or a negative control (shNC). One representative experiment is shown. Scale bar, µm.

Comparison of ZO-1 and TM expression in surgical CCM samples with MAP3K3 and CCM gene mutations

To validate the different effects of MAP3K3 and CCM gene mutations on ZO-1 and TM expression, we performed immunohistochemical staining on surgical samples with MAP3K3 mutation and CCM gene mutations and on normal arteries: 3 MAP3K3-mutant samples, 3 CCM-mutant samples, and 3 superficial temporal arteries as controls. Immunohistochemical staining showed that ZO-1 expression in the samples harboring CCM gene mutations was significantly lower than that in the control samples (p < 0.0001, t-test), whereas ZO-1 expression in the MAP3K3-mutant samples did not differ from that in the control samples (p = 0.3810; t-test) (Figure 4A). The expression of Claudin-5 and VE-cadherin in the lesions with CCM gene mutations was significantly lower than that in the control samples (p < 0.05; t-test), while no difference was found in VE-cadherin expression between the MAP3K3-mutant and control samples.
The TM expression level in both CCM gene and MAP3K3 mutant samples was significantly higher than that in the control samples (p < 0.0001; t-test); however, the level of TM in the CCM gene mutant lesions was significantly higher than that in the MAP3K3 mutant lesions (p < 0.0001; t-test) (Figure 4D). These findings suggest that the expression level of TM differs between CCM gene mutant lesions and MAP3K3 mutant lesions. Additionally, we performed immunohistochemical staining to detect the expression of the angiogenic markers Endoglin, VEGF, PCNA, HIF-1α, and Flk1 in CCM mutant lesions and in normal superficial temporal arteries as controls. Compared with the controls, the expression of Endoglin, VEGF, and PCNA was increased in CCM mutant lesions, and there was no significant difference in HIF-1α and Flk1 expression between CCM mutant lesions and controls (Supplementary Figure 4). Our results are consistent with previous studies (30-35).
Discussion
In this study, we demonstrated that MAP3K3 mutation presents distinct clinical characteristics compared with CCM gene mutations: MAP3K3 mutation leads to less destruction of the blood-brain barrier, and less accumulation of local anticoagulant molecules in the endothelium may explain its lower risk of hemorrhage events. Our results imply that simplex CCMs may have two distinct clinical subtypes. Currently, molecular classifications are widely used for intracranial tumors, particularly gliomas. One study reported that a molecular classification of glioblastoma involving IDH1, PDGFRA, EGFR, and NF1 substantially benefits the prediction of prognosis and response to therapy in glioblastoma patients (36). Vascular anomalies can be caused by inherited or somatic genetic mutations (13, 37, 38). The identification of inherited and somatic mutations in vascular anomalies has led to the evaluation of tailored strategies with preexisting cancer drugs that interfere with these signaling pathways (39).
However, the molecular classification of simplex vascular diseases has not yet been well established. In this study, we showed that simplex CCMs might comprise two clinical subtypes corresponding to CCM gene and MAP3K3 somatic mutations. The two subclasses of simplex CCMs had different risks of hemorrhage events: CCM gene mutant lesions were susceptible to frequent overt hemorrhage, whereas MAP3K3 mutant lesions rarely led to overt hemorrhage and remained stable. Furthermore, somatic mutations of the two genotypes demonstrated different effects on anticoagulation and TJs in the endothelium, at least partially explaining the mechanism underlying the specific clinical manifestations. Our findings may contribute to predicting prognosis and guiding treatment choices in patients with simplex CCMs. Our study had some limitations. The genotypes of simplex CCMs are difficult to obtain in patients under long-term observation, because surgical samples are rarely available from patients followed over long periods. Therefore, we were unable to investigate lesion evolution with confirmed genotypes over long-term follow-up in a large cohort, and larger cohort studies are needed to further strengthen our results. In addition, the expression levels of TJ proteins and TM consequent to CCM gene mutation and MAP3K3 mutation were examined only in vitro, and the findings require confirmation in animal models.
Conclusion
Compared with CCM gene mutations, simplex CCMs with MAP3K3 mutation only occasionally present with overt hemorrhage, which is associated with the biological function of the MAP3K3 mutation in the endothelium. Future studies in animal models and larger CCM cohorts are needed to further strengthen these results.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author/s.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board of Tiantan Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. The animal study was reviewed and approved by the Institutional Review Board of Tiantan Hospital. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
RH and JW designed the study, conducted experiments, analyzed and interpreted the data, and drafted the manuscript for intellectual content. Y-FS, J-CW, HL, and Y-MJ collected and interpreted the data and revised the manuscript for intellectual content. H-YX, J-ZZhan, S-ZZ, and Q-HH collected the data and revised the manuscript for intellectual content. SW and J-ZZhao designed the study and revised the manuscript for intellectual content. YC provided overall oversight of the research. All authors contributed to the article and approved the submitted version.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Intranasal Corticosteroids in Management of Acute Sinusitis: A Systematic Review and Meta-Analysis PURPOSE Acute sinusitis is a common condition in ambulatory care, where it is frequently treated with antibiotics, despite little evidence of their benefit. Intranasal corticosteroids might relieve symptoms; however, evidence for this benefit is currently unclear. We performed a systematic review and meta-analysis of the effects of intranasal corticosteroids on the symptoms of acute sinusitis. METHODS We searched MEDLINE, EMBASE, the Cochrane Central register of Controlled Trials (CENTRAL), and Centre for Reviews and Dissemination databases until February 2011 for studies comparing intranasal corticosteroids with placebo in children or adults having clinical symptoms and signs of acute sinusitis or rhinosinusitis in ambulatory settings. We excluded chronic/allergic sinusitis. Two authors independently extracted data and assessed the studies’ methodologic quality. RESULTS We included 6 studies having a total of 2,495 patients. In 5 studies, antibiotics were prescribed in addition to corticosteroids or placebo. Intranasal corticosteroids resulted in a significant, small increase in resolution of or improvement in symptoms at days 14 to 21 (risk difference [RD] = 0.08; 95% CI, 0.03–0.13). Analysis of individual symptom scores revealed most consistently significant benefits for facial pain and congestion. Subgroup analysis by time of reported outcomes showed a significant beneficial effect at 21 days (RD = 0.11; 95% CI, 0.06–0.17), but not at 14 to 15 days (RD = 0.05; 95% CI, −0.01 to 0.11). Meta-regression analysis of trials using different doses of mometasone furoate showed a significant dose-response relationship (P=.02). CONCLUSIONS Intranasal corticosteroids offer a small therapeutic benefit in acute sinusitis, which may be greater with high doses and with courses of 21 days’ duration. Further trials are needed in antibiotic-naïve patients. 
A Cochrane review of 4 RCTs showed a small beneficial effect on improvement of symptoms at 15 to 21 days; however, interpretation was limited by both high heterogeneity and differing outcome measures used in the primary studies. 13 A recent large RCT found no difference between intranasal corticosteroids and placebo for sinusitis. 14 This trial was not included in the recent Cochrane review. 13 Given the conflicting evidence, there is a pressing clinical need to clarify whether intranasal corticosteroids should be prescribed for patients with acute sinusitis. Accordingly, we undertook a systematic review of the most recent evidence to attempt to resolve this question.
Search Strategy and Selection
We included in our meta-analysis RCTs that compared intranasal corticosteroids with placebo in children or adults who had clinical symptoms and signs of acute sinusitis or rhinosinusitis, in outpatient (ambulatory) settings. We excluded studies examining patients with chronic/allergic sinusitis and studies performed exclusively in patient populations selected because of chronic underlying health conditions (eg, immunocompromised patients). We searched MEDLINE, EMBASE, the Cochrane Library including the Cochrane Central Register of Controlled Trials (CENTRAL), the Database of Reviews of Effectiveness (DARE), and the National Health Service Health Economics Database from the beginning of each database until February 2011 using a maximally sensitive strategy. 15 Medical Subject Heading (MeSH) terms used included rhinosinusitis, sinusitis, and corticosteroids (including dexamethasone, betamethasone, prednisone, and all variations of these terms) and viral and bacterial upper respiratory tract pathogens (full search strategy available from authors). Two authors independently reviewed the titles and abstracts of electronic searches, obtaining full-text articles to assess for relevance where necessary. Disagreements were resolved by discussion with a third author.
We performed citation searches of all full-text papers retrieved.
Data Extraction and Quality Assessment
Two authors independently assessed the methodologic quality of studies. Quality was assessed using the criteria of allocation concealment, randomization, comparability of groups at baseline, blinding, treatment adherence, and percentage participation. Two authors independently extracted data using an extraction template. In both data extraction and quality assessment, disagreements were documented and resolved by discussion with a third author. Primary outcomes included the proportion of participants with improvement or complete resolution of symptoms. Secondary outcomes included mean change in symptom scores over 0 to 21 days, adverse events, relapse rates, and days missed from school/work. Where necessary, we used Grab It XP Microsoft Excel software (http://www.datatrendsoftware.com) to extract data from figures.
Data Synthesis and Analysis
For pooled analysis of dichotomous outcomes, we calculated the risk difference (RD), 95% CI, and number needed to treat (NNT). For continuous variables, we used the weighted mean difference and 95% CIs. We tested dose response by undertaking a post hoc subgroup analysis according to intranasal corticosteroid dosage. We used meta-regression in Stata (StataCorp, LP) to test subgroup interactions on the outcomes and the I² statistic to measure the proportion of statistical heterogeneity for each outcome. 16 Where no heterogeneity was present, we performed a fixed-effect meta-analysis. Where substantial heterogeneity was detected, we looked for the direction of effect and considered the reasons for this heterogeneity. Where applicable, we used a random-effects analysis or considered not pooling the outcomes and reporting the reasons for this.
Study Characteristics
We identified 3,257 potentially relevant study records, of which 21 were relevant to acute sinusitis/rhinosinusitis (Figure 1).
We excluded 15 of these studies for the following reasons: 5 were abstracts only with no full paper published or available from the authors, 3 were not limited to acute sinusitis, 3 examined oral steroids, 3 did not directly compare steroids and placebo, and 1 examined prevention of acute sinusitis. Five trials prescribed an antibiotic (such as co-amoxiclav or cefuroxime) to patients in both groups. One of these trials 19 prescribed intranasal xylometazoline hydrochloride to all participants before administration of the study spray for the first 3 days. Two trials reported outcomes based on computed tomography scans of sinuses. 18,21 All 6 included studies demonstrated adequate allocation concealment, blinding, percentage participation, and comparability of groups both at baseline and in provision of care apart from the intervention; however, 3 studies did not report the method of randomization (Table 2). We therefore performed a sensitivity analysis excluding these studies.
Resolution or Improvement of Symptoms at Days 14 to 21
In 5 RCTs 14,17,18,20,21 that assessed resolution or improvement of symptoms at days 14 to 21, intranasal steroids had a modest beneficial clinical effect, with an RD of 0.08 (95% CI, 0.03-0.13; P = .004; I² = 47%) and an NNT of 13. This overall result was similar even with the removal of the 2 trials of lower quality, 18,21 with an RD of 0.07 (95% CI, 0.01-0.12; P = .02; I² = 43%). Given that both analyses showed heterogeneity, however, we performed subgroup analyses on outcome timing and on dosage. In the 2 trials 14,20 that reported the proportion of participants with persistent symptoms at 10 days after onset of treatment, there was no benefit of intranasal corticosteroids, with an RD of 0.06 (95% CI, -0.09 to 0.22; P = .41; I² = 47%).
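The pooled risk differences and NNT quoted above follow from a standard inverse-variance fixed-effect calculation. A minimal sketch is below; the 2×2 counts are hypothetical illustrations, not the actual trial data.

```python
import math

def risk_difference(e_t, n_t, e_c, n_c):
    """Risk difference and its standard error for one trial.
    e_t/n_t: events/total in the treatment arm; e_c/n_c: control arm."""
    p_t, p_c = e_t / n_t, e_c / n_c
    rd = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return rd, se

def pooled_rd_fixed(trials):
    """Inverse-variance fixed-effect pooling of per-trial risk differences."""
    w_sum = rd_sum = 0.0
    for e_t, n_t, e_c, n_c in trials:
        rd, se = risk_difference(e_t, n_t, e_c, n_c)
        w = 1.0 / se**2          # weight = inverse of the variance
        w_sum += w
        rd_sum += w * rd
    rd_pooled = rd_sum / w_sum
    se_pooled = math.sqrt(1.0 / w_sum)
    ci = (rd_pooled - 1.96 * se_pooled, rd_pooled + 1.96 * se_pooled)
    nnt = 1.0 / rd_pooled if rd_pooled != 0 else float("inf")
    return rd_pooled, ci, nnt

# Hypothetical counts: (events_treatment, n_treatment, events_placebo, n_placebo)
trials = [(180, 250, 160, 250), (200, 300, 175, 300)]
rd, ci, nnt = pooled_rd_fixed(trials)
```

With these illustrative counts the pooled RD is about 0.08, and the NNT is simply its reciprocal, which is how an RD of 0.08 yields an NNT of roughly 13.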
Two trials reported a small but significant 7% absolute improvement in physician evaluation scores at 21 days in patients receiving intranasal steroids vs placebo (Nayak et al 18: 61% vs 53%, P = .006; Meltzer et al 21: 68% vs 61%, P < .01).
Individual Symptom Scores
Three RCTs reported individual symptom scores in 5 groups of patients who received different doses of mometasone furoate compared with placebo. 17,18,21 For each group, the symptoms of facial pain, nasal congestion, headache, rhinorrhea, postnasal drip, and cough were reported on a scale of 0 (none) to 3 (severe) at baseline and averaged across the first 15 days of therapy (see the Supplemental Appendix at http://www.annfammed.org/content/10/3/241/suppl/DC1 for full data). Compared with their counterparts who received placebo, patients who received intranasal corticosteroids in these 3 trials reported significantly greater improvement in facial pain (3 of the trials), congestion (3), rhinorrhea (2), headache (1), and postnasal drip (1) (all P < .05).
Adverse Events
One trial reported that no adverse events occurred with steroid therapy 19; 2 trials reported no serious adverse events in either group 14,20; and the remaining trials reported that adverse events were mainly mild or moderate.
Relapse and Recurrence
Three trials 17,19,20 reported the rate of relapse or recurrence of acute sinusitis up to 2 months after initiation of treatment. Recurrence occurred in 5% to 15% of patients taking intranasal corticosteroids and 4% to 37% taking placebo.
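The I² values reported alongside each pooled estimate quantify between-trial heterogeneity via Cochran's Q. A minimal sketch, using hypothetical (effect, standard error) pairs rather than the review's data:

```python
def i_squared(estimates):
    """Cochran's Q and Higgins' I^2 for a list of (effect, standard_error) pairs."""
    w = [1.0 / se**2 for _, se in estimates]          # inverse-variance weights
    pooled = sum(wi * e for wi, (e, _) in zip(w, estimates)) / sum(w)
    # Q: weighted sum of squared deviations from the pooled estimate
    q = sum(wi * (e - pooled) ** 2 for wi, (e, _) in zip(w, estimates))
    df = len(estimates) - 1
    # I^2: share of total variation attributable to heterogeneity, floored at 0
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical risk differences and standard errors from five trials
estimates = [(0.05, 0.03), (0.11, 0.04), (0.02, 0.05), (0.09, 0.03), (0.13, 0.05)]
q, i2 = i_squared(estimates)
```

An I² near 0% suggests the trials estimate the same effect; values around 47%, as reported above, indicate moderate heterogeneity and motivated the subgroup analyses.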
Key Findings
This systematic review demonstrates that intranasal corticosteroids offer a small but significant symptomatic benefit in acute sinusitis. This effect is most marked when patients are given longer durations of treatment (21 days) and higher doses of the medication. Our analysis of individual symptom scores suggests that facial pain and nasal congestion may be most responsive to intranasal corticosteroids. In our main analysis, we found that whereas 66% of patients would experience improvement or resolution of symptoms at 14 to 21 days using placebo, an additional 7% of patients would achieve this outcome with corticosteroids, equating to an NNT of 13. This 7% gain is a relatively small increase in the context of a self-limiting condition, and this clinical benefit must be set against potential harms and economic implications. Our included trials reported no serious adverse events associated with intranasal corticosteroid use and no increase in the frequency of nonserious adverse events compared with placebo. Other potential harms might include effects from systemic absorption; however, the single included trial addressing this outcome found no clinically relevant changes in the hypothalamic-pituitary-adrenal axis, 20 and 2 recent reviews found no evidence of suppression of this axis or of growth suppression with intranasal corticosteroids. 22,23 Only 1 included trial assessed the potential benefit of intranasal corticosteroids for work and quality of life outcomes in acute sinusitis. 20 In this trial, the corticosteroid group had a significantly higher subjective level of work performance (median, 100% vs 90% for placebo); however, there were no differences in work attendance or changes in quality of life as measured by the 20-item Sino-Nasal Outcome Test (SNOT-20) 24 and the 12-Item Short Form Health Survey.
25 An individual patient with acute sinusitis may therefore experience negligible adverse effects of intranasal corticosteroids in return for a small increase in the likelihood of earlier resolution. This may be an acceptable trade-off for some patients. The therapeutic benefit at the population level is currently unclear. Our subgroup analysis suggests the benefit of intranasal corticosteroids is most marked at 21 days, with an additional 11 patients experiencing symptom resolution for every 100 treated. In contrast, this effect was not significant at 15 days. Our subgroup analysis had only a small number of trials, however, and further research is needed to clarify the clinical benefit at 15 days or less (as discussed below). Clearly, patients are likely to experience pronounced symptoms in the first 7 to 14 days of their illness and may be less willing to consider a therapy that does not offer an increased likelihood of improvement in this earlier time period. We found evidence of a dose-response relationship for mometasone furoate nasal spray: larger doses were associated with a greater likelihood of symptom resolution. We had insufficient data to assess whether other types of intranasal corticosteroids showed a similar effect, or whether this higher dose was associated with an increase in adverse events. On the basis of our review, when intranasal corticosteroids are used, we recommend doses of 800 μg of mometasone furoate daily.
Comparison With Existing Literature
The small benefit of intranasal corticosteroids for the broad measure of symptom resolution or improvement at 14 to 21 days was similar in direction and size to that found in a recent Cochrane review. 13 In both cases, however, marked heterogeneity was present.
We have demonstrated that this heterogeneity arises from both the variation in the timing of the outcome measure and the dose of intranasal corticosteroids used. [Figure: relative risk of symptom resolution by daily dosage of mometasone furoate (μg), from Nayak et al 18 and Meltzer et al. 17,21] We found larger effect sizes in subgroup analyses by dose and timing of outcome measure. The recent Cochrane review may therefore have underestimated the benefit of intranasal corticosteroids. Williamson et al 14 acknowledged that their RCT was underpowered to detect clinically useful effects, and the study may have used an inappropriately low dose of budesonide. 26
Limitations
Important limitations of this systematic review include, first, that 5 of the studies prescribed antibiotics to both steroid and placebo groups. Williamson et al 14 found no interaction between antibiotic therapy and steroid therapy using a factorial design, which argues against a synergistic effect of these drug classes. Second, included studies varied in the types and doses of steroids, duration of therapy, and outcome measures reported. In particular, the definition of resolution of symptoms varied among the studies, and all measures of resolution involved subjective assessment. These factors prevented pooling of all outcomes and are likely to have contributed to the heterogeneity of the data. Third, included studies were underpowered to detect rare adverse effects of corticosteroids, as well as relapse rates and days missed from work or school. Fourth, the limited number of trials meant we were unable to assess publication bias using funnel plots or place undue weight on the findings from small subgroup analyses. Finally, in 4 of the 6 included trials, radiologic or endoscopic evidence of acute sinusitis was an inclusion criterion.
In ambulatory care, it is impractical and inappropriate to perform radiologic investigations on patients with symptoms of sinusitis.
Recommendations for Research
This review highlights the need for adequately powered RCTs comparing intranasal corticosteroids with placebo, in the absence of antibiotics, for symptom relief in acute sinusitis. We recommend that trials use at least 21 days of therapy with high-dose mometasone furoate nasal spray. Inclusion criteria should be based on a clinical scoring system rather than radiologic evidence. Self-report and telephone follow-up should be used to assess the time to complete resolution of symptoms and also the time to onset of symptom resolution, which will be particularly important in clarifying whether there is benefit at time points earlier than 21 days. Recording the duration of symptoms at baseline will also improve our understanding of patterns of symptom resolution. As acute sinusitis is diagnosed in an estimated 31 million Americans annually, 1 a full assessment of economic implications is important. Such an assessment should look at the cost of 21 days of therapy with high-dose mometasone furoate (equivalent to 3 bottles containing 140 × 50-μg doses) and the indirect cost savings in terms of attendance and performance at work or school and quality of life measures (eg, with the SNOT-20 score). 24 These data will improve our understanding of whether the small benefit of this therapy for the individual has larger benefits at the population level. Antibiotics are widely prescribed for acute sinusitis despite limited evidence of beneficial effect; thus, measuring the extent to which intranasal corticosteroids reduce antibiotic prescribing will be highly relevant to clinical practice and policy. A systematic review using individual patient data may improve our ability to combine the data from existing research.
Finally, a double-blind, placebo-controlled trial of the benefit of oral steroids in acute sinusitis has not yet been performed. Since delivery of intranasal corticosteroids to the nasal mucosa may be reduced by nasal congestion, and this may be a factor responsible for our finding of a nonsignificant benefit at 15 days, oral drug delivery might offer earlier and greater symptomatic relief. In summary, on the basis of the current evidence, we believe that intranasal corticosteroids offer a small therapeutic benefit in acute sinusitis and may be most helpful for symptoms of facial pain and nasal congestion. This benefit may be greater with courses of 21 days in duration and with high-dose mometasone furoate. Future trials in antibiotic-naïve patients that clarify the time course of clinical benefit and the impact on work and quality of life will be important to guide management of this common condition in family practice.
Disclaimer: Neither the British Society for Antimicrobial Chemotherapy nor the National Institute of Health Research had any role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.
The bacterial MrpORP is a novel Mrp/NBP35 protein involved in iron-sulfur biogenesis
Despite recent advances in understanding the biogenesis of iron-sulfur (Fe-S) proteins, most studies have focused on aerobic bacteria as model organisms. Accordingly, multiple players have been proposed to participate in the Fe-S delivery step to apo-target proteins, but critical gaps exist in the knowledge of Fe-S protein biogenesis in anaerobic organisms. Mrp/NBP35 ATP-binding proteins are a subclass of the soluble P-loop containing nucleoside triphosphate hydrolase superfamily (P-loop NTPases) known to bind and transfer Fe-S clusters in vitro. Here, we report investigations of a novel atypical two-domain Mrp/NBP35 ATP-binding protein named MrpORP, associating a P-loop NTPase domain with a dinitrogenase iron-molybdenum cofactor biosynthesis domain (Di-Nase). Characterization of full-length MrpORP, as well as of its two domains, showed that both domains bind Fe-S clusters. We provide in vitro evidence that the P-loop NTPase domain of MrpORP can efficiently transfer its Fe-S cluster to apo-target proteins of the ORange Protein (ORP) complex, suggesting that this novel protein is involved in the maturation of these Fe-S proteins. Last, we showed for the first time, by fluorescence microscopy imaging, a polar localization of a Mrp/NBP35 protein.
Results
The MrpORP proteins are two-domain proteins. Querying the InterPro portal indicated that MrpORP belongs to the Mrp/NBP35 ATP-binding proteins (IPR019591), a very large protein family ubiquitous across the domains of Life (Supplementary Table S1). This protein family encompasses the prokaryotic Mrp and ApbC, and the eukaryotic Nbp35 and Cfd1 proteins involved in Fe-S cluster biogenesis 12,16,24. Most of the 17,511 members of the Mrp/NBP35 ATP-binding protein family contain a single conserved functional domain, the P-loop containing nucleoside triphosphate hydrolase domain (P-loop NTPase, IPR027417).
However, in a few cases this domain is found associated with other domains (Supplementary Table S2). In MrpORP, the P-loop NTPase domain is associated with a dinitrogenase iron-molybdenum cofactor biosynthesis domain (Di-Nase, IPR003731), which is usually found in proteins involved in the biosynthesis of the iron-molybdenum cofactor (FeMo-co), such as NifB and NafY 25. The association between P-loop NTPase and Di-Nase domains was observed in 99 members of the Mrp/Nbp35 ATP-binding family (Supplementary Table S3). They corresponded mainly to proteins from anaerobic organisms, such as Thermodesulfobacteria, Clostridia and Desulfovibrio (Supplementary Table S3). The phylogenetic analysis of the Mrp/NBP35 ATP-binding protein family showed that sequences from the three domains of Life are intermixed on the tree (Fig. 1), indicating that horizontal gene transfers among domains occurred during the diversification of this protein family. According to this phylogeny, the MrpORP protein is more closely related to the eukaryotic Nbp35 and Cfd1 than to the bacterial ApbC and Mrp (Fig. 1, purple triangles). The 99 sequences harboring an association between the P-loop NTPase and Di-Nase domains do not form a monophyletic group (Fig. 1, pink triangles). In fact, they belong to different parts of the tree, indicating that the association between these two domains occurred several times independently. The phylogenetic analysis of the 110 sequences displaying the highest sequence similarity with the P-loop NTPase domain of MrpORP disclosed its closest relatives, which are mainly from Deltaproteobacteria, and again revealed phylogenetic relationships at odds with current systematics, confirming that the evolution of the Mrp/NBP35 ATP-binding family has been heavily impacted by horizontal gene transfers (Supplementary Fig. S1).
Again, sequences harboring both the P-loop NTPase and Di-Nase domains appeared intermixed with sequences containing only the P-loop NTPase domain, confirming that such an association arose many times during evolution. Genes coding for MrpORP proteins from the sulfate-reducing deltaproteobacteria DvH and DdG20 are both located in the orp gene cluster (Supplementary Fig. S2) 19,22. The MrpORP proteins from DvH and DdG20 have a molecular mass of 50 kDa and 43 kDa, respectively, with a P-loop NTPase domain of 30 kDa and a Di-Nase domain of 13 kDa, the DvH MrpORP exhibiting a supplementary linker between the two domains (Fig. 2). The sequence alignment of the two MrpORP with E. coli Mrp, S. enterica ApbC, S. cerevisiae Nbp35 and Cfd1 showed that the typical deviant Mrp Walker A (GKGGhGK[ST]) and Walker B motifs, and the CXXC motif, are conserved in MrpORP (Fig. 2) 12,16,26. We also found that four cysteine residues are present in the N-terminal part of the P-loop NTPase domain of MrpORP proteins (Fig. 2, blue crosses), with only one of them conserved in eukaryotic Nbp35 (Fig. 2, black asterisk). These cysteine residues might form a non-canonical motif: CX3CX20CXC in MrpORP of DvH and CXCX5CX4C in MrpORP of DdG20. The association between a P-loop NTPase and a Di-Nase domain raises the question of the role of MrpORP proteins. We therefore investigated the biochemical properties of this new type of Mrp-like protein.
The conserved CXXC motif of the P-loop NTPase domain of MrpORP binds a Fe-S cluster. We first investigated the presence of a Fe-S cluster bound to MrpORP. The UV-visible spectrum of the aerobically isolated DdG20 MrpORP purified from E. coli showed no absorbance corresponding to a Fe-S signature (Fig. 3). After reconstitution, the P-loop NTPase domain exhibited a Fe-S cluster signature, but with an A400/A280 ratio of 0.18, lower than that of the full-length protein (Fig. 3, inset).
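Cysteine-spacing motifs such as those described above (CXXC, CX3CX20CXC, CXCX5CX4C) translate directly into regular expressions and can be located in a protein sequence programmatically. A minimal sketch; the toy sequence is invented for illustration, not an actual MrpORP sequence:

```python
import re

# Motif notation -> regex: X (any residue) becomes ".", Xn becomes ".{n}"
MOTIFS = {
    "CXXC": r"C.{2}C",
    "CX3CX20CXC": r"C.{3}C.{20}C.C",
    "CXCX5CX4C": r"C.C.{5}C.{4}C",
}

def find_motifs(seq):
    """Return {motif_name: [(start_index, matched_substring), ...]}.
    Note: re.finditer reports non-overlapping matches only."""
    return {
        name: [(m.start(), m.group()) for m in re.finditer(pattern, seq)]
        for name, pattern in MOTIFS.items()
    }

# Hypothetical toy sequence containing one CXXC motif (C at 5, C at 8)
toy = "MKTAYCGHCLLKDE"
hits = find_motifs(toy)
```

On the toy sequence this reports a single CXXC match at position 5 ("CGHC") and no hits for the longer four-cysteine motifs, which require more cysteines than the toy contains.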
The two conserved cysteine residues, Cys215 and Cys218, of the MrpORP CXXC motif were then replaced by alanine residues by site-directed mutagenesis (Fig. 2, red asterisks). The UV-visible absorption spectrum of the corresponding reconstituted variant protein MrpORP C215A/C218A exhibited a drastic decrease in absorbance at 400 nm, with an A400/A280 ratio of 0.08 compared with 0.3 for the wild-type protein (Fig. 3, dashed line). From these results, we conclude that the CXXC motif of the P-loop NTPase domain of MrpORP is involved in the binding of a Fe-S cluster.
The Di-Nase domain of MrpORP binds a 3Fe-4S cluster. We noticed that the reconstituted MrpORP C215A/C218A exhibited a weak absorbance around 400 nm compared with the apo-protein spectrum and that the P-loop NTPase domain had an A400/A280 ratio lower than that of the full-length MrpORP (Fig. 3). To explore the hypothesis that the Di-Nase domain could bind a Fe-S cluster, the Strep-tagged Di-Nase domain of MrpORP (MrpORP_CT) was anaerobically produced and isolated from DvH. The as-isolated domain has a brown color and exhibited a UV-visible spectrum with a broad absorption band at 420 nm and a shoulder at 325 nm (Fig. 4A, solid line), with an A420/A277 ratio of 0.57. These features disappeared upon reduction with sodium dithionite (Fig. 4A, dashed line). MrpORP_CT contained 3.2 ± 0.1 Fe per polypeptide chain and has an extinction coefficient of 15200 M−1 cm−1 at 420 nm (around 4750 M−1 cm−1 per iron), a value within the expected range for Fe-S cluster-containing proteins. As previous studies described Mrp/Nbp35 proteins (Nbp35, Ind1 and ApbC) as dimeric proteins 9,12,16,26, the quaternary structure of MrpORP_CT was analyzed by gel filtration (Supplementary Fig. S3). The MrpORP_CT domain eluted mainly as a dimer (Supplementary Fig. S3).
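The per-iron extinction coefficient quoted above is simply the molar coefficient divided by the iron stoichiometry, and the Beer-Lambert law then converts an absorbance reading into a concentration. A quick sanity check of that arithmetic, using the values from the text:

```python
# Values quoted in the text for the MrpORP_CT domain
epsilon_420 = 15200.0   # molar extinction coefficient at 420 nm, M^-1 cm^-1
fe_per_chain = 3.2      # measured iron atoms per polypeptide chain

# Per-iron coefficient: 15200 / 3.2 = 4750 M^-1 cm^-1, matching the text
epsilon_per_iron = epsilon_420 / fe_per_chain

def concentration_molar(a420, path_cm=1.0, eps=epsilon_420):
    """Beer-Lambert law: c = A / (eps * l), returned in mol/L."""
    return a420 / (eps * path_cm)
```

For example, an A420 of 0.152 in a 1 cm cuvette corresponds to a protein concentration of about 10 µM by this coefficient.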
In the as-isolated form, MrpORP_CT exhibits a perpendicular-mode EPR signal with g values at 2.012, 2.009 and 1.96 whose intensity decreases with increasing temperature (data not shown), while the dithionite-reduced form is EPR silent (Fig. 4B). In the [3Fe-4S]1+ state, this center presents three high-spin ferric ions, which are spin coupled to an Stotal = 1/2 state; one-electron reduction to the [3Fe-4S]0 oxidation state yields an integer-spin (Stotal = 2) species, which is not observed in the perpendicular measurement mode (Fig. 4B). The g values are not similar to the ones found in typical [3Fe-4S] cluster-containing proteins, but this protein does not present in its primary sequence the usual binding motif for this type of metal cluster. Nevertheless, the g values determined for MrpORP_CT are close to the ones reported for ThiI, which has been shown to bind a [3Fe-4S] cluster 27. Therefore, the spectroscopic data, together with the presence of only 3 conserved cysteine residues in the primary sequence of this domain (Fig. 2), support the hypothesis of a cuboidal [3Fe-4S]1+ cluster (Stotal = 1/2) being present in MrpORP_CT, which can be reduced to [3Fe-4S]0 (Stotal = 2) 28. The cyclic voltammogram of MrpORP_CT presents a reversible signal at −445 ± 10 mV (signal I in Fig. 4C) and another at −645 ± 10 mV (signal II in Fig. 4C), of which only the cathodic counterpart is observed.
MrpORP transfers its Fe-S cluster to apo-aconitase. We then analyzed whether holo-MrpORP was able to transfer, in vitro, Fe-S clusters to an apo-protein, such as aconitase B (AcnB). AcnB is known to be active only when its [4Fe-4S] cluster is inserted into the protein 29. AcnB was purified aerobically from recombinant E. coli and its Fe-S cluster was completely removed to obtain inactive apo-AcnB.
Reconstituted Mrp ORP was then incubated with apo-AcnB in an anaerobic chamber in the presence of DTT, and the enzymatic activity of AcnB was determined at periodic time intervals. The AcnB activity increased with time in the presence of a fixed concentration of holo-Mrp ORP (Fig. 5A). No significant AcnB activity was observed when apo-Mrp ORP was used instead of holo-Mrp ORP (data not shown) or when 15 μM of Fe 2+ and S 2− were added instead of holo-Mrp ORP (Fig. 5C). Using a fixed concentration of apo-AcnB with various concentrations of holo-Mrp ORP, we determined the amount of holo-Mrp ORP necessary to activate apo-AcnB (Fig. 5B, full square). Approximately 4 μM of holo-Mrp ORP was required to activate 2 μM of AcnB, i.e., a ratio of 2 molecules of holo-Mrp ORP for 1 of AcnB (Fig. 5B, full square). As about 4 iron and 4 sulfide atoms are necessary for aconitase to be active 29 , this suggests that a homodimer of holo-Mrp ORP sharing 4 iron and 4 sulfide atoms transfers them to apo-aconitase. Holo-Mrp ORP is thus able to transfer its Fe-S cluster to apo-aconitase. Additionally, we found that the reconstituted P-loop NTPase domain of Mrp ORP transferred its Fe-S cluster to apo-AcnB, resulting in an AcnB activity comparable to that obtained with the full-length protein (87.4%) (Fig. 5C). When the same experiment was performed with the holo-Di-Nase domain of Mrp ORP, an aconitase activity of 44% of the full-length Mrp ORP activity was observed (Fig. 5C). Altogether, these results show that, in vitro, the Fe-S cluster transferred from Mrp ORP to AcnB is preferentially the one bound to the P-loop NTPase domain. Mrp ORP is able to transfer its Fe-S clusters to its physiological ORP partners. We previously showed that Mrp ORP interacts in vivo with the ORP complex that contains Fe-S binding proteins, especially the Orp3 and Orp4 proteins, which each exhibit two [4Fe-4S] ferredoxin-like motifs 19 .
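The activation stoichiometry reported above can be made explicit with a short bookkeeping sketch; the concentrations are taken from the text, and the 2 Fe contributed per monomer is the interpretation stated there (one [4Fe-4S] cluster bridged across a dimer):

```python
# Cluster-transfer stoichiometry, using the concentrations reported above.
holo_mrp = 4.0      # µM holo-Mrp ORP required for full activation
apo_acnb = 2.0      # µM apo-AcnB activated
mrp_per_acnb = holo_mrp / apo_acnb          # 2 Mrp molecules per AcnB

fe_needed = 4                                # ~4 Fe (and 4 S) per AcnB [4Fe-4S] cluster
fe_per_monomer = fe_needed / mrp_per_acnb    # 2 Fe contributed per Mrp monomer,
# consistent with a [4Fe-4S] cluster shared between the CXXC motifs of a dimer
print(mrp_per_acnb, fe_per_monomer)          # 2.0 2.0
```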
We thus tested whether Mrp ORP was able to transfer its Fe-S clusters to its physiological partner proteins. For this purpose, the His-tagged Orp3 from DvH cell extract, which we systematically co-purified with its partners Orp4 and Orp8, was treated to remove metal centers (Fig. 6, dashed line). In vitro Fe-S transfer assays were performed using the reconstituted holo-form of a Strep-tagged Mrp ORP and the His-tagged apo-Orp3-Orp4-Orp8 complex. The reconstituted holo-Mrp ORP and apo-Orp3-Orp4-Orp8 were incubated for 90 min under anaerobic conditions and the proteins were separated using a Ni-NTA column. After separation, the UV-visible spectrum of the eluted fraction exhibited strong absorption bands at 420 nm and 325 nm with an A 400 /A 280 ratio of 0.42, a spectrum similar to that of the anaerobically purified holo-Orp3-Orp4-Orp8 proteins, which exhibit an A 400 /A 280 ratio of 0.53 (Fig. 6, solid line and inset). These results demonstrate that, in vitro, Mrp ORP can efficiently transfer Fe-S clusters to its physiological partner, the ORP complex. Mrp ORP exhibits a polar localization in DvH. To further characterize Mrp ORP, we then assessed its cellular localization in DvH using fluorescence microscopy imaging (Fig. 7). To achieve this goal, the full-length Mrp ORP was fused to the green fluorescent protein (GFP). In order to express the fusion mrp ORP -gfp gene from the native promoter, the fusion was introduced into the DvH chromosome at the orp locus, replacing the endogenous wild-type mrp ORP gene. Because GFP does not fluoresce in the absence of oxygen, cells were first grown under anaerobic conditions, and the pictures were acquired less than 10 min after contact with air. In our previous study, we showed that this time of air exposure does not affect the localization of a FtsZ-GFP fusion in DvH 30 .
During the initiation step of the DvH cell cycle, the fluorescence signal for Mrp ORP appeared localized at one pole in 78% of the cells (Fig. 7A). Western blot analysis of Mrp ORP -GFP using anti-GFP antibodies revealed only one band, corresponding to the fusion protein Mrp ORP -GFP, suggesting that the integrity of the fusion protein was conserved (Supplementary Fig. S4). The P-loop NTPase domain was mostly located at one (58% of cells) or two poles (37% of cells) (Fig. 7B), whereas the fluorescence of the GFP-Di-Nase fusion was diffuse in the cytoplasm (Fig. 7C). No growth defect was observed regardless of the recombinant strain used. These results reveal a polar spatial localization of Mrp ORP linked to the Mrp/NBP35 domain.

Discussion

P-loop NTPases are one of the largest classes of proteins, with subgroup members involved in a wide variety of essential cellular functions 8 . The Mrp/NBP35 ATP-binding protein subclass comprises proteins present in all three kingdoms of life and mainly involved in Fe-S cluster biogenesis [7][8][9][10][11]15 . In this study, we characterised Mrp ORP , a novel type of Mrp/NBP35 ATP-binding protein. Mrp ORP is distinct from the other members of this family in that it associates a P-loop containing nucleoside triphosphate hydrolase domain (P-loop NTPase) with a dinitrogenase iron-molybdenum cofactor biosynthesis domain (Di-Nase). The phylogenetic analysis of the Mrp/NBP35 ATP-binding protein family showed that the association between both domains occurred several times independently. Characterization of the reconstituted wild-type and mutant Mrp ORP proteins, together with the results described from biochemical analyses of other members of the Mrp/NBP35 ATP-binding protein family, indicated that the conserved CXXC motif of the P-loop NTPase domain coordinates a [4Fe-4S] cluster between two Mrp ORP molecules.
Interestingly, our data suggest that, in spite of the lack of a classical Fe-S binding motif, the Di-Nase domain does bind a [3Fe-4S] cluster that can exist in two redox states, [3Fe-4S] 1+ and [3Fe-4S] 0 , with a reduction potential measured by cyclic voltammetry (Fig. 4C). The possibility of an adventitious cluster is put aside by the fact that similar data were obtained from two different preparations and all the manipulations were performed inside the anaerobic box in a one-day purification procedure. However, additional work is needed for definitive identification of this cluster. From our preliminary results, the cluster in the Di-Nase domain of Mrp ORP seems to be clearly different from the heterometallic sulfide cluster (S 2 MoS 2 CuS 2 MoS 2 ) noncovalently bound to the polypeptide chain of Orp8, another one-domain Di-Nase protein belonging to the ORP complex 31,32 . It is also different from the thioferrate bound by the IssA protein from Pyrococcus furiosus, which belongs to the same family and binds thioferrate through a cationic sequence in the C-terminal tail not found in Orp9 33 . A multiple alignment of the Di-Nase domains of proteins associating this domain with the P-loop NTPase domain, as in Mrp ORP , shows two conserved cysteine/histidine-rich motifs: the CXHFGHCE motif located at the beginning of the Di-Nase domain and the CDH sequence located at the end of the domain (Supplementary Fig. S5). As cysteine and histidine residues can coordinate a Fe-S cluster, our hypothesis is that these conserved residues are involved in the binding of the [3Fe-4S] cluster in the Di-Nase domain of Mrp ORP . Altogether, these results allow us to propose that Mrp ORP is a novel member of the Mrp/NBP35 ATP-binding family which can bind at least two Fe-S clusters, one interdomain [4Fe-4S] cluster in the P-loop NTPase domain and one [3Fe-4S] cluster in the Di-Nase domain.
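For a one-electron couple such as [3Fe-4S]1+/0, the fraction of centers in the oxidized state at a given electrode potential follows from the Nernst equation. A short sketch, using the −445 mV signal reported for Mrp ORP _CT as an illustrative midpoint (the choice of applied potentials is ours, for illustration only):

```python
import math

F = 96485.0     # Faraday constant, C mol^-1
R = 8.314       # gas constant, J mol^-1 K^-1
T = 298.15      # temperature, K

def fraction_oxidized(E, Em, n=1):
    """Nernstian fraction of an n-electron redox couple in the oxidized state
    at applied potential E (V) for a midpoint potential Em (V)."""
    return 1.0 / (1.0 + math.exp(-n * F * (E - Em) / (R * T)))

Em = -0.445  # V, signal I in the cyclic voltammogram of Mrp ORP_CT
print(fraction_oxidized(-0.445, Em))   # 0.5 at the midpoint, by definition
print(fraction_oxidized(-0.345, Em))   # 100 mV above Em: mostly oxidized
```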
We further showed that the Fe-S cluster of Mrp ORP can be transferred to apo-proteins, such as aconitase and the ferredoxin-like proteins (Orp3 and Orp4) of the ORP complex, previously shown to interact with Mrp ORP in vivo 19 . The genomic clustering of mrp ORP with the ORP-encoding genes is, furthermore, in total agreement with the idea that Mrp ORP might be dedicated to the maturation of the Fe-S-containing metalloproteins belonging to the ORP complex. To date, the only known target identified for bacterial Mrp is the TcuB protein, a protein necessary for tricarballylate catabolism 10 . We propose here the ORP complex as a novel target of Mrp/NBP35 ATP-binding proteins. We then investigated the cellular localization of Mrp ORP , which was mainly observed at one pole of the cell. Such a polar localization for a Fe-S carrier has never been described before and raises questions about the localization of other prokaryotic Mrp/NBP35 ATP-binding proteins, because we showed that the polar localization is probably linked to the P-loop NTPase domain. Interestingly, this localization is consistent with the putative localization of the apo-targets, as Orp3 and Orp4 exhibit a C-terminal amphipathic helix shown in MinD to be responsible for polar binding (Fig. S6) 34 . Polar localization was previously observed for proteins involved in several biological processes, such as cell division, chemotaxis, signal transduction, cellular differentiation, virulence and bacterial respiration [35][36][37][38][39][40] , but never reported for a Fe-S protein maturation factor. We demonstrated that the Fe-S cluster present in the Di-Nase domain is not efficiently transferable to apo-aconitase. Thus, the role of this domain is still unclear.
The presence in the Di-Nase domain of a high content of conserved proline residues (TPPPHXPGXXP), which have been shown to be involved in protein-protein/domain interactions, might be responsible for the specificity of interaction of Mrp ORP with dedicated apo-partners to which the Fe-S cluster is transferred (in magenta in Supplementary Fig. S5) 41 . Alternatively, the Fe-S cluster present in the Di-Nase domain might have a structural role in Mrp ORP . Indeed, such unusual [3Fe-4S] clusters have been observed in enzymes such as nitrate reductase, [NiFe] hydrogenase and ThiI 27,42,43 . Although their role in those proteins has not been established, these centers have been considered to be involved in electron transfer and, recently, in sulfur transfer, as in ThiI 27,43 . It has been shown that Nbp35 proteins possess an extra stable [4Fe-4S] cluster absent in other Mrp/NBP35 ATP-binding proteins characterized to date 13,24 . The role of this [4Fe-4S] cluster, located in the N-terminal extension of Nbp35, is still unclear. Curiously, Mrp ORP also contains four cysteine residues included in a non-canonical motif in the N-terminal part of the P-loop NTPase domain, with only one of these cysteine residues conserved in Nbp35 (Fig. 2). This feature, added to the phylogenetic position of Mrp ORP , suggests that bacterial Mrp ORP proteins are closer to eukaryotic Nbp35 than to bacterial Mrp and ApbC. Fe-S cluster biogenesis in anaerobic bacteria is poorly documented, although these organisms rely heavily on Fe-S cluster enzymes 44 . This study starts to fill this gap by showing that Mrp ORP is likely a Fe-S cluster biogenesis factor in DvH and DdG20. Genome scanning analysis revealed that DvH possesses a minimal ISC system constituted by a cysteine desulfurase and a scaffold protein (NifU type) 45 . In addition, we detected homologues of the E. coli SufB and SufD proteins that might constitute a minimal SUF system, reminiscent of what is observed in archaea.
To date, Mrp/NBP35 ATP-binding proteins in Fe-S biogenesis have been proposed to act as Fe-S scaffolds and carriers 12,16,24 . Interestingly, we noticed a redundancy of Mrp proteins in DvH and other SRM. In DvH, two other Mrp/NBP35 ATP-binding proteins are detected, DVU1847 and DVU2330; both are composed solely of the P-loop NTPase domain containing the conserved deviant Walker box and the CXXC motif. DVU2330 belongs to an operon encoding proteins involved in the biogenesis of Fe-S hydrogenases, and DVU1847 is included in an operon encoding an L-isoaspartate O-methyltransferase. The outstanding redundancy of Mrp/NBP35 ATP-binding proteins in DvH and other SRM raises the question of the role of each of these proteins in these anaerobic microorganisms. Our phylogenomic study shows clearly that the three Mrp/Nbp35 ATP-binding proteins from DvH are closer to the eukaryotic Mrp/NBP35 proteins than to the bacterial and archaeal ones. Future studies will determine whether the assembly of Fe-S clusters on Mrp ORP depends on the general Fe-S biogenesis machinery of DvH (ISC or SUF) or whether Mrp ORP acts in parallel with these systems.

Methods

Bacterial Strains, Plasmids and Growth Conditions. Strains and plasmids used in this study are listed in Table S4. Escherichia coli DH5α and TG1 strains were grown in Luria-Bertani (LB) medium at 37 °C with the appropriate antibiotic when required (0.27 mM ampicillin, 0.15 mM chloramphenicol). Cultures of DvH were grown in medium C 46 at 33 °C under an anaerobic atmosphere, supplemented with 0.17 mM kanamycin or 0.15 mM thiamphenicol when required. Anaerobic work was performed using an anaerobic chamber (COY Laboratory Products or MBraun) filled with a 10% H 2 -90% N 2 mixed-gas atmosphere. Before placement inside the anaerobic chamber, solutions were made anoxic by flushing with N 2 to remove O 2 . Solutions, glassware and plastic materials were equilibrated for at least 12 hours inside the anaerobic chamber before use.
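The antibiotic concentrations in the growth conditions above are given in molarity; converting them to the mass concentrations more commonly quoted is a one-line calculation. The molecular weights below are our assumptions, not values from the text (ampicillin sodium salt ≈ 371.4 g/mol, chloramphenicol ≈ 323.1 g/mol):

```python
# mM (mmol/L) x molecular weight (g/mol) = mg/L = µg/mL
def mM_to_ug_per_mL(conc_mM, mw_g_per_mol):
    return conc_mM * mw_g_per_mol

print(mM_to_ug_per_mL(0.27, 371.4))  # ampicillin sodium salt: ~100 µg/mL
print(mM_to_ug_per_mL(0.15, 323.1))  # chloramphenicol: ~48 µg/mL
```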
Construction of Plasmids Used for Protein Production in E. coli. Standard protocols were used for cloning, ligation and transformation. Custom oligonucleotides used are listed in Table S5. All restriction endonucleases and DNA modification enzymes were purchased from New England Biolabs. Plasmid DNA was purified using the High Pure Plasmid Isolation Kit (Roche Diagnostics). PCR products were purified using MinElute kits (Qiagen). For construction of pJF119-2109His, pJF119-3202His and pJF119-Nter3202His, the appropriate primers described in Table S5 were used to amplify the dvu2109 and dde3202 genes from genomic DNA. The obtained PCR products and the pJF119 plasmid were digested with EcoRI and BamHI restriction enzymes and ligated into the multiple cloning site of the plasmid to obtain pJF119-2109His, pJF119-3202His and pJF119-Nter3202His, respectively. All constructs were confirmed via DNA sequencing and subsequently transformed into E. coli TG1 cells. Construction of Plasmids Used for Protein Production in DvH. For construction of pBMC6C3::3202strep, pBMC6C3::Cter2109strep and pBMC6C3::2103His, the appropriate primers described in Table S5 were used to amplify the dde3202, dvu2109 and dvu2103 genes from genomic DNA. The obtained PCR products and the pBMC6C3 plasmid were digested with NdeI and SacI restriction enzymes and ligated into the multiple cloning site of the plasmid to obtain pBMC6C3::3202strep, pBMC6C3::Cter2109strep and pBMC6C3::2103His, respectively. All constructs were confirmed via DNA sequencing and subsequently electroporated into DvH cells. Site Directed Mutagenesis. Simultaneous mutations of the Cys215 and Cys218 residues of Mrp ORP were generated by oligonucleotide-directed mutagenesis using pBMC6C3::3202His as the PCR template and the Q5 site-directed mutagenesis kit from New England Biolabs.
The primers 3202cysmutF and 3202cysmutR were designed using the online NEB primer design software NEBaseChanger TM .

The ploop-ntpase-gfp and di-nase-gfp fusions were constructed and inserted into the mrp ORP locus. With this construction, although the wild-type copy of mrp ORP is still present, it lacks the σ 54 binding site, unlike the mrp ORP -gfp fusion, allowing expression of the fusion of interest under physiological conditions. The mrp ORP amplicon, obtained using the primer pair NterDVU2109_XhoI/CterDVU2109_NdeI, and the plasmid pNot19Cm-Mob-XS-gfp 30 were cut with XhoI and NdeI. A gel extraction of the plasmid pNot19Cm-Mob-XS-gfp was done to insert mrp ORP into this plasmid using the XhoI and NdeI sites to obtain the plasmid pNot19Cm-Mob-XS-mrp ORP -gfp. To obtain pNot19Cm-Mob-XS-ploop-ntpase-gfp and pNot19Cm-Mob-XS-di-nase-gfp, the amplicons ploop-ntpase-gfp and di-nase-gfp were amplified by PCR using pNot19Cm-Mob-XS-mrp ORP -gfp as template and the primer pairs Nter2109-XhoI/CterGFP-SpeI and domCter2109-dir-XhoI/CterGFP-SpeI, respectively. The amplicons were cut with XhoI and SpeI and inserted into the plasmid pNot19Cm-Mob-XS. The three plasmids were then transferred into E. coli WM3064 and subsequently transferred by conjugation to DvH cells. Cells carrying the chromosomal recombination with the target fusion were selected for their resistance to thiamphenicol and checked by PCR using the primer pair DVU2108_UP/CterGFP-SpeI. Western blotting on DvH cells was also performed using an anti-GFP antibody, as described by Fievet et al. (2015), to control the production of the Mrp ORP -GFP fusion 30 .

Protein purity was analyzed in a 12.5% Tris-Tricine SDS-PAGE gel. The fraction containing the protein was concentrated with centrifugal filter units (cut-off of 5 kDa) and frozen in liquid nitrogen until further use. Aconitase (AcnB) was purified as described for AcnA 47 .
After purification of the recombinant proteins, the eluted fractions were buffer-exchanged into the specified buffer using a HiTrap Desalting column (GE Healthcare). Fractions that contained the protein of interest at >95% purity, by SDS-PAGE analysis, were pooled and concentrated over a 10 kDa molecular mass cutoff membrane. Finally, the proteins were stored in liquid nitrogen. Protein concentration was determined using the Pierce 660 nm Protein Assay (Thermo Scientific) colorimetric assay. Bovine serum albumin (2 mg/mL, Sigma) was used as a standard. [Fe-S] Cluster Reconstitution. [Fe-S] cluster reconstitution was performed anaerobically in an anaerobic chamber (COY) at 18 °C as follows. Protein was reduced anaerobically with 5 mM DTT for at least 1 hour prior to Fe 2+ and S 2− addition. After pre-reduction, FeCl 3 was added in five-fold excess and incubated for approximately 2 minutes before addition of a 5-fold excess of Li 2 S. The solution was incubated for 4 hours before excess salts and unbound iron were removed using a HiTrap Desalting column (GE Healthcare). For enzymatic reconstitutions, 5 mM L-cysteine and IscS (20 μM) were added in place of Li 2 S. Quaternary structure determination. The quaternary structure of Mrp ORP _CT was determined using a Superdex 200 10/300 GL size exclusion column (GE Healthcare). The mobile phase was 100 mM Tris-HCl, pH 7.5, 500 mM NaCl, and protein was injected onto the column at a flow rate of 1 mL/min. The standards used to create a standard curve were β-amylase (200 kDa), albumin (66 kDa) and carbonic anhydrase (29 kDa).

in the flow-through, while the His-tagged Orp3 was eluted with buffer C containing 100 mM imidazole. The eluted fraction containing Orp3 with co-eluted Orp4 and Orp8 was analyzed and a UV-visible spectrum was recorded.

Aconitase Activity. Aconitase 52 .

The corresponding sequences were aligned with MAFFT using the accurate L-INS-i option, which allowed accurate multiple alignment construction.
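Size-exclusion standards like the three named above calibrate the column through a linear fit of log10(MW) against elution volume. A sketch with hypothetical elution volumes (the actual volumes are not reported in the text; only the standard molecular weights are):

```python
import math

# MW (Da) -> elution volume (mL); volumes are illustrative, not measured values
standards = {200_000: 11.0, 66_000: 13.0, 29_000: 14.5}

ve = list(standards.values())
log_mw = [math.log10(mw) for mw in standards]

# Least-squares line log10(MW) = a * Ve + b
n = len(ve)
ve_bar, y_bar = sum(ve) / n, sum(log_mw) / n
a = sum((x - ve_bar) * (y - y_bar) for x, y in zip(ve, log_mw)) / \
    sum((x - ve_bar) ** 2 for x in ve)
b = y_bar - a * ve_bar

def estimate_mw(elution_volume_mL):
    """Apparent molecular weight read off the calibration line."""
    return 10 ** (a * elution_volume_mL + b)

# Apparent MW of a species eluting at a hypothetical intermediate volume
print(f"{estimate_mw(12.0) / 1000:.0f} kDa")
```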
The resulting alignment was trimmed with BMGE (BLOSUM30 option) and used to infer a maximum likelihood tree with IQ-TREE v1.5.3 53 , using the LG + I + G4 evolutionary model as suggested by the model selection tool implemented in IQ-TREE (BIC criterion). The robustness of the resulting tree was assessed with the non-parametric bootstrap procedure implemented in IQ-TREE (100 replicates of the original dataset) 54 . Microscopy Experiments. The three GFP fusion strains were grown in medium C until the middle of the exponential growth phase (OD 600nm of approximately 0.4 to 0.5). Cells were concentrated two-fold by centrifugation. The buffer used for this concentration (TPM buffer) contained 10 mM Tris-HCl (pH 7.6), 8 mM MgSO 4 and 1 mM KH 2 PO 4 . In order to stain DNA, this buffer was supplemented with 5 ng/μL of 4',6-diamidino-2-phenylindole (DAPI). After 20 min of incubation in the dark, the cells were washed three times in TPM buffer. The DNA was stained under anaerobic conditions to limit the exposure of the cells to air. The pictures were acquired after 10 min of air exposure, which was required for oxygen-dependent GFP maturation. The cells were placed between a coverslip and a pad of 2% agarose. Pictures were acquired with a Nikon TiE-PFS inverted epifluorescence microscope, a 100x NA 1.3 oil PhC objective (Nikon) and a Hamamatsu Orca-R2 camera. For fluorescence images, a Nikon Intensilight C-HGFI fluorescence lamp was used. Specific filters were used for each wavelength (Semrock HQ DAPI/CFP/GFP/YFP/TxRed). Image processing was controlled by the NIS-Elements software (Nikon). Electrochemical Measurements. All the electrochemical experiments were conducted inside an anaerobic chamber (MBraun) at room temperature, with 100 mM Tris-HCl pH 8.1, 500 mM NaCl, 2.5 mM desthiobiotin and 3 mM DTT used as the electrolyte, which was flushed with argon before entering the chamber.
A three-electrode configuration was used, comprising a reference electrode (Ag/AgCl, +205 mV vs SHE), a secondary electrode (platinum wire) and a working electrode (PGE). Cyclic voltammograms were measured with a µAutolab potentiostat (Eco Chemie, Utrecht, The Netherlands), and the data were collected and analyzed with the GPES software package (Eco Chemie). Cyclic voltammetric measurements were performed over a potential window from +0.1 to −0.9 V (vs SHE), and the scan-rate dependence was investigated between 0.005 and 0.1 V s −1 .
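The scan-rate dependence mentioned above is the standard diagnostic for whether a voltammetric response is surface-confined (peak current proportional to v) or diffusion-controlled (proportional to v^1/2). A sketch on synthetic data (the peak currents below are illustrative, not measured values; only the scan-rate window comes from the text):

```python
import math

scan_rates = [0.005, 0.01, 0.02, 0.05, 0.1]   # V s^-1, the window stated in the text
k = 2.0e-5                                     # arbitrary proportionality constant
peak_currents = [k * v for v in scan_rates]    # synthetic: i_p proportional to v

# Slope of log(i_p) vs log(v): ~1.0 for surface-confined, ~0.5 for diffusion-controlled
slope = (math.log10(peak_currents[-1]) - math.log10(peak_currents[0])) / \
        (math.log10(scan_rates[-1]) - math.log10(scan_rates[0]))
print(f"log-log slope: {slope:.2f}")   # 1.00 -> surface-confined (adsorbed) species
```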
“Seeing These Good Souls Adore God in the Midst of the Woods”: The Christianization of Algonquian Nomads in the Jesuit Relations of the 1640s

Up to 1647, Jesuit missionaries in New France attempting to evangelize nomadic Algonquians of North America's subarctic region were unable to follow these peoples, as they wished, in their seasonal hunts. The mission sources, especially the early Jesuit Relations, indicate that it was Algonquian neophytes of the Jesuit mission villages of Sillery and La Conception who themselves attracted other natives to Christianity. A veritable Native American apostolate was thus in existence by the 1640s, based in part on the complex kinship networks of the nomads. Thus it appears that during that decade, the Jesuits of New France adopted a new strategy of evangelization, based partly on the kinship networks of the nomads, which allowed for the natives' greater autonomy in communicating and embracing Catholicism. A difficulty faced by the Jesuit editors of the Relations was how to concede to the culture of the nomads without offending their devout, European readers of the era of the "great confinement," upon whom the missionaries depended for financial support. One way the Jesuits favorably portrayed nomadic neophytes, who were often unaccompanied by a missionary in their travels, was by underscoring the importance during hunting season of memory-based and material aids for Catholic prayer (Christian calendars, icons, rosaries, crucifixes, oratories in the woods, etc.). Thus, in the Jesuit literature, the gradual harmonization between Native American mobility and the Catholic liturgy was the

This article concerns religious and spiritual aspects of Jesuit missionary interactions with Algonquian peoples in North America in the early seventeenth century.
An attempt is made here to reconstruct from Jesuit writings, particularly selections from early volumes of the Jesuit Relations (1632-1673), which were published annually in Paris, the manner in which nomadic peoples of America's Subarctic adopted the Catholicism preached by the missionaries. Of course, evidence from the Relations cannot itself prove why and how Christianity was effectively received by the nomads. However, the Relations do enable us to place the Jesuits' adaptations to the cultural universe of the natives within their original ideological context. What was the symbolic and spiritual context in which they described their developing work of evangelization among the nomadic Algonquians in the 1640s? How do the Jesuits' assertions revise our understanding of other strategies adopted in the spiritual thought of the time? Ethnohistorical approaches to the mission have, at best, been concerned only in superficial ways with the Jesuits' spiritual and psychological motives, thus hampering more comprehensive and balanced considerations of relations between the missionaries and the natives. Yet one way to more fully restore the Native American dimension of the mission's history is to better grasp the importance of the natives in the spiritual history of the Jesuits. This article, thus, while not offered as a definitive historical reconstruction based equally on archaeological, ethnological, and anthropological data, sets up a range of possibilities regarding the natives' reception and transformation of Christianity, by means of empirical and circumstantial data drawn from the missionary accounts themselves. The history of the evangelization of America's northeastern natives, insofar as it has been written from the perspective of "conversion" and of adherence to a new religion, has failed to date to offer an operational model for Native Americans' evangelization under the French colonial regime.
As Kenneth Morrison has argued, "conversion poorly describes the complex processes of religious change" observed in the mission setting.1 Indeed, it may have been that Christianization did not effect ruptures with the natives' original cultural universe, or with a non-Christian past, or even that it did not cause a confrontation between two cultures after the 1630s. As it is the purpose of this article to demonstrate, the Jesuit sources suggest this was strongly the case among the nomadic Algonquians in the 1640s.

Failure of the Jesuit Strategy of Sedentarization

After an unsuccessful journey into the wilderness to evangelize Canada's hunter-gatherer Montagnais people in the winter of 1633, the superior of the Jesuit mission to New France, Paul Le Jeune (1591-1664), began discouraging other missionaries from accompanying these nomads in their seasonal travels.2 He began instead to call for the Montagnais' fixed settlement and Frenchification. To this end, between 1637 and 1641, he established two mission villages, Sillery near Quebec and La Conception at Trois-Rivières.3 Conceived as a village after the European mode, Sillery was equipped with French-style houses, a chapel, and a hospital for its residents, a small population of native converts to Christianity and their families. To cover the expenses of this project, Le Jeune was able to turn to a network of pious Catholic elites in France who were already somewhat apprised of the North American situation, thanks to the annual publication in Paris since 1632 of the missionaries' Relations de la Nouvelle France.
This strategy of incorporating Native Americans into French settlements in the Saint Lawrence River Valley quickly proved itself ill-adapted to the commercial needs of New France, which at the time was very dependent upon the fur trade.

The fact that tensions existed between religious and commercial priorities generally, and specifically between the goal of the natives' sedentarization and the exigencies of the fur trade, cautions against our taking the Jesuits' descriptions of Sillery and La Conception at face value.7 The Jesuit project of settling and civilizing the natives contradicted the missionaries' move to adapt to local circumstances. It should be noted as well (although this alone does not explain the shift in strategy) that epidemics, as well as the Iroquois Wars that menaced the Laurentian colony beginning in 1641, also did not favor the permanent settlement of native families won over to Christianity.8 At the end of the 1630s, the Jesuits abandoned their civilizing program.9 The mission shifted, differently, to a situation in which the neophytes of Sillery and La Conception began mediating between the Jesuits and the northern, subarctic populations, especially the Montagnais, Atikamekws, and Pessamits, the groups most often cited by the Jesuits, although the names used in the mission sources are varied and ambiguous.10

10 Dawson, Fourrures et forêts métissèrent les Montagnais. In this era, the names of Algonquian groups were unstable. Hence the generic terms used by the Jesuits in the 1640s: "some small Nations that are scattered here and there throughout the country" (JR 22:219); "nations of the north" (JR 29:65). In 1650, in an essential document to which we will return, Jean de Quen enjoined the missionaries of Tadoussac to draw up an inventory of these groups, being careful to distinguish "by nations and by families." Jean de Quen, "Règlement de la mission de Tadoussac."

The part-time residents of the mission villages, known as domiciliés, began to stand in for the Jesuits in evangelizing these groups. As the missionaries reported in the Relation published in 1642, "it must be confessed, that it was not we who won them [the Saguenay natives], but our neophytes, or new Christians of the residence of Saint Joseph [Sillery]."11 In other words, the Jesuits began to recognize dispositions in the natives which expressed Christian spirituality in a way that was "natural" to them rather than of European origins. This shift in perspective corresponded with one in Europe, where the social context of Catholicism and its ecclesiastical controls were giving way to a more personalized and free sort of piety.12 This is not to say, however, that the Jesuits' increased openness to interiorized expressions of Native American Christianity, as evidenced in mission writings from 1640 onward, stemmed mostly from this development in European Catholicism, rather than in equal measure from North America's native cultures. Indeed, the missionaries' shift toward a more personal piety occasioned new opportunities for the Jesuits, however transitive, to open up further to the spirituality of the nomads. Therefore, the changing parameters of the Jesuit apostolate invite us to give credit to their own accounts of how they evangelized the Algonquians. They ask us, as well, to consider the intersubjectivity and symbolism present in their Relations. The Jesuits described some neophytes as preaching the Gospel to their compatriots.
One such lay preacher was Charles Meiaskouat, a Montagnais of Saguenay settled at Sillery, who accompanied Jean de Quen in his first mission to Tadoussac in 1641.13 During that journey, Meiaskouat convinced other peoples of the interior to ally themselves to Sillery. Likewise in subsequent years, the Jesuits relied on neophytes to carry "the name of Jesus Christ into all these little nations, with whom they have commerce."14 In 1642, Paul Ragueneau wrote to Le Jeune, "Of course, by thoroughly converting one nation we greatly further the conversion of others for which we do not even labor. I am quite convinced of this."15

Innu Kinship Networks

In the Relations, the appearance of a native apostolate implies that successful Christianization no longer demanded the natives' sedentarization.16 Instead, it appears to have been based on what Rémi Savard has called "the astonishing scalability of the kinship system" of the Canadian natives.17 What the Jesuits at times presented as new alliances between the domiciliés and the "interior" or "northern" nations seem in reality to have obeyed already established Innu kinship patterns, relationships that crisscrossed the Quebec-Labrador peninsula and were themselves focused on small groups that shifted seasonally.18 Territory and kinship being inseparable, the mission residents visited their "relatives" in the backcountry for several months.19 Around 1642, the Jesuits grasped the complexity of the northern Algonquian kinship networks, which centered on multi-family units subdivided into hunting groups.
Thus, on October 4, 1642, Le Jeune announced a new strategy of evangelization: "[W]e have never seen more clearly how to instruct them, and the Gospel has never been expounded here more peacefully, than since about eight months."20 From that point forward, the Jesuits' strategy did not fundamentally change: the missionaries evangelized the Abenakis in the same way in the second half of the century.21

Sillery and La Conception: Places of Exchange and Transformation

In lieu of French-styled villages-a project advanced by Le Jeune in the 1630s-the residences of Sillery and La Conception arose as spaces of encounter between groups of hunters who were often "related." Above all, they were spaces of transformation of the Algonquians into partners for the work of Christianization. […] going to visit her nephew and nieces."23 All of this seems to indicate that a Native American apostolate was constituted through the agency of neophytes who for many years frequented the Jesuit residences of the Laurentian colony. These "ambassadors of the faith" went from cohabiting with the Black Robes to converting their "relatives" with greater ease than the missionaries could have done.24 They were not Frenchified but instead became indispensable cultural and spiritual mediators. The domiciliés of Sillery enjoyed a significant degree of independence from the Jesuits. In 1640, when the Montagnais returned to the mission at the end of an epidemic, they assembled together without any priests present in order to decide upon the manner of their settlement and to elect their "civil chiefs."
The Jesuits helped organize the assembly but did not participate in it.25 The choice of a "prayer captain," too, was left exclusively to the natives' discretion.26 Furthermore, even Sillery's four French-styled houses and two chapels were not places of Frenchification.27 Instead, social cohesion in the mission was achieved in deference to the indigenous community's preferences and to its leaders.28 Indigenous social structures perdured at Sillery.

journal of jesuit studies 1 (2014) 281-300

29 Major themes in the study of Sillery include discipline and self-discipline as adopted by the native Christians. Cohabitation among Christians and non-Christians had its price, it seems. JR 20:143-183; 22:61-63, 67, 83-85, 117-121; 24:35, 45-49; 25:147-151; 29:79-81. In the Relations, this factionalism exacerbated tensions within the village, setting up captains as "inquisitors." However, these extreme disciplinary attitudes may be explained by the legalistic ethic in Algonquian culture, of which Le Jeune wrote in 1642: "Add to this the erroneous idea that he had in his head, like some other savages, namely, that newly-baptized Christians are soon attacked by death, or by some serious illness, if they fail, however slightly, in keeping the promise they have made to God to follow his will" (JR 22:103). In this case, it is understandable why the Jesuits did not ascribe any role to themselves in creating the penitential climate at Sillery. 30 The brochure of the Société Notre-Dame de Montréal offers a good example of this new exigency, through the figure of a God who was managerial and rational in his mercy, and upon whom laypersons wanting to help their fellows had to model themselves-something the anonymous author called "l'ordre de la charité."

A major result of this was that domestic spaces, as described in the Relations, were invested with a sacred significance ordinarily reserved, back in Europe, for Catholic shrines.
Early in the 1640s, in their homes made of bark, neophytes who were temporarily enclosed in the "French" village employed their bodies in exterior expressions of Catholic piety: kneeling on the ground, praying aloud with their hands joined together, and holding rosaries or wearing the beads around their necks in a particular way. At the same time, the Jesuits presented the larger village structure as disposing native souls toward civil obedience, while the village hospital healed and consecrated Algonquian bodies. Commenting on such elements of life at Sillery, the Jesuits in their Relations presented edifying examples to their metropolitan readers without strict regard for accurately portraying the natives' experiences of piety. At the same time, they provided their confrères and the colonial French, through the same pages, enough information with which they could judge for themselves the new mode of Christianizing the nomads. Consequently, from 1641 onward in the Relations, the sacred space of the interior of the neophytes' cabins at times rubbed up against a profane French model represented by the village (houses, hospital, and also a prison after 1643). 
The Jesuits thus attributed rigorous discipline in Sillery not only to pastoral necessities but also to civil ones.29 Ironically, then, the importance granted in European discourses to a civilizing framework and love of social order, as these were favored by elites of the era of Louis XIII and of the Regency, authorized the Jesuits to formulate their first program of accommodating American nomadism.30

31 "Their first and last action every day is to kneel before a crucifix or a picture which they fasten to a piece of bark, and there say their prayers" (JR 25:163); "On Sunday morning, they met all together in a cabin, and hung to a pole, planted in the middle of it, an embossed crucifix, which all venerated on bended knees, and with clasped hands-with as much respect as if they were before the altar on which the Blessed Sacrament is kept" (JR 26:77).

At Sillery, there was, on the one hand, a repressive, profane space conforming the natives' behaviors to French regulations, and, on the other, a sacred space in which the Algonquians took possession of Christianity in their own ways. In the Relations, crucifixes and rosaries were often associated with native objects, while the body language of devotion as familiar to seventeenth-century Catholics was employed to "sanctify" domestic spaces.31 In December 1642, an Algonquian woman who could not attend Mass at the chapel of the hospital nuns, "stayed in her cabin […] and behaved as if she had been at Mass.
She set up an image of our Lord, knelt before it […], recited her beads, rose as is customary at the Gospel, adored our Lord as is done at the elevation, and sang as they are accustomed to do after Mass-insomuch that, when the Father went to see her, she told him that she had been to Mass in her cabin […]."32 This attitude, which is a bit strange in view of the stress the Jesuits had put on building mission chapels in the 1630s, announced as well the adoption of new objects of piety by the Christian Algonquians. These included a series of temporary religious decorations created spontaneously by the nomads.33 Such a process of sacralizing domestic space by means of objects of devotion was nothing new: it characterized European Catholicism in the seventeenth century, signaling the advent of a more personalized and this-worldly oriented spirituality.34 In this case, however, it seems that the phenomenon encouraged the Jesuits to articulate a spiritual discourse and aesthetic that were particularly indigenous. Algonquian oratories were adorned, so to speak, by the missionaries' pens in the Relations in a diffuse, impressionistic way that suggests an interaction between the colonial and indigenous worlds by which native Christians were immanently and subjectively taking hold of the divine.35 As a result of their new openness to native expressions of spirituality, the Jesuits abandoned the mission's civilizing component.36 However, references to it still appeared in the Relation of 1643, due to the uncertain outcome of a trial involving the heirs of Noël Brûlart de Sillery, the benefactor of the village that bore his name, who were contesting his bequest of 32,000 livres to the Jesuits of New France.37 Construction of a planned chapel was delayed as a result of this suit. 
The chapel would only be completed in 1647, thanks to the charity of another benefactor, Michel de Marillac.38 It is therefore inapt to speak of a failure of Frenchification in this era, even though Le Jeune had pleaded in its name, several years earlier, when soliciting funds from European dévôts. In the Relations, the evolution of the Jesuit apostolate unfolds gradually, based on the funds that were available for it. The Jesuits' financial dependency is seen in their having maintained both the project of "reduction" or sedentarization at Sillery over several years, as supported by Brûlart de Sillery, and that of a seminary for young natives, which was sponsored by an aristocratic couple in France, the Rouault de Gamaches.39 It seems therefore that in the 1640s, the challenge for the missionaries was how to make their concessions to native cultures without offending their devout audience in Europe, upon whom they relied to finance their enterprise. How, for example, could they portray Christianized Algonquians as fully Christian, when the natives of the mission rarely received the sacraments of communion and confession, and did not live amidst the structures of French civil society? How could they present nomadism in a new light to an audience that, in France, favored the "great confinement" of socially deviant and marginal populations?40 Was not the primary objective of the Society of Jesus, in the wake of the Council of Trent, to bring the faithful closer to the sacraments, and to make the laity more obedient to the institutional, visible church?41 Around 1640, the Jesuits were still unprepared to reveal to their French public their concessions to native sensibilities.

Consider also the baptismal ceremonies of the Atikamekws in 1643, which took place at the Jesuit residence in Quebec and at the Ursuline chapel, and also those of the Hurons who went to Sillery for instruction in the years 1642-1643, which were carried out with magnificence in the church at Quebec. JR 24:77, 83, 117. 45 JR 22:45. 46 JR 22:93; 23:317; 24:28; Dragon, Trente robes noires, 44. 47 On the calendars, writings, and objects of piety, in particular the Rosaries in the homes of the domiciliés and related peoples (the Atikamekws in particular), see JR 18:171; 20:181, 189, 199-201; 275, 293; 22:45-47, 57-59; 113, 221-223; 23:315-317; 24:25-27, 59, 63, 81, 83, 91-93 ("It is incredible how much these good people are inclined to this devotion of saying the Rosary […] and how eager they are to have them-especially those which are rather large and handsome, to wear them suspended about their necks"), 95-99, 143; 25:161, 189, 211; 26:77 ("his paper that served him as a calendar, and enabled him to distinguish the festival days, affected him more than that of the other things"); 114, 131; 27:143; 29:111; 31:173; 33:31. 48 On the permissive attitude adopted by the Jesuits toward the domiciliés, discussed here only in relation to the theme of the "liberty" of movement allowed to the natives in the 1640s, see Allan Greer, La Nouvelle-France et le Monde (Montreal: Boréal, 2009), 85.

At this point, they simply acknowledged the special spirituality of the Algonquian neophytes by appealing to the hackneyed theme of the primitive church.42 The Relations insisted that the neophytes frequented the sacraments and desired to go as often as possible to the chapels of Sillery or, if not there, to the other shrines in the Laurentian colony on high holy days, especially Christmas and Easter.43 This last point in itself is interesting, as it suggests that the missionaries did not deem it so important for the neophytes to remain in the village to assist at feast day offices.44 "To announce the day of a solemn festival," wrote Le Jeune in 1642, "is
to give them joy; they strive to observe the feasts according to the seasons-they ask for a list of the days, or for a small calendar, especially when they go to hunt or to trade for any length of time."45 From December to February, the neophytes hunted in the forests in the vicinity of Quebec. They attended Mass several times a week at Sillery or at Quebec. However, from February to April they hunted for moose and beaver hundreds of kilometers away from the colony. They did not return until the month of April.46 A significant part of their lives thus was spent with no priests present among them, and without the benefits of the Catholic sacraments. Nevertheless, religious practice was possible for the natives by means of memory-based and material aids for Catholic prayer, which they highly favored. References to liturgical calendars, rosaries, and pictograms used for prayer are numerous in the Relations.47 Furthermore, the adaptation to the Algonquian context seen in the early 1640s-a defining period for the North American Jesuit apostolate of the second half of the century-rested on harmonizing the mobility of the nomadic peoples with the Catholic liturgy. Calendars and devotional practices were critically important in these circumstances.48 Jean de Quen's Règlement de la mission de Tadoussac (c.1650), an exceptional document that offers insight into the mission apart from the Relations, attests to the primacy accorded to such mnemonic devices in the evangelization of the nomads.
The distribution of calendars, which were written alphabetically or with pictograms, assured the missionaries of the sustainability of Catholic practices among the nomads during hunting season: "We must adjust the calendars […] which we give to the savages for their winter travels; that is to say, marking which feast days are days of abstinence, days of fasting, and the like, so that when they meet in the woods and show one another their calendars, they see that we are uniform in our rules."49 In the Relations from 1644 onward, the calendars and the rosaries were designated by the term "meubles de dévotion" [furnishings for devotion] as necessary for all nomadic Christians.50 The document of 1650 attests, consequently, to a method of apostolate put into practice the preceding decade. How was Christianity able to move to the center of life among the Algonquians when they did not always live close to a mission village or a chapel? It appears that, in their mode of Christian practice as the Jesuits cultivated it and permitted it to flourish, a spatial paradigm came to be outweighed by a more inclusive, temporal paradigm. In a certain sense, a new respect for the temporality of Christianity offered the missionaries, in the Relations, a way to silently pass over the natives' non-compliance with the norms of seventeenth-century sacramental Catholicism. An Algonquian Christianity without sacraments, without priests, and without chapels for a good part of the year is represented in the Relations. In 1648, according to Jean de Quen, Jérôme Lalemant went so far as to note, "Although these persons are very far from our churches, they are very near to their God, who amply supplies the deficiencies of his ministers, when such remoteness is in the order of his providence."51 In other words, a territory unknown to the French and the Jesuits-the boreal forests-provided the framework for the neophytes' spiritual practice.
At the same time, this context is missing in the Relations: the Jesuits commented rather on the nomads' postures and gestures of prayer, their hymns, particular prayers that they mumbled or declaimed, and their handling of objects of devotion. Descriptions in this vein were based on second-hand accounts-that is, the words of a Native American Other, presented in the form of direct discourses. Thus, differently than in the Relations of the 1630s, the missionaries in the 1640s did not describe what they themselves saw, but reported on what they heard from native Christians. The Relations of the 1640s thus became locations of native expression, just as the Algonquians emerged as actors in their own Christianization. The role of the missionaries consisted therefore in translating words spoken to them in Native American tongues, reformulating a native discourse on native activities for a French metropolitan public.52 Relays of information between the indigenous and metropolitan discourses were numerous and complex. In them, there were appropriations of the Christianity of others-an idiosyncratic and new Christianity, presented as such by the Jesuits at the end of the 1640s. This shift toward a Native American form of Christianity raises, of course, the problem of its character and authenticity apart from how it was recreated in the narrative and historical context of the Relations. Allocentrisms-namely the transformations of speakers' own words into the words of persons being spoken about-are so common in the texts that the Jesuits themselves seem to disappear behind the Christian natives, giving voice to a new kind of sacrality, exploring a new symbolic continent that is more temporal than spatial, more devotional than ecclesial. 
Everything transpires in the texts as if the figure of the Native American accompanied, even nourished, a form of spirituality back in Europe which was more interior and less attached to institutions.53 Before 1650, groups of Algonquian neophytes, often without priests among them, were presented in the Relations as devout assemblies. In addition to the instructions given by the neophytes during feast days, Christian practice took place in huts that were often transformed into small bark oratories adorned with pious images, crucifixes or rosaries, animal skins, and wampum necklaces.54 The cabin thus became a kind of natural extension of the interior piety of the worshipper.55 Just one example, but a significant one because it concluded a long symbolic process of sacralization of Algonquian domestic space, was the construction of what was almost a Catholic shrine, completed by the Atikamekws without French assistance, and without the Jesuits' even knowing its location: [A] captain commands his people to make a fine and large cabin, which should be used only for prayer; the young men go after bark, and the women […] [S]ome Christian Hurons, chancing to be in that great company, and seeing that it was a question of prayer, produce their crosses and their rosaries, protesting aloud that they were Christians.56 This Algonquian oratory, built independently of the Jesuits, differed in many ways from French oratories.
First and foremost, it was the result, and not the cause, of native piety, departing therefore from the original intentions of the missionaries in the 1630s, who saw the decoration of mission chapels as necessary for maintaining piety among both the neophytes and the French colonists, and also for generating new faith among unbelievers.57 Also characteristic of many native Christian habitations was the thin boundary with the outside world established by the oratory: natural light as well as branches and other elements of the natural world belonged to the interior, whose boundary with the outside world, consequently, dissolved. Together with it dissolved a distinction between sacred and profane space-a distinction that, differently, was so constitutive of sacred space in Western Christianity. The Algonquian interior created an indistinct and shifting zone between itself and exterior space, the world and the body, objects and subjects, Native Americans and Christians. On this subject, Philippe Descola has proposed that the cognitive and perceptive system of the North American natives seems to have corresponded with these "nameless" decorations, borne of "ornamental" intentions: "the identity of beings and the texture of the world were fluid and contingent, resisting all classifications that seek to establish what is real solely on the basis of appearances."58 Fluidity, moreover, was conducive to decompartmentalizing artifacts: in the passage quoted above, no distinction is made between artifacts of native and European origin which were at the disposal of the Hurons. 
Also characteristic of the oratories were their sensory and kinesthetic qualities: permeating the interior space were the scent of spruce branches, small objects of Catholic piety, crucifixes and rosaries often worn around the neck, necklaces made from shells, and furs that were seen by the Jesuits as part of the cabins' religious […]. In sum, during the 1640s, the Christian interior was "corporealized." It was, consequently, upholstered-less in the interior of the oratory, as was the case at this time in public and private chapels in France, than on the bodies of the Christian natives, who were themselves the depositaries of a new sacrality.61 The Jesuits employed a precious and baroque vocabulary in their descriptions, similar to what was used in the same era by poets and spiritual directors to describe interior mental, private, or ecclesial experience of devotion; nevertheless, through it, the Jesuits conveyed the particular flavor of Algonquian spirituality.62 Thus it is likely that for the neophytes, their cabins became a space of sacrality continuous with their own, personal sacral spheres. For the Jesuits, nomadism was no longer an obstacle to the Catholic religion, insofar as it was transmuted-thanks in part to the success of Devout Humanism in France in the era-into an emanation of interiority, a "sanctuarization" of the individual body and soul, liberated from an ancient model of sacrality which was consubstantial with the architecture and ecclesial structures of the Middle Ages.63 In this case, the primacy accorded to Christian temporality in the Jesuits' evangelization of the nomads favored the description of temporary decorations conceived and created by the natives.
In doing so, the spatial paradigm-the principal value assigned to "place" in Catholicism-shifted toward the village, a profane and socioeconomic space, producing a symbolic vacancy in the religious sphere: rare are any mentions of chapels, strictly speaking, or places designated for catechesis or veneration, in the Relations' accounts of spiritual practice among the natives at Sillery or Trois-Rivières. The place of worship receded as an essential framework for the body in prayer; calendric and festive time was introduced, allowing expressions of the novelty of certain ritual and spiritual practices. In this turn observed in the mission sources of the 1640s, the idea of an emancipation of the Algonquians vis-à-vis Western civilization was apparent. A definition of a "liberty" that was intrinsic to the natives was established: a liberty of action, a freedom to leave and to return. In the Relations, the dependency of the body on a particular place was dissolved.64 The Jesuit writers offered a counterweight, or a retraction, of the confining rules governing life in the mission villages, signified by the expression "at our very doors" [dedans nos portes] at the beginning of the fourth chapter of the Relation of 1643: Continuation of the good sentiments and actions of the Christians of Saint Joseph [the title given to the chapter]. As soon as the ships weighed anchor before Quebec, to return to France, the majority of the savages of this residence launched their bark canoes to go and hunt moose-anticipating their usual time of departure by three months, through fear of the Iroquois. These had threatened to come and attack them at our very doors, and would have deprived them of the liberty of hunting far back in the forest.65

66 On this philosophical notion of Jesuit provenance, see Jacob Schmutz, "L'invention jésuite du 'sentiment d'existence,' ou comment la philosophie sort des collèges," Dix-septième siècle 237/4 (2007): 615-631.
67 JR 25: "'We are sorry to leave you,' they said […]" (161); "As soon as the river began to be free by the departure of the ice, our hunters embarked to come back and see us" (171). 68 Compare this to the young native American girls educated in the Ursuline residence, where the "liberty" evoked by Le Jeune consisted less in that of following their parents in their hunts, as was nevertheless the case for Agnès Chabouekouechich, than in freeing them from the enclosure of the convent by "elevating" their thoughts toward God.

For "liberty" [liberté] to exist in the sense employed in this passage, a spatial-temporal "continuity" that permitted the natives to "leave" a place must have been perceived. Many chapters for the same year which concerned Sillery were entitled "continuation of good sentiments." The "sentiment" appears to have been the solution for reconciling the spatial to the temporal.66 Sentiments approved by the missionaries were crucial during the period of the natives' temporary removal from the space of the mission village. The Jesuit narrator tried to express the religiosity of the neophytes, seen both in their ability to leave the village and in their scrupulous obedience to its rules when living there. To do this, he accumulated anecdotes pertaining to particular individuals. It still seems that it was difficult in that era to give a clear idea of the "liberty" of the nomads while also respecting the religious, civil, and political ethic of the metropolis. The risk was great of effectively rendering the missionary useless, since the law then moved to the side of the governor and the merchants, while the religious moved to the side of the native Americans.
How to describe the Jesuits' participation in the natives' spirituality, if not by describing the figure of the missionary as in the background, waiting for the return of the neophytes?67 All the inhabitants quitted Sillery at the moment of the departure of the French ships, with some returning to the boreal forest, and others traveling across the ocean to France. Each person returned to his place of origin from the place where he "was"; each "anticipated" for some months the return to a "place." The "space" of a voyage was not a "place" but rather the "self." Everything there was in continuity: the ships, the forest, the voyage. Sentiment proceeded from movement, from displacement. The natives' liberty of action was a liberty of thought.68 As beings with imagination, they carried departed places within themselves during their travels. Temporary religious decorations and objects of piety worn next to the skin by the nomads constituted many metaphors of an ecclesial Christianity transforming into a nomadic Christianity-a visible, monumental church into an invisible, corporeal, and spiritual church. The editors of the Relations "liberated" (so to speak) the inhabitants of the village through written description, by accepting their departure from the village and even seeing a miracle in it: It is a marvelous effect of grace that men born in the most cruel barbarism […] who have been but recently baptized, should nevertheless retain the innocence and grace of their baptism for six months, without instruction or any sacrament, with greater facility and perfection than many Christians do in France […]. I think that Heaven takes pleasure in seeing these good souls adore God in the midst of the woods.69

[…] was granted to the consciences of the neophytes ("a moment to say a short prayer") unless a liberty of action is recognized for them. 69 Ibid., 161-163.
Olfactomedin-like 2 A and B (OLFML2A and OLFML2B) expression profile in primates (human and baboon)

Background: The olfactomedin-like domain (OLFML) is present in at least four families of proteins, including OLFML2A and OLFML2B, which are expressed in adult rat retina cells. However, no expression of their orthologous has ever been reported in human and baboon.

Objective: The aim of this study was to investigate the expression of OLFML2A and OLFML2B in ocular tissues of baboons (Papio hamadryas) and humans, as a key to elucidate OLFML function in eye physiology.

Methods: OLFML2A and OLFML2B cDNA detection in ocular tissues of these species was performed by RT-PCR. The amplicons were cloned and sequenced, phylogenetically analyzed, and their protein products were confirmed by immunofluorescence assays.

Results: OLFML2A and OLFML2B transcripts were found in human cornea, lens and retina and in baboon cornea, lens, iris and retina. The baboon OLFML2A and OLFML2B ORF sequences have 96% similarity with their human orthologous. OLFML2A and OLFML2B evolution fits the hypothesis of purifying selection. Phylogenetic analysis shows clear orthology in OLFML2A genes, while OLFML2B orthology is not clear.

Conclusions: Expression of OLFML2A and OLFML2B in human and baboon ocular tissues, together with their high similarity, makes the baboon a powerful model to deduce the physiological and/or metabolic function of these proteins in the eye.

Background

The olfactomedin (OLFM) family is a group of glycoproteins originally identified in bullfrogs (Rana catesbeiana) as the major component of the olfactory mucus layer, which surrounds the chemosensory dendrites of olfactory neurons [1]. Subsequently these proteins were found in the brain of species ranging from Caenorhabditis elegans to Homo sapiens [2]. OLFM proteins share a C-terminal domain of ~250 amino acids known as OLF [3].
Based on this domain, the OLF family was classified into seven subfamilies by phylogenetic analysis (designated by roman numerals I to VII) [4]. The biological functions of proteins which possess the OLF domain remain for the most part elusive. A growing body of evidence indicates that these proteins may play very important roles in normal development and pathology [5]. For example, mutations in the OLF domain of myocilin were closely associated with primary open angle glaucoma [6].

Open Access. Biological Research. *Correspondence: iramrodriguez@gmail.com. 6 Departamento de Genética, Universidad Autónoma de Nuevo León, Hospital Universitario "Dr. José Eleuterio González", 64460 Monterrey, Nuevo León, Mexico. Full list of author information is available at the end of the article.

Noelin-1 (OLFM-1) was found to play an important role in vertebrate neural crest development [7] and it is involved in frog (Xenopus laevis) neurogenesis [8]. Olfactomedin-likes (OLFML) are other members of the OLF family. Some of their members are the glycoproteins OLFML2A and OLFML2B, also known as photomedin-1 and photomedin-2, respectively, which were described in mice in 2005 [3]. OLFML2A and OLFML2B proteins are members of subfamily IV [2]. The human OLFML2A and OLFML2B genes are located on chromosomes 9q33.3 and 1q23.3, respectively. These two genes are composed of at least 8 exons, and have lengths of 37.3 and 40.7 kb, respectively [9]. In mouse, OLFML2A and OLFML2B cDNAs have open reading frames (ORF) encoding 681 and 746 amino acid (aa) residues, respectively. Both proteins have a signal sequence at their N-terminal followed by two tandem CXCXCX9C motifs, a putative coiled-coil region, a serine/threonine-rich region, an OLF domain in their C-terminal, and two or three potential N-glycosylation sites. Both genes are expressed in the adult retina of mouse, where they show mutually exclusive expression patterns.
OLFML2A was predominantly detected in the photoreceptor layer, while OLFML2B is present in ganglion cells and inner nuclear layers, the inner segment of the photoreceptor layer, and the retinal pigmented epithelium [3]. Currently, the functions of the OLFML2 (A and B) proteins are still not clear and it is unknown whether they are expressed in the retina of other mammals. Based on the above, the aims of this study were: (1) to clone and sequence OLFML2A and OLFML2B cDNAs from the retina of baboons (Papio hamadryas) and humans; and (2) to identify the cell layers where these proteins are expressed, as a key to elucidate OLFML function in eye physiology.

Baboon's biological specimens

Animal protocols were designed and developed according to the ethical guidelines of the Institutional Animal Care and Use Committee of the Texas Institute of Biomedical Research (TIBR). The baboon (Papio hamadryas) colony is preserved at the Southwest National Primate Research Center in San Antonio, Texas, USA. All the animals have the same diet and share similar environmental conditions. Baboons were gang-housed and fed ad libitum a standard low-fat chow diet (Harlan Teklad 15% monkey diet, 8715). The complete eyes of three adult female baboons (15, 16 and 18 years old) were collected. One eye was frozen in liquid nitrogen for RNA extraction and the other was fixed in 4% formaldehyde for the immunofluorescence assays.

Human's biological specimens

Biopsies from human eyes were collected at the Department of Ophthalmology of the "Dr. José Eleuterio González" University Hospital of the Universidad Autónoma de Nuevo León, in Monterrey, Mexico. Specimens came from programmed eye surgery procedures where ocular tissues were removed by medical indication. All patients signed an informed consent according to the ethics committee guidelines of the institution. Biological samples were immediately immersed in RNAlater solution (Ambion Inc., Austin, TX, USA) for RNA extraction and stored at −70 °C until their use.
Paraffin-embedded biopsies used in the immunofluorescence assays were provided by the Pathology Department of the Instituto de Oftalmología Fundación de Asistencia Privada Conde de Valenciana IAP, in Mexico City. The characteristics of the human eye specimens are shown in Table 1.
Reverse transcription and polymerase chain reaction
Each ocular piece from the three adult baboons was dissected, and the retina, cornea, lens, sclera, iris, choroid and optic nerve were separated. Total RNA was extracted from the eye tissue samples with Trizol reagent according to the manufacturer's instructions (ThermoFisher Scientific, Waltham, MA, USA). RNA was treated with RQ1 DNAse (Promega, Madison, WI, USA) for 15 min at 37 °C to remove traces of genomic DNA. RNA purity and integrity were assessed by standard spectrophotometry and gel electrophoresis, respectively. Complementary DNA (cDNA) was synthesized using total RNA (1 µg), the High Capacity cDNA Reverse Transcription kit (ThermoFisher Scientific) and an oligo (dT) 12-18 primer (ThermoFisher Scientific) in a 60 µL total reaction volume. A primer set to amplify the baboon OLFML2A and OLFML2B transcripts was designed using the human sequences as templates (accession numbers NM_182487 and AK316154, respectively), with an online tool [10]. For OLFML2A, the sense primer was 5′-CAGGCAGAGCGGGCGAAG-3′ and the antisense primer 5′-AATATTTGCGGACTGGGTCA-3′; for OLFML2B, the sense primer was 5′-AAGGGGCTGAGGACACTCTT-3′ and the antisense primer 5′-GGAGGATGAGACCAGCACAT-3′. PCR was carried out using 100 ng of cDNA, 0.4 µM of each primer and the GoTaq PCR master mix kit (Promega, Valencia, CA, USA). The amplification reaction was carried out in a Veriti 96-Well Thermal Cycler (ThermoFisher Scientific).
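As a quick plausibility check on the primer pairs listed above, the short sketch below computes their GC content and a rough melting-temperature estimate via the Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C) °C, a rule of thumb valid only for short oligos); this is an illustration, not the online design tool the authors used.

```python
# Sanity-check the four primers reported in the methods: length, GC fraction,
# and a rough Wallace-rule Tm estimate (short-oligo rule of thumb only).
PRIMERS = {
    "OLFML2A_F": "CAGGCAGAGCGGGCGAAG",
    "OLFML2A_R": "AATATTTGCGGACTGGGTCA",
    "OLFML2B_F": "AAGGGGCTGAGGACACTCTT",
    "OLFML2B_R": "GGAGGATGAGACCAGCACAT",
}

def gc_fraction(seq):
    """Fraction of G/C bases in the primer."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Wallace rule: Tm ≈ 2*(A+T) + 4*(G+C), in °C."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

for name, seq in PRIMERS.items():
    print(f"{name}: {len(seq)} nt, GC {gc_fraction(seq):.0%}, Tm ≈ {wallace_tm(seq)} °C")
```

The Wallace estimates land in the usual 55–65 °C window for PCR primers, consistent with the 60 °C annealing step used in the amplification program.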
The amplification program was as follows: an initial denaturation step of 4 min at 94 °C; 40 cycles of 30 s at 94 °C, 30 s at 60 °C, and 90 s at 72 °C; and a final elongation step of 6 min at 72 °C. The PCR products were separated on 0.8% agarose gels stained with ethidium bromide and visualized under UV light.
Molecular cloning and sequence analysis
The amplified products were cloned in the 3.5-kb XL-TOPO vector and transformed into electrocompetent Escherichia coli Top 10 cells according to the manufacturer's specifications (Invitrogen, Carlsbad, CA, USA). Positive clones were sequenced with the Big Dye Terminator Cycle Sequencing Kit v3.1 using specific oligos and/or M13 universal primers. The reactions were analyzed on the ABI PRISM 3100 Genetic Analyzer using Sequencing Analysis Software v5.3 (Applied Biosystems, Foster City, CA, USA). The sequences obtained were subjected to a BLAST search to confirm their identity.
Phylogenetic analysis
The sequences obtained from the clones were aligned with the reported human orthologous genes (GenBank: NM_182487 and AK316154) using the CLUSTAL W program [11], followed by manual corrections where needed. Protein sequences were derived by conceptual translation of the coding sequences. From the amino acid sequences, a phylogenetic tree was built with MEGA 6.06 software [12] using the maximum likelihood (ML), neighbor-joining (NJ) and UPGMA methods; a bootstrap test was then done with 1000 replicates [13]. The sequences used in this study are listed in Table 2. Seeking to identify the evolutionary forces that underlie the process of divergence in the primate OLFML2A and OLFML2B genes, we tested the hypotheses of positive or adaptive evolution (d N > d S ), purifying selection (d N < d S ), and neutrality (d N = d S ).
For this purpose, we first calculated the non-synonymous distance d N (substitutions causing an amino acid change) and the synonymous distance d S (substitutions not causing an amino acid change) by the Li-Wu-Luo method (Kimura 2-parameters) [14] from the OLFML2 coding sequences of apes, OWM and NWM together with their lemur counterpart. Second, we tested whether d N is significantly greater than, lower than, or equal to d S using a codon-based Z test of selection as implemented in MEGA 6.06 software [12]. Differences were considered statistically significant at P < 0.05.
RT-PCR, molecular cloning and sequence analysis
The OLFML2A and OLFML2B coding sequences were of the expected sizes (1947 and 2262 bp, respectively) and no other bands were detected as possible isoforms. In human samples, we detected amplification in retina, cornea and lens, while in baboon, amplification was detected in retina, cornea, lens and iris (Table 3; Fig. 1). The novel baboon OLFML2A and OLFML2B mRNA sequences were deposited in the GenBank database under the accession numbers KU587785 and KU587786, respectively. Both sequences contain the full CDS, encoding predicted proteins of 648 and 753 amino acids in length, respectively. The baboon OLFML2A and OLFML2B CDS nucleotide sequences have 96% similarity with their human orthologs; at the amino acid level, the similarity between baboon and human is 98%. Two phylogenetic trees were built, one for each protein. The OLFML2A tree (Fig. 2) shows four clades in a lineage-specific manner, corresponding to apes, OWM, NWM, and lemur (out-group); this confirms the orthology between the primate OLFML2A genes. The OLFML2B tree (Fig. 3) shows the same clades, but not in a lineage-specific manner; NWM act as the out-group, so orthology is not clear. Bootstrap values are shown on the trees' branches. Similar results were obtained using the maximum likelihood (ML), neighbor-joining (NJ) and UPGMA phylogenetic methods.
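As a toy illustration of the synonymous/non-synonymous counting that underlies such selection tests (the study itself used Li-Wu-Luo distances and MEGA's codon-based Z test, which additionally normalize by the numbers of synonymous and non-synonymous sites), the sketch below merely classifies single-site codon differences between two aligned CDS fragments; the sequences and the simplified treatment are illustrative assumptions only.

```python
# Toy classifier of codon differences as synonymous vs non-synonymous.
# NOT the Li-Wu-Luo method: no site counting, no multiple-hit correction.
from itertools import product

# Standard genetic code, built from the 64 codons in TCAG order
# (last codon position cycles fastest, matching itertools.product).
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AMINO)}

def count_differences(seq1, seq2):
    """Count synonymous vs non-synonymous single-site codon differences
    between two aligned, in-frame coding sequences."""
    syn = nonsyn = 0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        diffs = [j for j in range(3) if c1[j] != c2[j]]
        if len(diffs) != 1:          # skip identical or multi-hit codons (toy)
            continue
        if CODE[c1] == CODE[c2]:
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn

# Hypothetical aligned CDS fragments, for illustration only.
s1 = "ATGTTTAAAGGG"
s2 = "ATGTTCAAGGGG"
print(count_differences(s1, s2))
```

An excess of synonymous over non-synonymous changes across many such sites is the raw signal that, after proper site normalization, yields d N < d S under purifying selection.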
We confirmed that OLFML2A and OLFML2B evolution fits the hypothesis of purifying selection (d N < d S , P < 0.05); see Table 4.
Localization of OLFML2A and OLFML2B proteins in the eye of baboons and humans by immunofluorescence assays
After identifying the ocular tissues that express both mRNAs, and in an effort to determine the retinal cell types that express the genes, we performed immunoreactivity (IR) analyses in baboon and human retina. IR signals for OLFML2A and OLFML2B in baboon retina are shown in Figs. 4 and 5, respectively, while human IR for OLFML2A and OLFML2B is shown in Fig. 6 (see also Table 3). More analysis is required in normal human tissue. Only in baboon retina did we perform a double immunolocalization of OLFML2A and OLFML2B with β-tubulin 3 beta chain, a cytoskeletal protein currently used as a neuronal cell marker in the developing and mature nervous system, to differentiate retinal ganglion cells from astrocytes (Figs. 4, 5).
Discussion
Olfactomedin was originally identified as the major component of the mucus layer that surrounds the chemosensory dendrites of olfactory neurons [15]. Subsequently, a vast number of proteins that share a ~250 amino acid domain homologous to olfactomedin were discovered in animals ranging from nematodes to humans [4]. Among these are the olfactomedin-like 2 proteins (OLFML2A and OLFML2B), also known as photomedins (-1 and -2, respectively), which were first identified and characterized in mouse retina [3]. So far, however, it was not known whether primates such as baboon and human express these photomedins, even though some olfactomedin proteins, like myocilin, are associated with eye diseases such as glaucoma. Based on the above, in the present study we cloned, sequenced and characterized the olfactomedin-like 2 cDNAs (OLFML2A and OLFML2B) from different ocular tissues of baboon (Papio hamadryas) and human. This is the first study to identify expression of these genes in the eye of primates.
In mouse retina, northern blot analysis revealed two RNA transcripts for OLFML2A (5 and 3.5 kb) and one for OLFML2B (3.5 kb) [15]; the authors suggested that the two RNA species for OLFML2A could be due to alternative splicing [3]. Two OLFML2A mRNA variants have also been reported in human podocyte cells [9]. However, in our study we did not find other transcripts that might indicate isoforms derived from alternative splicing. This may be due to differences in the animal models and tissues studied; moreover, the RNA transcripts found in mouse were not cloned and sequenced [3]. It is, however, well known that some members of the olfactomedin subfamily are expressed in the eye [15,16]. Similarly, the expression pattern suggests that the mechanisms of regulation of gene expression are similar in the two species, and that the OLFML2A and OLFML2B genes might have similar physiological effects; however, more studies are needed. The most extensively studied olfactomedin protein to date is myocilin (MYOC), which was first discovered in human trabecular meshwork cells [17,18]. Several studies suggest that MYOC plays an important role in the regulation of ocular hypertension, a major risk factor for glaucoma, a leading cause of blindness [15,19]. The trabecular meshwork is a connective tissue that regulates the outflow at the iridocorneal angle of the eye and, hence, controls intraocular pressure [19]. Aqueous humor is continually produced by the ciliary body and is in direct contact with the anterior surface of the lens, iris, and corneal endothelial cells before draining out of the eye via the trabecular meshwork [20]. MYOC expression has been observed in cornea, ciliary body, iris, sclera, optic nerve and retina in the human and mouse eye [21][22][23].
It is known that these tissues obtain their nutrients from the aqueous humor and also export their metabolites into it, allowing an exchange with neighboring tissues [20]. It is also known that glaucoma is a group of progressive, multifactorial neurodegenerative diseases characterized by the loss of retinal ganglion cells (RGCs), optic nerve excavation, and axonal degeneration leading to irreversible vision loss [24]. Interestingly, we found expression of the photomedins (Table 3) in some of the tissues that express MYOC (cornea, lens, iris, and retina). The function of MYOC is still not known [19]; however, it has been reported that MYOC may interact with another olfactomedin known as optimedin (OLFM3). These two proteins are expressed in human trabecular meshwork and retina, and may be involved in glaucoma [25]. It would be interesting to study the interaction of myocilin and photomedins and their correlation in ocular pathologies. The functional roles of the olfactomedin proteins in the retina are still not known [17]. Olfactomedins appear to be critical mediators of nervous system development and hematopoiesis [19]. Other members have been associated with human disease processes such as glaucoma and cancer [4,19]. Overexpression studies and inhibition of protein expression in zebrafish embryos showed that Noelin (olfactomedin 1) has a profound effect on eye development, eye size, the projection field of retinal ganglion cells to the optic tectum, and the extension and branching of retinal ganglion cell axons [19,26]. Further studies in zebrafish showed that Noelin promotes retinal ganglion cell axon growth [27]. OLFM1 and OLFM2 are preferentially expressed in the developing retinal ganglion cells of rat and mouse [16]. In the zebrafish eye, OLFM2 was detected in the retinal ganglion cell layer and the inner nuclear layer [28]. OLFML2 in humans was found by RT-PCR in corneal endothelium, uvea, lens and retina-RPE.
In baboon, it was found in cornea, lens, iris, and retina-RPE [17]. Other authors have reported the expression of OLFM4 in mouse Müller glial cells. In the mouse retina, OLFML2A was selectively expressed in the outer segment of photoreceptor cells, while OLFML2B was expressed in all retinal neurons. The binding of these proteins to molecules such as chondroitin sulphate-E and heparin suggests that photomedins-1 and -2 are extracellular proteins capable of binding proteoglycans [3]. OLFML3 may play a possible role in angiogenesis in ocular tissues, and it has been proposed that this protein may play a role in anterior segment and retinal diseases [17].
Positive selection (d N > d S ) implies that the substitutions, mostly non-synonymous, are functional and benefit the organism, conferring some evolutionary advantage, while purifying selection (d N < d S ) indicates that deleterious non-synonymous changes are being removed, i.e., that the gene is under functional constraint. The d N and d S rates show that the evolutionary force actually acting on these genes is purifying selection (P < 0.05). This supports the conclusion that these genes are functional in the studied species, since functional genes characteristically evolve under purifying selection. The similar expression profiles of the human and baboon OLFML2A and OLFML2B genes suggest that they have similar binding sites for known transcription factors.
The phylogenetic relationship between NWM, OWM and ape OLFML2A proteins was determined to evaluate their evolution in primates. The phylogenetic tree (Fig. 2) shows three clades in a lineage-specific manner; these clades correspond to NWM, OWM and apes, with galago as the out-group. The tree's topology, branch lengths, and bootstrap values are similar using either phylogenetic method (ML/NJ/UPGMA). This confirms a clear orthology within the OLFML2A gene, while OLFML2B orthology is not clear (Fig. 3): that tree's topology does not fit a lineage-specific pattern. This could have several explanations; for example, the Ma's night monkey sequence is shorter than the rest, or more species may need to be included in the study. Taken together, these findings indicate that olfactomedins play essential roles in development and cell differentiation, that their effects are mediated through intercellular interactions, sometimes with other proteins or extracellular matrix components [15,19], and that some olfactomedins are implicated in important pathologies. OLFML2A and OLFML2B seem to play an important role in ocular tissues; however, the functions of these olfactomedins are still unknown. Therefore, further studies are needed to elucidate the role of these proteins in embryonic development and to investigate their biological functions, protein interactions and involvement in disease.
Fig. 4 OLFML2A immunodetection in the retina of adult baboons. Confocal images of double stained retina sections to identify cells expressing OLFML2A (red; 1st Ab: rabbit polyclonal anti-human OLFML2A 1:500; 2nd Ab: goat anti-rabbit IgG-Cy3® 1:4000), β-Tubulin (1st Ab: mouse monoclonal anti-mammal Tubulin 3 beta chain 1:250, 2nd Ab: goat anti-mouse IgG FITC 1:250) and Glial Fibrillary Acid Protein (1st Ab: mouse monoclonal anti-GFAP 1:300, 2nd Ab: goat anti-mouse IgG FITC 1:250) in baboon retina. Cell nuclei were labeled with DAPI (blue). GCL ganglion cell layer, IPL inner plexiform layer, INL inner nuclear layer, ONL outer nuclear layer, RL rod layer, PE pigmented epithelium
Conclusions
The function of olfactomedin proteins in the eye, especially OLFML2A and OLFML2B, is still unknown; much work is needed to clarify their actual role. Due to the high similarity between baboon and human olfactomedin expression, the baboon is a powerful model for deducing the physiological functions of these proteins in the eye.
Figure caption (panels b, c): b Confocal images of double stained retina sections to identify cells expressing OLFML2A (red; 1st Ab: rabbit polyclonal anti-human OLFML2A 1:500; 2nd Ab: goat anti-rabbit IgG-Cy3® 1:4000) and β-Tubulin (green; 1st Ab: mouse monoclonal anti-mammal Tubulin 3 beta chain 1:250) in baboon retina. c Confocal images of double stained retina sections to identify cells expressing OLFML2A (red; 1st Ab: rabbit polyclonal anti-human OLFML2A 1:500; 2nd Ab: goat anti-rabbit IgG-Cy3® 1:4000) and Glial Fibrillary Acid Protein (green; 1st Ab: mouse monoclonal anti-GFAP 1:300, 2nd Ab: goat anti-mouse IgG FITC 1:250) in baboon retina. Cell nuclei were labeled with DAPI (blue). GCL ganglion cell layer, INL inner nuclear layer, ONL outer nuclear layer, PL photoreceptor layer
Novel Activated Carbon Nanofibers Composited with Cost-Effective Graphene-Based Materials for Enhanced Adsorption Performance toward Methane
Various types of activated carbon nanofiber (ACNF) composites have been extensively studied and reported recently due to their extraordinary properties and applications. This study reports the fabrication and assessment of ACNFs incorporated with graphene-based materials, known as gACNFs, via simple electrospinning and a subsequent physical activation process. TGA analysis showed that graphene-derived rice husk ash (GRHA)/ACNFs possess twice the carbon yield of the other samples, as well as thermally stable properties. Raman spectra, XRD, and FTIR analyses explained the chemical structures of all resultant gACNF samples. The SEM and EDX results revealed the average fiber diameters of the gACNFs, ranging from 250 to 400 nm, and the successful incorporation of both GRHA and reduced graphene oxide (rGO) into the ACNF structures. The results revealed that ACNFs incorporated with GRHA possess the highest specific surface area (SSA), of 384 m2/g, with a high micropore volume, of 0.1580 cm3/g, which is up to 88% of the total pore volume. The GRHA/ACNF was found to be a better adsorbent for CH4 than pristine ACNFs and rGO/ACNF, showing sorption of up to 66.40 mmol/g at 25 °C and 12 bar; this sorption capacity is impressively higher than in earlier reported studies on ACNFs and ACNF composites. Interestingly, the CH4 adsorption of all ACNF samples obeyed the pseudo-second-order kinetic model at low pressure (4 bar), indicating chemisorption behavior, but obeyed the pseudo-first-order model at higher pressures (8 and 12 bar), indicating physisorption behavior.
These results correspond to the textural properties, which indicate that the high adsorption capacity of CH4 at high pressure depends mainly on the specific surface area (SSA), the pore size distribution, and a suitable range of pore sizes.
Introduction
Fossil-based fuels are still the dominant fuel for vehicles. Their combustion releases harmful by-product gases such as oxides of sulfur and nitrogen [1], smoke, and particulate matter, as well as carbon monoxide [2]. Carbon dioxide, as the main combustion product, is the primary reason for greenhouse […] (99.999%), nitrogen (N2; 99.999%), carbon dioxide (CO2; 99.999%), and methane (CH4; 99.999%) gases were purchased from Alpha Gas Solution Sdn. Bhd.
Graphene Preparation from Rice Husk Ash
Rice husk ashes (RHA) were produced by heat treating rice husk in air at 200 °C, followed by grinding for several minutes to form a powder. The transformation of RHA into a graphene-based structure was done by a chemical activation method [16]. In this method, a 1:5 ratio of RHA:KOH was packed into a porcelain crucible and covered with ceramic wool. The crucible was then placed inside a larger graphite crucible, the top of which was covered with carbon powder and ceramic wool (1:1) to prevent oxidation during the high-temperature treatment. Subsequently, the RHA sample was annealed at 850 °C at a heating rate of 5 °C/min in air. Deionized (DI) water was then used to wash the resultant RHA several times to remove excess KOH and other impurities. The sample was centrifuged and sonicated to obtain the supernatant, which was filtered under vacuum and left to dry overnight in an oven at 80 °C. The graphene derived from RHA is referred to as GRHA [18].
Synthesis of Reduced Graphene Oxide (rGO)
Natural graphite powder was used as the precursor in the synthesis of graphene oxide (GO) through Hummers' method [19].
In brief, 150 mL of H2SO4 (95-98%) was added to a mixture of graphite powder and NaNO3 (1/1 weight/weight ratio). The solution was stirred at a temperature below 20 °C in an ice bath. Then, 18 g of KMnO4 was slowly added to the solution, also at low temperature. After that, the temperature of the solution was slowly increased; as it reached 35 °C, the mixture was stirred for another 30 min. Then, DI water (300 mL) was added to form a yellowish-brown solution. Subsequently, the beaker was removed from the ice bath, the temperature of the solution was slowly increased to 98 °C, and the mixture was stirred overnight. Next, 300 mL of 30% H2O2 was introduced into the mixture. After yellow bubbles appeared in the solution, 5% HCl (1000 mL) was added to remove metal ions and acid. The solution was then washed with DI water several times until a neutral pH was achieved. The suspension was filtered by vacuum filtration and the obtained GO was further dried under vacuum at 50 °C for 24 h. The GO sample was activated using CO2 at 900 °C. Finally, the thermal reduction method of Zhao et al. (2010) [20] was applied to obtain the rGO.
Fabrication of Activated Carbon Nanofibers' Nanocomposites (gACNFs)
Fifty mL of a dope solution of 8 weight percent (wt%) PAN in DMF was used to produce nanofibers (NFs) through electrospinning. Prior to electrospinning of the NF composites, 1 wt% GRHA (relative to the polymer weight) was first dispersed in DMF and simultaneously stirred and sonicated for a few hours at room temperature. Then, PAN was added to the solution, which was continuously stirred for another 24 h to obtain a homogeneous solution. The same method was repeated with rGO for the rGO/NF composite, and without any additive for the pristine NFs.
Electrospinning and Pyrolysis of Nanofibers
The applied electrospinning parameters were taken from previous works [21].
In brief, the injection flow rate was 1.0 mL/h, the high-voltage power supply was set to 10 kV, and the distance between the needle tip and the collector was 15 cm. Furthermore, the chamber was kept at 50% relative humidity (RH) and 32.5 °C [22]. The pristine NFs were denoted NF, and the composite NFs with GRHA and rGO were denoted GRHA/NF and rGO/NF, respectively. The electrospun NFs were subjected to a three-stage pyrolysis process to produce ACNFs, consisting of thermal stabilization (oxidation), carbonization, and activation. Prior to heating, the NF samples were placed in a porcelain combustion boat inside a horizontal quartz tubular furnace (Carbolite CTF 12/65/550 with Eurotherm 2416 CC temperature control system). Stabilization ran from room temperature to 275 °C under a flow of air at a heating rate of 2 °C/min. The stabilized NFs were then carbonized up to 600 °C under an N2 atmosphere at a heating rate of 5 °C/min, and physically activated with CO2 up to 700 °C at 5 °C/min. The resting time and gas flow rate were fixed at 30 min and 0.2 L/min, respectively, throughout the pyrolysis process. The fabrication parameters of all samples are summarized in Table 1.
Characterizations
The thermal behavior of the samples was analyzed by thermogravimetric analysis (TGA) under a nitrogen atmosphere at a heating rate of 10 °C/min over 50-700 °C (TG analyzer with differential scanning calorimeter, DSC; model STA8000). The structural variation of the ACNF samples was identified using a Raman spectrometer (RAMAN plus, Nanophoton). X-ray diffraction (XRD, Rigaku SmartLab) analysis was performed using Cu Kα radiation (λ = 1.54184 Å) at a scanning rate of 1.5°/min.
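As a back-of-the-envelope check, the three-stage pyrolysis schedule described above (ramps of 2, 5, and 5 °C/min with the stated 30-min resting time after each stage) can be timed out as follows; the 25 °C starting temperature is an assumption.

```python
# Rough furnace-time estimate for the three-stage pyrolysis schedule.
# Stage tuples: (name, start °C, end °C, ramp rate °C/min).
STAGES = [
    ("stabilization (air)",  25, 275, 2.0),
    ("carbonization (N2)",  275, 600, 5.0),
    ("activation (CO2)",    600, 700, 5.0),
]
HOLD_MIN = 30  # resting time per stage, from the methods

total = 0.0
for name, t0, t1, rate in STAGES:
    ramp = (t1 - t0) / rate
    total += ramp + HOLD_MIN
    print(f"{name}: ramp {ramp:.0f} min + hold {HOLD_MIN} min")
print(f"total ≈ {total / 60:.1f} h")
```

Under these assumptions the whole run takes about five hours, dominated by the slow 2 °C/min stabilization ramp.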
The IR spectra of the ACNFs were obtained by pressing the powdered ACNFs into potassium bromide (KBr) pellets for Fourier-transform infrared (FTIR, Thermo Scientific/Nicolet iS10) analysis over a scanning range of 4000-1000 cm−1. The diameter and morphology of the as-prepared ACNF samples were analyzed using a scanning electron microscope (SEM; JSM 6701-F, JEOL, Japan) equipped with energy-dispersive X-ray spectroscopy (EDX; Hitachi Co. Ltd., Japan) to determine the elemental mapping of the samples. Prior to N2 adsorption measurements, the ACNFs were first degassed at 350 °C under vacuum (1 × 10−1 kPa) for 3 h. After this pretreatment, pore texture characterizations were carried out in a MicrotracBEL Belsorp-max porosity analyzer with N2 (99.9999% purity) at a temperature of −196 °C for adsorption-desorption experiments. From these data, the SSA, total pore volume, and mean pore diameter of the ACNFs were calculated by the Brunauer-Emmett-Teller (BET) method. The micropore surface area and micropore volume of the ACNFs were determined by the t-plot and Barrett-Joyner-Halenda (BJH) methods, respectively, using the BELSORP analysis program software. All characterizations of SSA, pore volume, and pore size distribution of the resulting ACNF samples from N2 adsorption-desorption measurements were performed at least in triplicate.
Methane Adsorption Performance via Volumetric Method
An amount of 0.3 g of each ACNF sample was weighed and dried in a vacuum oven for 24 h at 150 °C. After complete drying, the ACNFs were weighed again and loaded into the adsorption cell, as detailed in previous work [23]. Meanwhile, CH4 was injected into the loading cell until the desired pressure was reached (4, 8, or 12 bar). To start the adsorption test, the valve between the adsorption and loading cells was opened to let the CH4 from the loading cell pass through the ACNFs located in the adsorption cell.
The pressure changes in both cells were recorded continuously at 5-min intervals until the equilibrium pressure was achieved, indicated by a constant pressure reading for about 10 min. The adsorbed amount of CH4 was calculated according to Nasri et al. (2014) using Equation (1), where q is the amount of CH4 adsorbed, m is the mass of the adsorbent (g), V is the volume (cm3), R is the gas constant, P is the pressure (bar), T is the temperature (K), subscript a denotes the adsorption cell, l the loading cell, i the initial state, eq the equilibrium state of the final adsorption, and Z is the compressibility factor.
Adsorption Kinetics
The adsorption of CH4 onto the ACNFs was modeled using the pseudo-first- or pseudo-second-order kinetic model, Equations (2) and (4), respectively:

dq_t/dt = k_1 (q_e − q_t)    (2)

which can be rewritten in linear form as Equation (3):

ln(q_e − q_t) = ln(q_e) − k_1 t    (3)

dq_t/dt = k_2 (q_e − q_t)^2    (4)

which can be rewritten in linear form as Equation (5):

t/q_t = 1/(k_2 q_e^2) + t/q_e    (5)

where q_t is the amount of adsorbed CH4 at any time (mmol/g), q_e is the amount of adsorbed CH4 at equilibrium (mmol/g), and k_1 and k_2 are the rate constants for the pseudo-first- and pseudo-second-order models, respectively.
Physicochemical Properties of the gACNFs
TGA thermograms of the pristine and the composite NFs are shown in Figure 1. All samples show two stages of decomposition. The first stage (~5 wt.%) occurred at 285-320 °C and slowed down at 340-550 °C. This first-stage weight loss can be ascribed to the decomposition of inorganic components and loss of moisture of the PAN polymer [24,25]. PAN-based NFs have been found to degrade at a considerably lower temperature (95 to 120 °C) [26]. However, in this study, the degradation started at a higher temperature (285 °C), most likely because of the cross-linking of PAN chains forming an aromatic ladder structure that prevents melting of the NFs, as reported earlier [27].
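To make the kinetic analysis concrete, the sketch below fits a synthetic uptake curve to the two linearized forms, pseudo-first-order ln(q_e − q_t) = ln(q_e) − k_1·t and pseudo-second-order t/q_t = 1/(k_2·q_e²) + t/q_e, with a plain least-squares routine; the uptake data are synthetic and for illustration only, not the measured ACNF data.

```python
# Fit synthetic uptake data q(t) to the linearized pseudo-first-order (PFO)
# and pseudo-second-order (PSO) kinetic forms and compare R^2 values.
import math

def linfit(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Synthetic uptake: equilibrium qe = 60 mmol/g approached exponentially
# (i.e., data generated exactly by the PFO model with k1 = 0.05 1/min).
qe_true, k1_true = 60.0, 0.05
t = [5.0 * i for i in range(1, 20)]                          # minutes
q = [qe_true * (1 - math.exp(-k1_true * ti)) for ti in t]    # mmol/g

# PFO linearization: slope = -k1, intercept = ln(qe).
a1, b1, r1 = linfit(t, [math.log(qe_true - qi) for qi in q])
# PSO linearization: slope = 1/qe, intercept = 1/(k2*qe^2).
a2, b2, r2 = linfit(t, [ti / qi for ti, qi in zip(t, q)])

print(f"PFO: k1 = {-b1:.3f} 1/min, qe = {math.exp(a1):.1f} mmol/g, R2 = {r1:.4f}")
print(f"PSO: qe = {1 / b2:.1f} mmol/g, R2 = {r2:.4f}")
```

Because the synthetic data follow the PFO model exactly, the PFO fit recovers the true parameters with R² ≈ 1, while the PSO fit is only approximate; comparing R² between the two linearized fits is the same model-selection logic the study applies at each pressure.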
Formation of a stable 3D cyclized cyano-group structure in the chain segments of the PAN polymer was also possible during the cross-linking in the oxidative atmosphere at lower temperature (200-300 °C) [28]. The second stage of weight loss starts around 500 °C, with a dramatic weight loss (>50%) as the temperature gradually increased up to 700 °C. At 700 °C, both pristine ACNF and rGO/ACNF exhibited a similar carbon yield of ~25.1 wt.%, while the yield for GRHA/ACNF was ~44.5 wt.%, almost twice that of the others. The high yield of the GRHA/ACNF was possibly due to the presence of silica, which improved the thermal stability [29,30]. The second stage of degradation can also be ascribed to further aromatization of the formed cyclic structures: at higher temperatures, above 700 °C, hydrogen was evolved and the rings became aromatic [26,31]. Raman spectra of the pristine ACNFs and modified ACNFs are presented in Figure 2. In Raman spectra, there are three important bands, known as the D, G, and 2D bands, used to determine the crystallinity of graphite-based materials. From the spectra in Figure 2, the most prominent peaks can be observed at 1350, 1590, and 2680 cm−1 in all samples, representing the D, G, and 2D bands, respectively [32,33]. All samples exhibited strong D and G bands and an extremely broad 2D band. The presence of the D band in the spectra was attributed to the existence of disordered carbonaceous structure, while the G band indicated the presence of ordered graphitic structure [34]. Meanwhile, the 2D band was produced by a phonon-scattering process and is also associated with the presence of graphene layers in materials [35].
The D band was higher than the G band, indicating more disordered structures in the ACNFs (Figure 2). This result is supported by the "R-value", or intensity ratio, of the samples: the smaller the R-value, the more ordered the graphite crystallites [34]. The R-values of the pristine ACNFs, rGO/ACNF, and GRHA/ACNF were 1.17, 1.40, and 3.17, respectively, which indicated that the addition of rGO or GRHA promoted the formation of more disordered or defective graphitic structures in the ACNFs. According to Liu and Wilcox (2011) [36], gas adsorbates show stronger binding interactions with defective sites on the surface of adsorbents than with the surface of perfect adsorbents. Figure 3 shows the XRD spectra of the NFs prior to and after activation. Prior to activation, the materials contain random microcrystalline carbon fragments in amorphous form, possibly due to the existence of various inorganic compounds and impurities. However, there are two distinct broad peaks at 17.6° and 28° in all samples prior to activation, most likely corresponding to the (100) and (110) crystallographic planes of semi-crystalline PAN [37,38]. After activation, the spectra exhibit very broad diffraction peaks with no sharp peak, revealing that all the resultant ACNFs were predominantly amorphous. The spectra showed one major, high, broad peak at 26° and another weak, broad peak at 43°.
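The shift of the (002) peak from ~28° (reported for crystalline graphite) to ~26° here implies a larger interlayer spacing; Bragg's law, d = λ / (2 sin θ), with the Cu Kα wavelength quoted in the characterization methods (λ = 1.54184 Å), makes this concrete.

```python
# Interlayer spacing from Bragg's law, d = lambda / (2 sin theta),
# comparing the (002) peak at ~26° observed here with the ~28° position
# of crystalline graphite (Cu K-alpha radiation, as in the methods).
import math

WAVELENGTH = 1.54184  # Å, Cu K-alpha

def d_spacing(two_theta_deg):
    """d-spacing (Å) for a diffraction peak at the given 2-theta angle."""
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))

for two_theta in (26.0, 28.0):
    print(f"2θ = {two_theta}° → d ≈ {d_spacing(two_theta):.3f} Å")
```

The ~26° peak corresponds to a spacing of roughly 3.4 Å versus roughly 3.2 Å at 28°, consistent with the stated enlargement of the distance between the graphene layers.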
In comparison to the study conducted by Dong et al. (2014) [39], who detected the crystalline graphite peak at 2θ = 28°, the peak obtained in the present study is slightly shifted to lower angle, indicating an enlargement of the distance between the graphene layers. The two peaks at 26° and 43° correspond to the crystallographic planes (002) and (100) of graphitic structures, respectively. The shoulder at 43° in all resultant ACNFs indicates the absence of a repetitively stacked graphitic structure [40]. The FTIR spectra of the NFs prior to activation show the presence of alkynes (C≡C), nitrile groups (C≡N) [28], and alkane stretches (C-H) [41,42].
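The interlayer-spacing argument (a (002) peak at 26° versus the 28° graphite peak of Dong et al. implying enlarged spacing between graphene layers) follows directly from Bragg's law. A small sketch, assuming Cu Kα radiation (λ = 1.5406 Å), which is not specified in this excerpt:

```python
import math

def d_spacing(two_theta_deg, wavelength_angstrom=1.5406):
    """Interlayer spacing from Bragg's law, n*lambda = 2*d*sin(theta), with n = 1.

    1.5406 A is the common Cu K-alpha wavelength (an assumption; the
    diffractometer used in the study is not specified in this excerpt).
    """
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

d_26 = d_spacing(26.0)   # (002) peak observed in the present study
d_28 = d_spacing(28.0)   # crystalline graphite peak reported by Dong et al.
print(f"{d_26:.3f} A vs {d_28:.3f} A")  # → 3.424 A vs 3.184 A
```

The lower-angle peak indeed corresponds to a larger interplanar spacing, consistent with the stated enlargement of the distance between the graphene layers.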
Moreover, the presence of the asymmetric bending and stretching vibrations of surface hydroxyls and adsorbed water was indicated by the appearance of a band at 3200-3600 cm−1 [43]. However, most of the described peaks disappeared due to the decomposition of PAN and the removal of transition compounds at the high activation temperature, leaving only carbon and hydrogen bonds at 1217, 1582, 1750, 1982, and 2180 cm−1, as shown in Figure 4a. Figure 4b reveals the FTIR spectra of pristine ACNFs, rGO/ACNFs, and GRHA/ACNF after activation. All three samples exhibited similar peaks but with different intensities.
The pristine ACNFs exhibited the highest intensities. The appearance of peaks at 1217, 1582, 1750, 1982, and 2180 cm−1 in the spectrum verifies the existence of C-O stretching vibrations of epoxide groups, aromatic -C=C- bonds, C=O stretching, and alkyne (C≡C) stretches, respectively [44]. The disappearance of C≡N after the activation indicates the production of ring structures in the PAN-based ACNFs [24]. As both applied additives were carbon-based materials, no "extra" peak was observed except for a weak, small peak of asymmetric Si-O-Si stretches at 1040 cm−1 [45], due to the presence of silica in the GRHA samples. These results correspond to the EDX analysis, which confirmed the existence of C and O in all samples at different percentages.

Morphologies and Structures

The morphologies of all resultant NFs are shown in Figures 5 and 6. Most of the NFs were stuck to each other, forming an interconnected/fused fibrous structure with a wide range of diameters. It is believed that the formation of the fused fibrous structure could be due to insufficient solvent evaporation from the polymer jets [46]. Yet, this structure showed an insignificant effect on performance.
As these resultant NFs were further carbonized, the fiber diameter was reduced, resulting in high surface area. The changes in the porous characteristics and surface area of the NFs had significant effects on gas adsorption, as detailed later. Figure 5 shows the morphology of the NFs prior to and after activation. Prior to activation, the NFs exhibited a smooth, straight, and almost aligned structure with a minimal number of beads. The average diameter of the NFs ranged from 400-550 nm. After activation at 800 °C, the structure of the NFs became coarser and wrinkled, with the appearance of several beads. The fiber diameter also shrank, to 300-500 nm, due to the vulnerability of the surface toward the heat treatment (loss of water content) and the breakage of hydrogen bonds at increasing temperature, as reported earlier [47]. Moreover, the addition of either rGO or GRHA into the NFs further decreased the diameters, to 250-400 nm (up to 50%). This is because the high conductivity of graphene affects the properties of the dope solution, including its electrical conductivity, which has a major impact on the fiber diameter [33]. Even though the fiber diameters obtained were not in the nanoscale range (<100 nm), the term NFs has been used throughout this study, referring to the incorporation of nanomaterials, such as GRHA and rGO, to produce NF composites.
Figure 6 shows the microstructure morphologies of the pristine and composite ACNFs (rGO/ACNFs and GRHA/ACNFs) after activation. No major change was observed in the morphology of either composite ACNF compared to the original pristine ACNFs (coarser and wrinkled). However, the additives slightly affected the diameter of the ACNFs, with the composite ACNFs possessing smaller diameters. Surprisingly, the composite ACNFs in Figure 6b,c exhibited a beadless structure, an observation reported here for the first time in the literature.
A smooth structure with no beads or agglomeration is needed in order to obtain ACNFs with high SSA, as there are then no beads blocking the surface area during the adsorption process. The mean diameters of the rGO/ACNFs and GRHA/ACNFs ranged from 300 to 500 nm and 200 to 350 nm, respectively. The existence of each element in the resultant ACNFs was confirmed by EDX analysis. Figure 6d shows the EDX mapping of rGO/ACNF, with 92 atomic percent (at.%) of carbon and 8 at.% of oxygen. Because rGO (a carbon-based material) was used as the additive, no other elements or impurities were detected. Meanwhile, for GRHA/ACNF, the EDX mapping obtained from our preliminary studies, as previously reported by Othman et al., was used for comparison with rGO/ACNF. From that report, it can be observed that the GRHA/ACNF composites possessed three important elements in their structure: 94.19 at.% of carbon, 5.43 at.% of oxygen, and 0.38 at.% of silicon [48]. The small amount of silicon observed in the structure proves the existence of silica in the GRHA derived from the rice husk ashes (RHA). Figure 7 shows the SSA and porous structure behavior of all resultant ACNFs, determined by nitrogen (N2) adsorption isotherms. The sharp N2 adsorption at low pressure (<0.1 bar) indicates micropore filling and the monolayer adsorption phase [49]. As the pressure increased beyond 0.1 bar, the isotherms reached a near-plateau (over the range 0.15-0.95), due to multilayer adsorption on the mesopores of the ACNFs. However, as the saturation pressure was approached, a significant improvement in N2 adsorption was observed between the pristine ACNFs and the composite ACNFs, increasing from 60 cm3/g up to 84 cm3/g and 117 cm3/g for rGO/ACNF and GRHA/ACNF, respectively.
To some extent, the adsorption isotherms of the three ACNF samples (ACNF, rGO/ACNF, and GRHA/ACNF) were identical, a combination of Type I and Type IV, indicating the presence of both micropores and mesopores [25,50]. Even though all the plotted curves exhibit similar characteristics, the quantity of N2 adsorbed varied in each sample, denoting variations in pore structure. Interestingly, the amount of N2 adsorbed by GRHA/ACNF was twice that of the pristine ACNF and slightly higher than that of rGO/ACNF. These findings are in agreement with the SSA results (discussed later). Table 2 summarizes the porous structure parameters, including SSA, total pore volume (TPV), micropore volume (Vmicro), and average pore diameter (DPave), of the pristine and composite ACNFs prior to and after activation. It shows that activation increased the SSA of all ACNFs dramatically, thanks to the creation of new micropore structures [51]. There was no significant increment in the SSA of any composite NF sample prior to activation. However, after the physical activation, the SSA value was twice that of the pristine ACNFs. Prior to activation, rGO/ACNF exhibited the smallest DPave value; after activation, however, its value was the largest, as shown in Table 2.
This was probably due to the fast decomposition of rGO during carbonization (around 300-650 °C) (Figure 1), which minimized the catalytic effect of rGO during the activation process; as its decomposition slowed above 650 °C, a larger DPave resulted compared to the other samples. In this study, it was believed that the minimum temperature for the catalytic effect of rGO to take place is >700 °C, in order to produce maximum micropores and pore diameter reduction. (Table 2 abbreviations: SSA = specific surface area; TPV = total pore volume; Vmicro = micropore volume; DPave = average pore diameter. * Micropore volume in NFs prior to activation was negative due to the absence of micropores in the samples.) Table 2 shows that GRHA/NF and GRHA/ACNF exhibited the highest SSA values, 17.8035 m2/g and 384.65 m2/g, respectively, among all the NFs and ACNFs, corresponding to the TPV and Vmicro obtained. In gas adsorption, the surface area as well as a wide range of porous structures (depending on the type and size of the gas molecules) are the main performance-determining factors. Generally, an adsorbent with high SSA and high pore volume is desirable [52]. In Table 2, GRHA/ACNF exhibited the highest SSA, TPV, and Vmicro, of 384.65 m2/g, 0.1785 cm3/g, and 0.1580 cm3/g, respectively. These results agree with the CH4 adsorption performances discussed later. Figure 8 shows the CH4 adsorption performance of all ACNFs at different pressures. In Figure 8a, it can be seen that GRHA/ACNF exhibits the highest CH4 adsorption capacity, of 44.32 mmol/g, followed by rGO/ACNF at 40.52 mmol/g and ACNF at 20.86 mmol/g, at 4 bar. Meanwhile, Figure 8b,c show the CH4 adsorption profiles at 8 bar and 12 bar, respectively. With increasing pressure, the CH4 adsorption capacity of all ACNF samples gradually increased and reached a smooth value at the equilibrium state.
As expected, the adsorption performance of all ACNF samples showed the same trend as at the lower pressure (4 bar): GRHA/ACNF > rGO/ACNF > ACNF.

Adsorption Performance and Kinetic Study of gACNFs

These results correspond well with the N2 adsorption isotherms and the SSA (see Figure 7 and Table 2), in which high SSA was attributed to high adsorption capacity due to physisorption [53]. This means that the adsorption of CH4 was mainly dependent upon the SSA, the pore size distribution, and the ratio of suitable pore sizes [54]. Interestingly, although the obtained SSA of the GRHA/ACNF composite was lower than in some references [55], as tabulated in Table 3, its adsorption performance towards CH4 was significantly higher, making this newly fabricated GRHA/ACNF composite a suitable candidate for a good gas adsorbent. This is possibly due to the well-distributed pore sizes between the micropores (up to 90% of the TPV) and mesopores available in the entire ACNF structure, which played a significant role in the adsorbent-adsorbate interaction. The micropores, ranging from 1.3954 to 2.174 nm, provided large adsorption sites for CH4 molecules with a size of 0.38 nm, which made CH4 adsorption onto the ACNF surface much easier.
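The "up to 90% of the TPV" figure can be checked against the GRHA/ACNF values quoted from Table 2:

```python
# Consistency check of the micropore share of the total pore volume using the
# GRHA/ACNF values quoted in the text (TPV = 0.1785 cm3/g, Vmicro = 0.1580 cm3/g).
tpv = 0.1785      # total pore volume, cm3/g
v_micro = 0.1580  # micropore volume, cm3/g

micropore_fraction = v_micro / tpv
print(f"{micropore_fraction:.1%}")  # → 88.5%, i.e. close to the stated 90% of the TPV
```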
Figure 9 shows the adsorption kinetics of all ACNF samples based on pseudo-first- and pseudo-second-order kinetic models at different pressures. As can be seen, the pseudo-second-order kinetic model exhibited greater correlation coefficients (R2) than the pseudo-first-order kinetic model, with values of 0.9262, 0.9685, and 0.9737 for ACNF, rGO/ACNF, and GRHA/ACNF, respectively, at an adsorption pressure of 4 bar. Among the samples, GRHA/ACNF possessed the highest R2 value, of 0.9737. This suggests that the adsorption of CH4 onto the ACNFs obeyed the pseudo-second-order kinetic model, indicating that the sorption kinetics of CH4 on the microporous structure of the ACNFs involved chemisorption [60]. This result is in good agreement with the N2 adsorption isotherm and SSA data. The finding is supported by a previous study conducted by Tang and co-workers (2007) [61], who also found that ACNF-based adsorbents obeyed the pseudo-second-order kinetic model. Interestingly, at the higher pressures of 8 and 12 bar, all samples seemed to obey the pseudo-first-order kinetic model, with higher R2 values than the pseudo-second-order kinetic model, as tabulated in Table 4. The R2 values of GRHA/ACNF at 8 and 12 bar were 0.9369 and 0.8054, respectively. This is believed to be due to the occurrence of physical adsorption through the formation of multilayers of CH4 molecules on the heterogeneous surface of the ACNFs at higher adsorption pressure.
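The kinetic analysis described here fits uptake data to pseudo-first-order and pseudo-second-order models and compares R² values. A numpy-only sketch using the standard linearized forms and synthetic data (the authors' exact fitting procedure is not given in this excerpt; time points and rate constants below are illustrative):

```python
import numpy as np

def r_squared(y, y_fit):
    """Coefficient of determination of a fit against the raw uptake data."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_pseudo_second_order(t, qt):
    """Linearized pseudo-second-order fit: t/qt = 1/(k2*qe^2) + t/qe."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept
    q_fit = (qe ** 2 * k2 * t) / (1.0 + qe * k2 * t)
    return qe, k2, r_squared(qt, q_fit)

def fit_pseudo_first_order(t, qt, qe_exp):
    """Linearized pseudo-first-order fit: ln(qe - qt) = ln(qe) - k1*t."""
    mask = qt < qe_exp
    slope, _ = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
    k1 = -slope
    q_fit = qe_exp * (1.0 - np.exp(-k1 * t))
    return k1, r_squared(qt, q_fit)

# Synthetic uptake curve that follows second-order kinetics exactly.
t = np.linspace(1, 120, 40)          # time, min (hypothetical sampling)
qe_true, k2_true = 44.3, 0.002       # qe in mmol/g, mimicking GRHA/ACNF at 4 bar
qt = (qe_true ** 2 * k2_true * t) / (1.0 + qe_true * k2_true * t)

qe, k2, r2_2nd = fit_pseudo_second_order(t, qt)
k1, r2_1st = fit_pseudo_first_order(t, qt, qt.max() * 1.05)
print(r2_2nd > r2_1st)  # second-order model fits this data better
```

On real data the same R² comparison decides which model is reported, as in Table 4.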
Conclusions

Incorporation of either GRHA or rGO showed great improvement in the ACNFs' structure as well as their adsorption performance. The adsorption capacity was highly dependent upon the SSA and micropore volume, as well as the pore size of the adsorbents: the higher the SSA and micropore volume, the higher the adsorption capacity. As expected, the CH4 uptakes showed the same trend as the SSA results: GRHA/ACNF > rGO/ACNF > ACNF. The results revealed that the CH4 adsorption capacity of GRHA/ACNF was the highest, at 44.33 mmol/g, nearly double that of the pristine ACNFs (20.86 mmol/g) and slightly higher than that of rGO/ACNF (40.52 mmol/g) at 4 bar. Meanwhile, at 8 and 12 bar, the adsorption values improved to 58.94 and 66.40 mmol/g, respectively; as the pressure increased, the adsorption capacity also increased. These adsorption values showed great improvement over previously reported ACNF composites, proving the resultant ACNFs, with their highly heterogeneous surfaces, to be suitable adsorbents for CH4 adsorption and storage.
Phase Shifting Capacity of the Circadian Pacemaker Determined by the SCN Neuronal Network Organization

Background: In mammals, a major circadian pacemaker that drives daily rhythms is located in the suprachiasmatic nuclei (SCN), at the base of the hypothalamus. The SCN receive direct light input via the retino-hypothalamic tract. Light during the early night induces phase delays of circadian rhythms, while during the late night it leads to phase advances. The effects of light on the circadian system are strongly dependent on the photoperiod to which animals are exposed. An explanation for this phenomenon is currently lacking.

Methodology and Principal Findings: We recorded running wheel activity in C57 mice and observed large amplitude phase shifts in short photoperiods and small shifts in long photoperiods. We investigated whether these different light responses under short and long days are expressed within the SCN by electrophysiological recordings of electrical impulse frequency in SCN slices. Application of N-methyl-D-aspartate (NMDA) induced sustained increments in electrical activity that were not significantly different in the slices from long and short photoperiods. These responses led to large phase shifts in slices from short days and small phase shifts in slices from long days. An analysis of neuronal subpopulation activity revealed that in short days the amplitude of the rhythm was larger than in long days.

Conclusions: The data indicate that the photoperiodic dependent phase responses are intrinsic to the SCN. In contrast to earlier predictions from limit cycle theory, we observed large phase shifting responses in high amplitude rhythms in slices from short days, and small shifts in low amplitude rhythms in slices from long days. We conclude that the photoperiodic dependent phase responses are determined by the SCN and propose that synchronization among SCN neurons enhances the phase shifting capacity of the circadian system.
Introduction The daily revolution of the earth causes 24 hour cycles in the environmental conditions, while the annual cycle of the earth moving around the sun brings about seasonal changes. Many organisms possess an endogenous 24 hour or 'circadian' clock, which allows them to anticipate and adapt to the daily and annual environmental changes (Takahashi et al., 2001). In mammals, a major pacemaker for circadian rhythms is located in the suprachiasmatic nuclei (SCN) of the anterior hypothalamus (Ralph et al., 1990). The ability of the SCN to generate circadian rhythms is present at the single cell level and is explained by a molecular feedback loop in which protein products of period and cryptochrome clock genes inhibit their own transcription (Reppert and Weaver, 2001;Ko and Takahashi, 2006). The SCN control circadian rhythms in molecular, endocrine and physiological functions, as well as in behavior (Kalsbeek et al., 2006). Besides their role as a daily clock, the SCN are an integral part of the photoperiodic time measurement system and convey day length information to the pineal gland and other parts of the central nervous system (Carr et al., 2003;Sumova et al., 2003;Bendova and Sumova, 2006). The SCN are synchronized to the environmental light-dark cycle via the retina. Light information reaches the SCN directly via the retino-hypothalamic tract, which innervates the SCN with glutamate and pituitary adenylate cyclase activating peptide containing fibers (Morin and Allen, 2006). Synchronization to the environmental light-dark cycle is based on a time-dependent responsiveness of the SCN to light, which is most easily demonstrated in "perturbation experiments" in which animals are kept in constant darkness and subjected to discrete pulses of light. Light pulses presented during the early night induce phase delays of the rhythm, while at the end of the night, they induce advances. 
The characteristic phase-dependent light responsiveness is a prerequisite for animals to entrain to the environmental cycle, and is a common property of many organisms (Pittendrigh et al., 1984). The maximum advancing and delaying capacity depends strongly on the photoperiod to which animals are exposed (Pittendrigh et al., 1984; Refinetti, 2002; Evans et al., 2004). This finding has received surprisingly little attention, given the robustness of the photoperiodic modulation and its potential functional significance. For instance, in the hamster, the phase shifting effects of a 15-min light pulse on behavioral activity rhythms are about 2- to 3-fold larger in short winter days than in long summer days (Pittendrigh et al., 1984). One possibility is that increased light exposure in long days desensitizes the system to light at the level of the retina (Refinetti, 2002). Recently, it has become known that the organization of the SCN shows plasticity under the influence of changes in day length (Schaap et al., 2003; Johnston et al., 2005; Rohling et al., 2006b; Inagaki et al., 2007; VanderLeest et al., 2007; Naito et al., 2008). The variation in light response over the seasons could therefore also result from different response properties brought about by plasticity within the SCN itself. We performed behavioral and electrophysiological experiments and found evidence that the phase shifting magnitude is determined by the SCN. The large phase shifts observed in high amplitude rhythms in short days versus the small shifts in long days lead us to propose that synchronization among individual oscillator components enhances the phase resetting capacity.

Results and Discussion

We performed behavioral experiments to establish the phase shifting effects of light under long and short photoperiods. Running wheel activity was recorded from C57 mice kept in short and long day lengths (light:dark 8:16 h and 16:8 h).
After at least 30 days of entrainment to the light-dark cycle, the animals remained in continuous darkness for 3 days (Figure 5.1). On day 4 in darkness, the animals received a saturating 30 min white light pulse aimed at different phases of the circadian cycle. The onset of behavioral activity was used as a marker of circadian phase and defined as circadian time 12 (CT 12). Maximum delays were observed for pulses given 3 hours after activity onset in both animals from short days (shift: −2.68 ± 0.19 h, n = 6) and animals from long days (shift: −0.62 ± 0.28 h, n = 5). The magnitude of the delays was, however, significantly larger for animals in short days (p < 0.001). Light pulses towards the end of the night produced small phase advances which were not significantly different between the groups (short day advance: 0.61 ± 0.26 h, n = 8; long day advance: 0.50 ± 0.11 h, n = 9; p > 0.6). To investigate whether the small phase delays in long days may have resulted from a decrease in photic sensitivity of the circadian system as a consequence of higher photon stimulation during entrainment, we reinvestigated the phase delaying effects of light in short days and doubled the number of photons to which we exposed the animals during the entrainment period. Thus, the short day animals received the same number of photons as the long day animals, but now distributed over 8 instead of 16 hours. We found that the phase delaying effects of light were large (shift: −2.95 ± 0.19 h, n = 8) and not different from those observed in short days under normal light conditions (p > 0.1). The shifts were, however, still significantly larger than those observed in long days (p < 0.001). The results indicate that the difference in shift between long and short day animals is not attributable to an increment in photon stimulation during entrainment to long days.
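The behavioral measurement described here boils down to extrapolating the free-running activity-onset line from the days before the pulse and measuring how far the post-pulse onsets deviate from it. A generic sketch of that onset-regression idea, with synthetic numbers rather than the authors' exact procedure (negative values denote delays):

```python
import numpy as np

def phase_shift_hours(days_pre, onsets_pre, days_post, onsets_post):
    """Phase shift estimated from daily activity onsets (onset = CT 12).

    The free-running onset drift before the light pulse is fit with a line;
    the shift is the mean deviation of the post-pulse onsets from that
    extrapolated line (negative = delay, positive = advance). This is a
    generic sketch of onset-regression analysis, not the authors' exact method.
    """
    slope, intercept = np.polyfit(days_pre, onsets_pre, 1)
    predicted = slope * np.asarray(days_post) + intercept
    return float(np.mean(predicted - np.asarray(onsets_post)))

# Synthetic example: tau = 23.8 h, so onset drifts 0.2 h earlier per day in DD,
# and a light pulse in the early subjective night delays the onsets by 2.7 h.
days_pre = np.array([0, 1, 2, 3])
onsets_pre = 18.0 - 0.2 * days_pre          # clock time of activity onset, h
days_post = np.array([6, 7, 8])
onsets_post = 18.0 - 0.2 * days_post + 2.7  # onsets delayed by 2.7 h

print(round(phase_shift_hours(days_pre, onsets_pre, days_post, onsets_post), 2))  # → -2.7
```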
To investigate whether the difference in the magnitude of the phase shift in long and short days is retained in the SCN in vitro, we kept animals under long and short day lengths and prepared hypothalamic slices containing the SCN on the third day after release into constant darkness (Figure 5.2). We recorded electrical impulse frequency in the SCN with stationary electrodes and applied NMDA pulses (10 µM, 30 min duration) by switching from regular artificial cerebrospinal fluid (ACSF) to NMDA-containing ACSF.

[Figure 5.1 legend] (A, B) Examples of wheel running actograms from animals kept in short (A) and long photoperiods (B). The actograms show the wheel running activity of the mice over the 24 h day. Consecutive days are plotted on successive lines. The top bar indicates the light-dark schedule before transfer to continuous darkness (DD, indicated with an arrow). A light pulse was given on day four in DD (L, indicated with an arrow), 3 hours after activity onset (indicated in the actogram). Activity onset was defined as circadian time 12. (C, D) Phase response plots to 30 minute light pulses in short (C) and long (D) photoperiod. Phase responses are plotted as a function of the circadian time of the light pulse. Individual phase shifts are indicated by a plus symbol. The results were grouped in 3 h bins centered at CT 0, 3, 6, 9, 12, 15, 18, and 21. The average phase responses of the light pulses are indicated by squares and connected with a solid line. The time of maximal delay is at CT 15 for both long and short photoperiods, and the delay magnitude is significantly different between the two day lengths (p < 0.001). The large magnitude of the delays observed in short days is consistent with other studies (Pittendrigh et al., 1984; Refinetti, 2002).

The NMDA
receptor is of crucial importance in mediating phase shifting by light, and application of the glutamate receptor agonist NMDA to brain slices in vitro generates phase shifts of the circadian rhythm resembling photic phase responses (Colwell et al., 1991; Ding et al., 1994; Shibata et al., 1994). The timing of the NMDA pulse was based on the extrapolated behavioral activity of the animal before slice preparation and was aimed 3 hours after activity onset, where the largest shifts in behavior were observed in both photoperiods. NMDA induced a sustained increment in SCN electrical discharge in slices from both photoperiods (Figure 5.3). The relative increase in electrical activity was 32.2 ± 9.1% (n = 5) of baseline discharge in short days and 43.9 ± 8.0% (n = 5) of baseline discharge in long days. No significant differences in responsiveness to NMDA were observed (p > 0.1). Despite the similarity in acute NMDA responses, the resulting phase shifts were significantly larger in short days (−3.2 ± 0.50 h, 6 control and 5 experimental slices) compared to long days (0.0 ± 0.89 h, 6 control and 5 experimental slices; p < 0.006; Figure 5.2). We also calculated the phase shift based on a secondary phase marker, the time of half-maximum value on the rising slope of the electrical discharge peak. With this phase marker we found the same difference in phase shift between long and short day lengths, indicating the robustness of the measured differences (difference in phase shift between day lengths: 3.2 ± 0.86 h; p < 0.002). The data indicate that the phase shifting capacity of the circadian system under long and short photoperiods is determined by the SCN itself. The absence of a difference in the magnitude of the NMDA response underscores this interpretation and shows that the same increase in neuronal activity of the SCN results in a different phase shifting response.
Figure legend: Examples of extracellular multiunit recordings from the SCN in mice kept on a short photoperiod (A, C) and on a long photoperiod (B, D). Action potentials were counted in 10s bins, and are plotted as a function of circadian time, determined by activity onsets from the mice prior to slice preparation. NMDA pulses were given 3 hours after the activity onset (CT 15), on the first cycle in vitro, in slices from both short (C) and long (D) day animals. In slices obtained from short day animals these pulses induced a delay in the peak time of the rhythm on the day following the application. Peak times are indicated by a vertical line. (E) Delays obtained at CT 15 from short day animals were significantly larger than delays obtained from long day animals. The magnitude of the delay after an NMDA pulse at CT 15 was significantly different between day lengths (p < 0.01). (F) The magnitude of the behavioral delay was not different from the delay observed in vitro, for both day lengths (short day in vitro vs. behavior p > 0.3; long day in vitro vs. behavior p > 0.4).

In an additional series of experiments we treated 7 slices from long day animals with 25µM NMDA at CT 15. The acute increase in electrical activity in response to 25µM NMDA was 123 ± 16.0% as compared to baseline, which is three times larger than the response to 10µM NMDA (p < 0.003). The phase shifts in electrical activity rhythms were however not significantly different from untreated or 10µM NMDA treated slices (shift: −0.29 ± 0.72h; p > 0.68 in both cases). These results show that for long day length, the phase shifting response is not enhanced by an increment of the pharmacological stimulus. The preservation of the small shifts in slices from long days indicates an intrinsic inability of the SCN to shift in long photoperiods. The question arises as to what mechanism in the SCN underlies the photoperiodic modulation of the phase shifting capacity.
Recently it has become clear that photoperiodic encoding by the SCN (Sumova et al., 2003; Mrugala et al., 2000) is accomplished through a reconfiguration of cellular activity patterns (Schaap et al., 2003; Inagaki et al., 2007; VanderLeest et al., 2007; Naito et al., 2008; Hazlerigg et al., 2005). In long days, the activity patterns of single SCN neurons are spread in phase, rendering a broad population activity pattern, while in short days, the neurons oscillate highly in phase, which yields a composite waveform with a narrow peak (Schaap et al., 2003; VanderLeest et al., 2007). Molecular studies have shown regional differences in gene expression patterns within the SCN that increase in long days and decrease in short days (Inagaki et al., 2007; Naito et al., 2008; Hazlerigg et al., 2005). Theoretically, it follows from such a working mechanism that the amplitude of the SCN rhythm in short days is larger than the amplitude in long days: when neurons overlap in phase in short days, the maximum activity of each neuron occurs at a similar phase, so the summed activity of overlapping units produces a high multiunit frequency during the peak, while non-overlapping units lead to low activity during the trough (Rohling et al., 2006b). We measured the frequency of the multiunit activity of SCN neurons in long and short day slices and found that indeed, the maximum discharge levels are higher in short day animals (Figure 5.4). A general assumption in the field of circadian rhythm research is that high amplitude rhythms are more difficult to shift than low amplitude rhythms (Pittendrigh et al., 1991), which stands in contrast to our present findings. To critically test the observed amplitude differences, we analyzed the amplitude under long and short days in more detail, by an off-line analysis of subpopulation activity.
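The summation mechanism described above can be illustrated with a toy model (a sketch with made-up waveforms and numbers, not the authors' analysis code): identical single-cell activity bouts are summed after dispersing their peak times narrowly, as in short days, or broadly, as in long days; the broad distribution yields a lower-amplitude ensemble waveform.

```python
import numpy as np

def ensemble_waveform(phase_sd_h, n_cells=100, seed=0):
    """Sum identical single-cell activity bouts whose peak times are
    normally dispersed around CT 6 with spread phase_sd_h (hours).
    All shapes and numbers here are illustrative, not fitted to data."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 24, 1440, endpoint=False)          # one cycle, 1-min bins
    peaks = rng.normal(6.0, phase_sd_h, n_cells)          # per-cell peak times
    ensemble = np.zeros_like(t)
    for p in peaks:
        d = np.abs((t - p + 12) % 24 - 12)                # circular distance to peak
        ensemble += np.where(d < 5, np.cos(np.pi * d / 10), 0.0)  # ~10 h bout
    return ensemble

short_day = ensemble_waveform(phase_sd_h=1.0)   # narrow phase distribution
long_day = ensemble_waveform(phase_sd_h=4.0)    # broad phase distribution

amp_short = short_day.max() - short_day.min()
amp_long = long_day.max() - long_day.min()
print(amp_short > amp_long)   # narrow phase distribution -> higher ensemble amplitude
```

The same number of cells with the same individual waveforms thus produces a higher-amplitude multiunit rhythm when their phases coincide.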
To test if the amplitude differences are inherent to a change in photoperiod and are not influenced by threshold settings, we analyzed the amplitude at different threshold settings, reflecting the activity of different sizes of populations of SCN neurons (c.f. Schaap et al., 2003; VanderLeest et al., 2007). In this analysis, we could reliably compare subpopulation activity rhythms, with an equal number of action potentials contributing to the circadian waveform. The results showed that in short days, the amplitude of the rhythm was larger than in long days for any given number of spikes in the recording (Figure 5.4D).

Figure legend: (A, B) NMDA application (10µM) at CT 15 induced an increase in firing rate that was recorded by extracellular multiunit electrodes. The magnitude of the NMDA response is similar in slices from long and short day animals, and in both photoperiods a plateau was reached during the application. (C) The magnitude of the acute response to NMDA, measured as the relative increase in discharge rate, was not different between day lengths (p > 0.3).

These findings are in contrast to the general assumption that the magnitude of a phase shift is inversely related to the amplitude of the rhythm, i.e. that it is more difficult to shift high amplitude rhythms than low amplitude rhythms. This assumption is based on the theory of limit cycle oscillators, where a perturbation of similar strength changes the phase of an oscillator with low amplitude more than one with higher amplitude, because the perturbation represents a larger fraction of the radius of the circle (Aschoff and Pohl, 1978; Winfree, 2000). The question is how to explain our current findings. It could be argued that in long day length, with a wide phase distribution, the neurons have a more diverse phase shifting response to a light input signal, while in short day length, with a narrow phase distribution, neurons may respond more coherently, resulting in a larger overall shift.
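The threshold-based subpopulation comparison can be sketched as follows (synthetic spike data; function names and all numbers are ours, for illustration only): a voltage threshold is chosen so that a fixed number of the largest-amplitude spikes enters the circadian waveform, so that rhythm amplitudes can be compared at matched spike counts.

```python
import numpy as np

def threshold_for_count(spike_amps, n_spikes):
    """Voltage threshold that retains exactly the n_spikes largest events."""
    return np.sort(spike_amps)[::-1][n_spikes - 1]

def subpopulation_amplitude(spike_times_h, spike_amps, n_spikes, bin_min=1):
    """Circadian amplitude (max - min binned count) of the subpopulation
    formed by the n_spikes largest-amplitude events."""
    keep = spike_amps >= threshold_for_count(spike_amps, n_spikes)
    edges = np.arange(0, 24 * 60 + bin_min, bin_min) / 60.0   # bin edges in hours
    counts, _ = np.histogram(spike_times_h[keep], bins=edges)
    return counts.max() - counts.min()

# Synthetic recording: firing rate peaks at CT 6; spike height independent of time.
rng = np.random.default_rng(1)
times = rng.normal(6.0, 3.0, 200_000) % 24
amps = rng.exponential(20.0, 200_000) + 5.0    # arbitrary µV-like scale

amp_50k = subpopulation_amplitude(times, amps, 50_000)
amp_100k = subpopulation_amplitude(times, amps, 100_000)
```

Raising or lowering the threshold selects smaller or larger subpopulations; comparing recordings at an equal spike count removes the threshold setting as a confound, which is the point of the analysis in the text.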
Simulations were performed in which single unit PRCs were distributed over the light dark cycle according to experimentally obtained distributions of SCN subpopulations (see Supplemental Data and Figure 5.5). We used type 0 and type 1 PRCs as well as the single unit PRC described by Ukai et al. (2007). The simulations showed that the amplitude of the long day PRC is considerably smaller than the short day PRC, irrespective of the shape of the single unit phase response curve that was used in the simulations. The results from our simulations accord with our behavioral and electrophysiological results (Figures 5.6 and 5.7). Recent studies have shown that a phase resetting light pulse alters the phase relation between oscillating fibroblasts and SCN neurons (Ukai et al., 2007; Pulivarthy et al., 2007). Our results show that, vice versa, the phase relation between neurons determines the phase response of the ensemble. Together the data indicate a close relation between phase resetting behavior and the synchrony among oscillating cells.

Figure legend: (A) Maximal firing frequency in multiunit activity, recorded in slices from animals maintained in short and long days, was significantly different (p < 0.05). (B) Amplitudes of the multiunit activity rhythm, defined as the difference between maximal and minimum firing level, were significantly different between the short and long day groups (p < 0.01). (C) An analysis was performed in which the total number of action potentials contributing to the electrical activity pattern was determined. This allowed for a comparison of rhythm amplitude between the experiments for multiple sizes of subpopulations. Each line represents an increase over the lower line of a total number of 10⁵ action potentials. (D) Amplitude of electrical activity rhythms in subpopulations with a selected number of action potentials included in the recording.
The amplitude of the electrical activity in the short day group is larger than the amplitude in the long day group for recordings with an equal number of action potentials. Action potentials were counted in 60s bins. (E, F) Examples of electrical activity patterns obtained in short (E) and long (F) days, with the same total number of action potentials (50×10⁵, indicated by arrows); examples of subpopulation electrical activity patterns are indicated in E and F.

The difference in amplitude between long and short days exists for any number of action potentials contributing to the curve, and thus, for any subpopulation size. While our data suggest that the phase relationship among oscillators determines the response to a shifting pulse, we acknowledge that other mechanisms cannot be ruled out. Our explanation is parsimonious, however, as two major aspects of photoperiodic encoding by the SCN, namely changes in circadian waveform and changes in light resetting properties, can be explained by changes in phase distribution within the SCN. In summary, our findings indicate that the phase shifting capacity of the SCN expressed in long and short day length is retained in the SCN in vitro, offering an attractive model for future investigation. Our data also show that the inverse relation between the phase shifting capacity and the amplitude of the neural activity rhythm may not hold for neuronal networks in which neurons oscillate with different phases. We have shown that such networks in fact respond in the opposite way, showing a maximum phase shifting capacity when the rhythm amplitude is large and a smaller response when the amplitude is low. The data provide a clear example that neuronal networks are governed by different rules than single cell oscillators. To predict the phase response characteristics of the SCN network, we have taken into account the phase distribution among the single cell oscillators.
We realize that a more accurate prediction of the properties of the network can be obtained when the interactions between the single cell oscillators are incorporated (Johnson, 1999; Indic et al., 2007; Beersma et al., 2008). In the past few years a number of synchronizing agents have been proposed, such as VIP, GABA, and gap junctions (Colwell et al., 2003; Aton et al., 2005; Welsh, 2007; Albus et al., 2005; Long et al., 2005; Colwell, 2005), and it would be interesting to determine their role in photoperiodically induced changes in the phase resetting properties of the SCN. Our findings may be relevant not only for the photoperiodic modulation of the phase shifting capacity of the circadian system, but may have broader implications and be relevant also to observations of reduced light responsiveness and reduced circadian rhythm amplitude in the elderly.

Figure legend: (B) A fitted curve through the long and short day length distribution was used to distribute 100 single unit PRCs. The y-axis represents differently phased single unit PRCs, distributed according to the fitted curve. The blue part of each line represents the delay part of the single unit PRC, the red part represents the advance part of the single unit PRC. The left side shows the distribution for short days and the right side shows the distribution for long days. (C) The resulting simulated ensemble PRC for short and long days using type 1 single unit PRCs. The long day PRC shows a lower amplitude than the short day PRC. (D) The area under the curve of the PRC decreases exponentially when the phase distribution of the neurons increases. The area is given relative to the area under the curve when all single unit PRCs coincide, which leads to a maximum amplitude of the PRC of the ensemble, and a maximal working area. On the x-axis, the observed distributions for the short and long day lengths are indicated. The Figure shows the results for type 1 single unit PRCs.
The results indicate that the area under the curve for short days is about two times larger than for long days, consistent with experimental results (see also Figure 5.7).

Ethics Statement

All experiments were performed in accordance with animal welfare law and with the approval of the Animal Experiments Ethical Committee of the Leiden University Medical Center.

Behavioral Experiments

Mice (C57BL6) were kept under long (16h light, 8h dark) and short (8h light, 16h dark) photoperiods for at least 30 days in clear plastic cages equipped with a running wheel. The animal compartments are light-tight and illuminated by a single white fluorescent "true light" bulb with a diffuse glass plate in front. The light intensity at the bottom of the cage was ~180lux. Running wheel activity was recorded with Actimetrics software and the onset of activity was defined as circadian time 12 (CT 12). After at least 30 days in the light regime, the animals were released into constant darkness (DD). On day 4 in DD, the animals received a 30min white light pulse (180lux) at a specific CT. We have previously shown that after 4 days in constant darkness, photoperiodic effects on behavioral activity and on SCN waveform are still fully present (VanderLeest et al., 2007). For each animal in the compartment, the average onset of activity was calculated and the CT of the light pulse was determined. Running wheel activity was recorded for another 14 days after the light pulse. The phase shifts were calculated by comparing activity onset in DD before and after the light pulse. The circadian times at which the light pulses were given were binned in 3h intervals.

In vitro Experiments

Animals were housed under long and short photoperiods, as described before, for at least 30 days. Prior to the in vitro experiment, the animals were transferred to a dark compartment for 3 days.
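The behavioral phase-shift estimate described in the behavioral methods (comparing activity onsets in DD before and after the light pulse) is commonly done by fitting regression lines through the onsets and extrapolating both to the day of the pulse. The paper does not spell out its exact fitting procedure, so the sketch below, with made-up onset times, is illustrative only.

```python
import numpy as np

def phase_shift(days_pre, onsets_pre, days_post, onsets_post, pulse_day):
    """Phase shift (h): difference between the post- and pre-pulse onset
    regression lines, both extrapolated to the day of the pulse.
    Positive values mean onsets occur later, i.e. a phase delay."""
    fit_pre = np.polyfit(days_pre, onsets_pre, 1)
    fit_post = np.polyfit(days_post, onsets_post, 1)
    return np.polyval(fit_post, pulse_day) - np.polyval(fit_pre, pulse_day)

# Free-running mouse with tau ~ 23.7 h: onsets drift 0.3 h earlier per day.
# A light pulse on day 4 delays the rhythm by 2 h (onsets 2 h later afterwards).
pre_days = np.array([1, 2, 3])
pre_onsets = 12.0 - 0.3 * pre_days
post_days = np.array([6, 7, 8, 9])
post_onsets = 12.0 - 0.3 * post_days + 2.0

shift = phase_shift(pre_days, pre_onsets, post_days, post_onsets, pulse_day=4)
print(round(shift, 2))   # -> 2.0 (a 2 h delay by this sign convention)
```

Fitting lines on both sides removes the free-running drift from the estimate, so only the pulse-induced displacement of the onsets remains.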
Onset of wheel running activity was determined over these 3 days, and decapitation and subsequent dissection of the brain was performed at the end of the resting period of the animal (CT 12). Slices of 400µm were prepared with a chopper and were transferred to a laminar flow chamber that was perfused with warmed (35°C) ACSF within 6 min after decapitation (Schaap et al., 2003). The pH was controlled by a bicarbonate buffer in the ACSF and was maintained by gassing the solution and blowing warmed humidified O2 (95%) and CO2 (5%) over the slice. The slice was kept submerged and was stabilized with an insulated tungsten fork. The slices settled in the recording chamber for ~1h before electrode placement. Action potentials were recorded with 90% platinum/10% iridium 75µm electrodes, amplified 10k times and band-pass filtered (300Hz low, 3kHz high). The action potentials crossing a preset threshold well above noise (~5µV) were counted electronically in 10s bins by a computer running custom-made software. Time of occurrence and amplitudes of action potentials were digitized by a CED 1401 and stored for off-line analysis. To induce a phase shift, the recording chamber was perfused with ACSF containing 10 or 25µM N-methyl-D-aspartate (NMDA) for 30min. The timing of the NMDA application was in accordance with the light pulse presentation in the behavioral experiments: the slices were prepared on day 3 in DD and on the fourth night in DD the NMDA pulse was applied at CT 15. The estimation of CT 15 was done on the basis of the activity onsets of the animals in DD, on the days preceding the preparation of the slice.

Figure legend: (A) Experimentally obtained subpopulation distributions (VanderLeest et al., 2007) were used to distribute 100 type 1 single unit PRCs. The simulations resulted in high amplitude phase shifts for short days and low amplitude shifts for long days. Short day shifts were normalized to 100% and long day shifts were plotted relative to this value. (B) The same procedure was followed for type 0 single unit phase response curves. (C) For comparison, the experimentally obtained phase shifts in running wheel activity are depicted with the shift in short days normalized to 100% (p < 0.001).

Data Analysis

Electrophysiological data was analyzed in MATLAB using custom made software as described earlier (VanderLeest et al., 2007). The time of maximum activity was used as marker for the phase of the SCN and was determined on the first peak in multiunit activity, both for control and experimental slices. Multiunit recordings of at least 24h that expressed a clear peak in multiunit activity were moderately smoothed using a least squares algorithm (Eilers, 2003), and peak time, half maximum values and amplitude were determined in these smoothed recordings. For a more detailed analysis of rhythm amplitude, we used the stored times of occurrence and amplitudes of the action potentials. This analysis allows for an off-line selection of the size of the population of neurons that contributes to the electrical activity rhythm, through a selection of voltage thresholds (see also Schaap et al., 2003 and VanderLeest et al., 2007). In this way, we could describe the circadian activity pattern of larger or smaller subpopulations of SCN neurons. This analysis was performed in slices from long and short day animals, and allowed us to compare rhythm amplitudes in both groups with an equal number of action potentials contributing to the recording over the same time interval (c.f. Figure 5.4). The thresholds were determined so that each trigger level includes 10⁵ more spikes than the previous level. For all experiments the deviation from the aimed number of action potentials selected for was <5%. Statistical analyses were performed in Origin 7 (OriginLab Corporation) and Excel (Microsoft). All values are stated as average ± standard error of the mean (s.e.m.). Whenever the calculated value is the result of a difference between groups, such as in the calculation of in vitro phase shifts, variances were considered unequal, rendering a conservative test. P-values were calculated with a two-sided t-test and were considered to be significant when p < 0.05.

Supplemental Data - Simulations

Both molecular and electrophysiological studies have provided evidence that photoperiodically induced waveform changes observed at the population level (Mrugala et al., 2000; Sumova et al., 2003) are caused by a reconfiguration of single cell activity patterns (Schaap et al., 2003; Hazlerigg et al., 2005; Inagaki et al., 2007; VanderLeest et al., 2007; Naito et al., 2008). In short day length single units oscillate highly in phase, while in long days they are more spread out over the circadian cycle. Because in short days the phase distribution among neurons is narrow, light information will reach SCN neurons at a similar phase of their cycle. When the distribution is broad, however, light information reaches neurons at different phases of their cycle. We performed simulations both with type 1 and with type 0 PRCs in the distributions. When distributing 100 type 1 PRCs, the amplitude for the long day length PRC is 52.5% of the amplitude for the short day length PRC; for type 0 PRCs, this ratio is 43% (Figure 5.6).

Figure legend: (A) Quantitative analysis of the PRC based on type 1 single unit PRCs by a measurement of the area under the curve. For short days, the area was normalized to 100% and for long days, the area was plotted as a fraction of this value. (B) The same procedure was repeated for type 0 single unit PRCs. (C) The relative area under the curve from experimentally obtained behavioral PRCs. The area under the PRC in long day length is 45% of the normalized area in short day length.
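The "least squares algorithm" cited in the Data Analysis section is Eilers' (2003) Whittaker smoother, which finds the series z minimizing |y − z|² + λ|Dᵈz|², where Dᵈ is a d-th order difference operator. A minimal dense-matrix sketch follows; the λ value and the synthetic data are illustrative, and the authors' MATLAB implementation is not shown in the text.

```python
import numpy as np

def whittaker_smooth(y, lam=1e5, d=2):
    """Penalized least-squares smoother (Eilers, 2003): the smoothed series z
    minimizes |y - z|^2 + lam * |D^d z|^2, solved here with dense algebra."""
    m = len(y)
    D = np.eye(m)
    for _ in range(d):
        D = np.diff(D, axis=0)            # d-th order difference matrix
    return np.linalg.solve(np.eye(m) + lam * D.T @ D, y)

# Synthetic multiunit recording: one 24h cycle in 1-min bins, Poisson noise
# around a half-wave activity profile peaking at CT 6 (values are made up).
rng = np.random.default_rng(0)
t = np.linspace(0, 24, 1440, endpoint=False)
true_rate = 50 + 40 * np.clip(np.cos(2 * np.pi * (t - 6) / 24), 0, None)
y = rng.poisson(true_rate).astype(float)

z = whittaker_smooth(y, lam=1e5)
peak_time = t[np.argmax(z)]                                         # phase marker 1
half_rise = t[np.argmax(z >= z.min() + 0.5 * (z.max() - z.min()))]  # phase marker 2
```

The peak time and the half-maximum crossing on the rising slope correspond to the two phase markers used in the paper; larger λ gives heavier smoothing.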
The simulations revealed that irrespective of the type of single unit PRC, a broad distribution of cellular oscillations, corresponding to long days, results in a low amplitude PRC of the ensemble, and a narrow distribution, corresponding to short days, results in a high amplitude PRC of the ensemble (Figure 5.5). These results were independent of the number of single unit PRCs that were used in the simulations, although small deviations occurred for low numbers (n < 40). The simulated differences in the magnitude of the shifts resembled the experimentally obtained data (Figure 5.6). We have also measured the area under the delay and advance part of the PRC for long and short day length PRCs for both the simulations and the experimentally obtained data (Figure 5.7). The area under the simulated long day PRC curve is about 50% of the area under the short day PRC. This was true for both types of single unit PRCs that were used to construct the ensemble PRC. For type 1 single unit PRCs, the area under the curve of the simulated long day PRC was 55.9% of the area under the curve of the short day PRC. For type 0 single unit PRCs, the area under the curve of the simulated long day PRC was 53% of the short day PRC. The results from these simulations were independent of the number of single unit PRCs and in accordance with the experimentally obtained data (Figure 5.7).

Simulation Methods

Simulations of a PRC for short day length and long day length were performed in MATLAB by distributing 100 normalized single unit phase response curves over the day. Two types of single unit PRCs were used. The first PRC consisted of a 12h dead-zone (where no phase responses can be induced) and a 12h sinusoidal responsive part, in accordance with the type 1 light pulse PRC (Johnson, 1999).
The other single unit PRC was in accordance with a type 0 light pulse PRC, consisting of a 12h dead-zone followed by an exponential function with an asymptote at CT 18, and a maximum shift of 12h (Johnson, 1999). The distributions that were used for long and short day lengths were taken directly from experimentally described subpopulation distributions in long and short photoperiods (VanderLeest et al., 2007). The peak times of these subpopulations were used to fit a distribution curve. This curve was used to distribute 100 single unit PRCs over the circadian day. In addition, simulations were performed using single unit PRCs without a dead-zone (c.f. Ukai et al., 2007; data not shown). We have measured the area under the curve, which is the surface of the delay and advance part of the PRC. The surface is an indication for the phase shifting capacity of the circadian system. The equation for this calculation is the first integral of a curve over a
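A minimal Python re-implementation of this simulation scheme (a normal phase distribution stands in for the experimentally derived subpopulation distribution, and the type 1 PRC shape is only schematic) reproduces the qualitative result that broad phase dispersion shrinks the ensemble PRC:

```python
import numpy as np

def single_unit_prc(ct):
    """Schematic type 1 single-unit PRC: dead zone from CT 0-12, then a
    sinusoidal responsive part over CT 12-24 (delays first, then advances)."""
    ct = np.asarray(ct) % 24
    response = -np.sin(2 * np.pi * (ct - 12) / 12)
    return np.where(ct >= 12, response, 0.0)

def ensemble_prc(phase_sd_h, n_cells=100, seed=0):
    """Average n_cells single-unit PRCs whose phases are normally dispersed
    with spread phase_sd_h hours (an assumption standing in for the fitted
    experimental distribution used in the paper)."""
    rng = np.random.default_rng(seed)
    offsets = rng.normal(0.0, phase_sd_h, n_cells)
    ct = np.linspace(0, 24, 241)
    return ct, np.mean([single_unit_prc(ct - o) for o in offsets], axis=0)

ct, prc_short = ensemble_prc(phase_sd_h=1.0)   # narrow distribution (short days)
_, prc_long = ensemble_prc(phase_sd_h=4.0)     # broad distribution (long days)

# Area under the curve (delay plus advance portions) as phase-shifting capacity.
area_short = np.abs(prc_short).mean() * 24.0
area_long = np.abs(prc_long).mean() * 24.0
print(area_long < area_short)   # broad phase dispersion shrinks the ensemble PRC
```

With broad dispersion, the delay region of some cells overlaps the advance region of others, so the responses partially cancel in the ensemble average; the exact long-to-short area ratio depends on the assumed spreads, whereas the paper obtained roughly a factor of two with the experimentally derived distributions.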
Associations between Work–Family Balance, Parenting Stress, and Marital Conflicts during COVID-19 Pandemic in Singapore

As part of the "Circuit-breaker" social distancing measure to address COVID-19, the government of Singapore closed schools and workplaces from April to May 2020. Although this helped reduce transmission rates, for working parents, this period had been a challenging experience of working from home while providing care for children full-time. Problems in the work-home interface can have a significant impact on parenting and marital harmony. We analyzed data from 201 married and employed parents in Singapore using online surveys. Latent profile analysis was used to identify profiles of parents' work–family balance (WFB) and spousal and employer support. Linear regression was used to examine links between profiles with parenting stress and marital conflicts. Results indicated three distinct profiles of WFB and social support levels: (a) Strong (43%), (b) Moderate (38%), and (c) Poor (19%). Mothers were more likely than fathers to be in the Moderate and Poor profiles. One key finding is that profiles characterized by poorer WFB were found to be linked with higher parenting stress and increased marital conflicts. There are important variations in parents' abilities to balance work and family and levels of social support received. Lock-downs can affect parenting and marital harmony, especially for parents with poor WFB and weak social support. Any attention given to supporting working parents is vital and urgent to counter any problems in the work–family interface during a lockdown.

• A total of 258 parents were surveyed during a COVID-19 partial lockdown in Singapore about their experiences in parenting and working from home.
• Poorer work-family balance (WFB) was found to be linked with higher parenting stress and increased marital conflicts.
• Mothers were more likely than fathers to be in the Moderate and Poor profiles of WFB and social support levels.
• Any attention given to supporting working parents is vital and urgent to counter any problems in the work-family interface during a lockdown.

The coronavirus (SARS-CoV-2) has caused a pandemic of acute respiratory syndrome (COVID-19) with more than 6.6 million people infected as of June 5, 2020 (The New York Times, 2020). In Singapore, the first case of COVID-19 was detected in January 2020. By June 5, 2020, Singapore had more than 37,183 cases among its population of 5.7 million people. Because of the increasing rate of transmission over March 2020, on April 3, 2020, the Singapore government implemented a month-long safety distancing measure termed "Circuit-breaker" (Channel News Asia, 2020a). Under Circuit-breaker, schools, childcare facilities, businesses, and workplaces were closed and people were encouraged to stay at home. As a result, many parents attempted to work from home remotely while providing care to their children. Subsequently, the Circuit-breaker was extended for a second month in May 2020 to further reduce transmission rates in the community (Mohan, 2020). Although this extension was necessary from a public health perspective, for parents it meant prolonging their telecommuting at a time when their resources, split between work and childrearing at home, are stretched to their limits. Researchers such as Fisher et al. (2020) have described how physical distancing measures can be detrimental to work and family life. Others such as Coyne and team (2020) described the stressful "collision of roles, responsibilities, and expectations" (i.e., as a parent, spouse, employee, teacher) experienced by parents during this pandemic even as they face an uncertain future. As the stress of balancing work with full-time childrearing at home increases, some experts have warned about the risk for increased marital conflict and domestic violence during this period when families remain at home with reduced community contact (Campbell, 2020).
Indeed, many countries impacted by COVID-19, including Singapore, are reporting an increase in cases of spousal violence and child abuse (Agrawal, 2020; Channel News Asia, 2020b). The present study used indicators that measured working parents' perceptions of support from their spouse and their employers and how well they are balancing work with family at home. Using these indicators, we identified profiles that represent how well parents are managing working at home with parenting during the COVID-19 pandemic in Singapore. We then examined sociodemographic and substantive characteristics that are associated with membership in these profiles. Lastly, we looked at how these profiles are associated with family outcomes that include parenting stress and marital conflict.

The Work-Home Resources Model

To understand the multi-faceted work and caregiving demands that working parents experienced during the Circuit-breaker in Singapore, we draw on ten Brummelhuis and Bakker's (2012) work-home resources model. The work-home resources model applies Hobfoll's (2002) conservation of resources (COR) theory to the work-home interface, describing the dual processes of work-home enrichment ("gain spirals") and work-home interference ("loss spirals"). Specifically, work-home enrichment occurs when contextual resources from the work or home domain lead to the development of personal resources, which subsequently facilitate outcomes in the other domain. For instance, growth in job skills or career advancements produces positive mood that improves working individuals' emotional functioning at home. Conversely, work-home interference occurs when contextual demands in the work or home domain deplete personal resources, so these resources are not available for individuals to function optimally in the other domain (e.g., unconventional work hours increase individuals' fatigue, which in turn affects psychological availability to their family members).
The work-home resources model further distinguishes the types of resources based on the work-home interface. Contextual resources are external to the self and can be found in the social environment, for example, social support offered by others such as spousal support and employer support. Personal resources are proximate to the self, such as skills, knowledge, attention, and cognitive energy. Lastly, macro resources refer to characteristics of the larger economic, social, and cultural system in which an individual is embedded. Macro resources (e.g., social equality and public health) are more stable than other contextual resources and are not usually within the control of individuals.

Circuit-breaker has abruptly imposed remote working policies that require many working parents in Singapore to work at home, blurring the boundaries between their work and family roles (Borg et al., 2020; Restubog et al., 2020). The closure of childcare and schools has forced many of them to take on full-time child-caring responsibilities and homeschool instruction, alongside adjusting to their new work-from-home arrangements. The work-home resources model tells us that when faced with intense work and family demands (or contextual demands), individuals are more prone to lose resources as they need to utilize their personal resources (e.g., physical energy, mental resilience, attention, and time) to deal with the demands (Hobfoll, 2002). When their personal resources are depleted, they are less likely to function well in both their work and home domains, leading to work-to-home or home-to-work interference (ten Brummelhuis & Bakker, 2012). Several studies in the work-family literature have examined how stressful experiences at work affect individuals' functioning at home, especially in the domains of parenting and marital relationships (Costigan et al., 2003; Fellows et al., 2016; Greenberger et al., 1994). For example, Costigan et al.
(2003) found that poor interpersonal atmosphere and low job morale at the workplace increased negative parenting affect and behaviors among married couples transitioning into parenthood. Greenberger et al. (1994) observed parent-child interactions and found decreased emotional availability of parents when job stressors (e.g., time urgency) increased. In a meta-analysis of 33 studies, work-family conflict was found to have a significant impact on marital quality, which included marital satisfaction and relationship quality (Fellows et al., 2016). Correspondingly, in the present study, we focused on two home outcomes: parenting stress and marital conflict.

Parenting Stress

Parenting stress is defined as a psychological reaction that occurs when parents experience parental demands and do not have the resources (e.g., energy, skills, and time) to meet these demands (Holly et al., 2019). Studying parenting stress is important because it is a key determinant of parenting behaviors (Abidin, 1992), especially harsh parenting that may lead to subsequent child maltreatment (Chung et al., 2022). Parenting stress is conceptually distinct from other forms of stress that a parent might experience (e.g., marital stress), and may be considered a home outcome in ten Brummelhuis and Bakker's (2012) work-home resources model. Specifically, parenting stress may arise from a parent's appraisal of contextual demands (or stressors) associated with their parenting role, such as insufficient personal resources (e.g., depleted physical energies, time, and parenting skills) to meet the demands of caring for young children.

Marital Conflict

Marital conflict, also broadly referred to as marital discord, tends to refer to the conflict, disharmony, or lack of parental agreement between married parents of children (Reid & Crisafulli, 1990). Marital conflict can range from verbal to physical abuse, and is generally associated with poorer health outcomes for the couples involved (Shrout et al., 2019).
In the work-home resources model, marital conflict can be considered a home outcome (ten Brummelhuis & Bakker, 2012). Similar to parenting stress, the likelihood of marital conflict occurring during Circuit-breaker is heightened because remote working and home-based learning are additional contextual demands that working parents have to deal with, increasing the tendency for their personal resources to be depleted. For example, parents who have to supervise their school-age children's home-based learning on top of working remotely would have used up much of their physical and cognitive energies by the end of the day. The reduced availability of resources, in turn, leaves fewer resources for parents to communicate with their spouses or contribute to household chores, potentially leading to conflict between both parties (see Carroll et al., 2013; Stevens et al., 2001).

Spousal Support, Employer Support, and Work-Family Balance

Informed by the work-home resources model, we examine spousal support, employer support, and work-family balance as predictors of parenting stress and marital conflict outcomes. Spousal support and employer support are contextual resources that are present in the home and work environments respectively (ten Brummelhuis & Bakker, 2012). Spousal support, which is a form of family support, typically includes enriching experiences such as a spouse listening to one's work experiences or stepping in with household chores. Prior studies by Gayathri and Karthikeyan (2016) and Siu et al. (2010) found that spousal support facilitated home-to-work enrichment, indicating that individuals who received spousal support were able to use these resources to buffer any stress that arises, or accumulate other resources (e.g., energy) to perform their parental and spousal responsibilities. Aycan and Eskin's (2005) study also indicated a direct positive association between spousal support and marital satisfaction.
Therefore, despite the sudden changes brought about by Circuit-breaker, we hypothesize that working parents who receive spousal support are less likely to experience parenting stress and marital conflict. Unlike spousal support, which stems from the home domain, employer support comes from the work domain. Employer support, a form of organizational support, typically refers to family-friendly policies and practices (e.g., flexible work arrangements) or the extension of organizational benefits to family members. Generally, studies have found that organizational support led to reduced work-to-home interference and increased work-to-home enrichment, particularly for women (Clark et al., 2017; Lapierre et al., 2018). Interestingly, a study by Aycan and Eskin (2005) found that employer support reduced work-to-home interference for men but not for women. Similar to spousal support, we also hypothesize that working parents who receive employer support during Circuit-breaker are less likely to experience parenting stress and marital conflict, as they can draw on their personal resources to become involved parents and spouses. Lastly, we also examine work-family balance as an antecedent of parenting stress and marital conflict. Defined as "the individual perception that work and non-work activities are compatible and promote growth in accordance with an individual's current life priorities" (Kalliath & Brough, 2008, p. 326), work-family balance has been shown to lead to increased family satisfaction and functioning (Brough et al., 2020; Chan et al., 2016). Even though the popular media has often reported that work-family balance has diminished in light of the COVID-19 pandemic and Circuit-breaker, we draw particular attention to the current conceptualization of work-family balance, which emphasizes perceptions as opposed to objective measures of "balance".
Importantly, in adopting this conceptualization of work-family balance, we recognize that perceptions of "balance" are highly subjective and malleable, and tend to change over time due to different life priorities. Based on the work-home resources model, work-family balance can also be considered a contextual resource that promotes the accumulation of personal resources, leading to work-home enrichment (ten Brummelhuis & Bakker, 2012). Therefore, we hypothesize that working parents with better work-family balance are less likely to experience parenting stress and marital conflict during Circuit-breaker.

Patterns of Work-Family Balance, Spousal, and Employer Support

Existing studies have mostly focused on the main effects of work-family balance, spousal, and employer support on home outcomes. Specifically, they have either aggregated the scores of these predictors or statistically controlled for the influence of one or more predictors to study the main effect of another (e.g., Aycan & Eskin, 2005; Clark et al., 2017). Far less is known about how patterns of these predictors can take form and their combined influence on home outcomes. This gap in research is unfortunate because it is realistic to expect that working parents experience varying levels of work-family balance, spousal support, and employer support (this is also suggested in the work-home resources model). Identifying disparate patterns of working parents' experiences of work-family balance and of spousal and employer support may provide a more accurate understanding of their joint impact on home outcomes.

Hypotheses of Study

To identify these patterns, we used latent profile analysis (LPA) to identify latent profiles of working parents with similar ratings on the three indicators of work-family balance, spousal support, and employer support. We expected to find a profile of parents with higher ratings on all three indicators and a profile that reported lower ratings on all indicators.
Second, we examined the associations between these latent profiles and sociodemographic variables and parents' perceived impact of COVID-19 on their finances and psychological health. We expected that parents who were more affected by COVID-19 would be more likely to belong to latent profiles characterized by lower ratings on work-family balance and on spousal and employer support. Third, we examined the relationships between the latent profiles and the two outcomes: parenting stress and marital conflict. We hypothesized that parents with membership in profiles characterized by higher work-family balance and better spousal and employer support would report lower levels of parenting stress and a lower likelihood of marital conflict during the period of Circuit-breaker.

Data and Sample

Data were analyzed from an online survey that we created and disseminated to parents in Singapore from April 22, 2020, to May 5, 2020. To be eligible for the study, respondents had to be at least 18 years old, living in Singapore with at least one child aged 12 years or younger, and be Singaporean citizens or permanent residents. Only one respondent from each household completed the survey. The online survey was disseminated using a website link hosted on a Qualtrics server. We reached potential respondents via advertisements on Facebook and online groups and community organizations associated with families in Singapore. In total, 268 respondents completed the survey. Because of the present study's aims, we excluded (a) caregivers who were not parents (seven excluded), (b) parents who were not married (two excluded), and (c) parents who were not employed (58 excluded). This left us with an analytical sample of 201 respondents. The participant information sheet provided a detailed explanation of the study and we obtained consent from all respondents.
This study was approved by the Institutional Review Board (IRB) at the University of North Carolina at Chapel Hill. No incentives or compensation were provided to respondents for participating in the survey.

Measures

The survey consisted of 50 questions and took about 12 min for the respondents to complete. Survey questions were related to work, family life, parenting, and demographic information. Descriptive statistics of the measures are provided in Table 1.

Latent profile indicators

Work-family balance, spousal support in work, and employer support were each measured using a single item constructed for the survey. Respondents were presented with the following three statements: (a) "I can balance my work at home and parenting well", (b) "I get enough support from my spouse while working at home", and (c) "My employer gives me flexibility and support that helps my parenting". Respondents rated their agreement with the statements on a 4-point Likert scale ranging from 1 = Strongly disagree to 4 = Strongly agree.

Marital conflict

Verbal arguments or conflict with the spouse in the past weeks was measured using one binary item. Parents were asked if there had been an increase in verbal arguments or conflicts with their spouse in the past weeks, with two possible responses: 1 = Yes, increase and 0 = No increase. We acknowledge that the use of a binary item to measure any increase in marital conflict may not capture different aspects of marital conflict (e.g., negative emotional communication, differences over money management, or disagreements in use of leisure time and childrearing; see Shrout et al., 2019) and does not allow for a range of responses. Our subsequent results for marital conflict should be interpreted in view of this limitation.

Parental Stress Scale (PSS)

The PSS measures an individual's perceptions and feelings of stress directly associated with being a parent (Berry & Jones, 1995).
The PSS has been found to have strong psychometric properties, including an internal reliability of Cronbach's α = 0.89 in a validation study with a Hong Kong-based Chinese sample (Cheung, 2000), and strong criterion validity with other parental stress scales such as the Parenting Stress Index (Berry & Jones, 1995). Parents responded to statements about their parenting over the past weeks on a 4-point Likert scale (responses ranging from 1 = Never to 4 = Often). Examples of these statements included: "Caring for my children takes more time and energy than I have to give"; "I sometimes worry whether I am doing enough for my child(ren)"; and "I feel overwhelmed by the responsibility of being a parent". A composite score was created by averaging the items (α = 0.74), with higher scores indicating higher parental stress (M = 2.43, SD = 0.46).

Coronavirus Impacts Questionnaire (CIQ)

The CIQ was developed as one of several social-psychology-relevant questionnaires to measure how people in the United States have been impacted by COVID-19 and social distancing (Conway et al., 2020). Confirmatory factor analysis indicated an excellent 3-factor structure for the 9-item version of the CIQ. The scale had good face validity and strong internal reliability within each factor (α scores ranged from 0.76 to 0.93). The three factors of the CIQ are (a) financial impact, (b) resource impact, and (c) psychological impact. Examples of items from each of the factors are "The Coronavirus (COVID-19) has impacted me negatively from a financial point of view", "I have had a hard time getting needed resources (food, medicine) due to the Coronavirus (COVID-19)", and "The Coronavirus (COVID-19) outbreak has impacted my psychological health negatively". Respondents rated these statements on a 4-point Likert scale ranging from 1 = Not true of me at all to 4 = Very true of me. In this study, we used the shortened 6-item version of the CIQ.
The shortened version contains two items from each of the three factors (i.e., financial, resource, and psychological impact). A composite score was created by averaging the six items (α = 0.73), with higher scores indicating that the respondent had experienced a greater overall impact on their life due to the pandemic (M = 1.97, SD = 0.60).

Controls

Using various sociodemographic variables (Table 1), we controlled for variations in outcomes that may be attributed to differences in respondents' background characteristics. These included parents' sex (binary variable where 0 = female and 1 = male), ethnic group (i.e., Chinese, Malay, Indian; recoded into a binary variable where 0 = non-Chinese and 1 = Chinese because of small numbers), age (continuous variable in years), educational level (binary variable where 0 = less than university degree and 1 = university degree), monthly household income (continuous variable in Singapore dollars), number of caregivers at home (binary variable where 0 = two or fewer caregivers and 1 = more than two), presence of a domestic helper in the household (binary variable where 0 = Yes and 1 = No), age of the child(ren), and the number of children and caregivers in the household (count variables).

Analytical Method

Latent profile analysis (LPA) was used to identify profiles of work-family balance, spousal support, and employer support using continuous indicators. LPA is a person-centered method that is appropriate for exploring unobserved heterogeneity or potential subgroups in samples (Chung et al., 2020; Kainz et al., 2018). In the present study, up to five latent profile solutions were estimated to identify the optimal solution. The Bayesian information criterion (BIC) was used as a measure of the relative fit across different profile solutions (Schwarz, 1978), with lower values indicating better relative model fit (Collins & Lanza, 2010).
The Bootstrap Likelihood Ratio Test (BSLRT) was also used to contrast the fit of neighboring profile solutions (i.e., comparing the k-profiles model with the k-1-profiles model; Berlin et al., 2014). p values derived from the BSLRT were used to determine whether there was a statistically significant improvement in fit from the inclusion of an additional profile. The sample size of the smallest profile was also evaluated, since a profile with a small sample size (i.e., <1% and/or <25) may have less precision and low power (Berlin et al., 2014). Entropy and mean posterior probability values were also examined to assess the classification certainty associated with each profile solution; values closer to 1 reflect better classification certainty (Berlin et al., 2014). After identifying the optimal profile solution, multinomial logistic regression was used to test associations between membership in the latent profiles (categorical variable) and a set of sociodemographic covariates and the impact of COVID-19 (this also serves as a form of construct validation for the selected latent profile solution). Then, to examine the associations between home outcomes and the latent profiles: (a) multiple linear regression was used to model the associations between parental stress and the latent profiles, while (b) logistic regression was used to model the associations between marital conflict (binary outcome variable) and the latent profiles. Sociodemographic covariates were included in both models. There were no missing data for any variables used in the analysis. Mplus 8.4 (LPA) and Stata 16.1 (all other analyses) software packages were used for the analyses.

Table 1 shows the sociodemographic characteristics of this sample of working and married parents. There were more fathers (61%) than mothers. Most of the parents were Chinese (81%) and the largest age group was 18-35 years old (37%). Most parents had at least a university degree (87%).
A total of 65% of the parents earned a monthly household income of more than S$8000 (about US$5600). With the median income of a resident household in Singapore in 2019 at about S$7981 according to the Singapore Department of Statistics (DOS, 2020), this sample consisted mostly of families with incomes higher than those of more than 50% of households in Singapore. The youngest child was mostly aged 0-3 years (46%). A total of 51% of parents had two or more children aged 12 or younger and 57% had up to two caregivers in the household. A total of 39% of parents reported having a domestic helper at home.

Results

We estimated one to five latent profile solutions to determine the optimal solution. Table 2 shows the results from this profile enumeration process. The BIC values decreased from one profile to three profiles but increased from four profiles to five profiles, indicating that a larger number of profiles yielded a better fit only up to three profiles. The BSLRT test was statistically insignificant (p = 0.15) when assessing the addition of the fourth profile, indicating that the three-profile solution might be more optimal. Furthermore, the four-profile solution produced one profile with a small sample size (i.e., n = 8; about 4% of the full sample), indicating potential over-extraction (Petras & Masyn, 2010) and vulnerability to low power (Berlin et al., 2014). The three-profile solution had the highest entropy value of 0.99, indicating strong classification certainty. The mean posterior probability values ranged from 0.99 to 1.00, indicating strong class separation (Asparouhov & Muthén, 2014). Thus, the three-profile solution was selected as optimal. Table 3 shows the average levels of work-family balance, spousal support, and employer support across the three profiles, expressed in raw means and standardized Z scores (i.e., sample mean set to 0 with a standard deviation of 1; Bauer & Shanahan, 2007). Figure 1 visually plots the scores.
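The entropy statistic used above to judge classification certainty can be computed directly from the matrix of posterior class-membership probabilities. A minimal sketch of the commonly used relative-entropy formula; the two posterior matrices are made up to contrast sharp versus uncertain classification, not taken from the study:

```python
import numpy as np

def relative_entropy(post):
    """Relative entropy for a mixture model: 1 - H / (n * log K), where H is
    the total Shannon entropy of the (n, K) posterior-probability matrix.
    Values near 1 mean near-certain class assignments; near 0 means
    assignments no better than uniform guessing."""
    n, k = post.shape
    eps = 1e-12  # guard against log(0)
    h = -np.sum(post * np.log(post + eps))
    return 1.0 - h / (n * np.log(k))

# Near-certain assignments (as in the paper's 3-profile solution, entropy
# 0.99) give a value close to 1; uniform posteriors give a value close to 0.
sharp = np.array([[0.99, 0.005, 0.005]] * 100)
fuzzy = np.full((100, 3), 1 / 3)
print(round(relative_entropy(sharp), 3))
print(round(relative_entropy(fuzzy), 3))
```

The same matrix also yields the mean posterior probabilities reported alongside entropy: average, within each assigned class, the probability of the class each case was assigned to.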
Results in Table 3 indicate substantive differences across each latent profile in terms of raw means and standardized Z scores (i.e., standard deviation units). In Profile 1 (Poor Support and Balance, n = 38, 19%), parents reported the lowest levels of support from spouses (Z = −0.74) and employers (Z = −0.66) as well as the poorest work-family balance (Z = −1.55). In Profile 2 (Moderate Support and Balance, n = 76, 38%), parents reported levels of spousal support (Z = −0.17), employer support (Z = −0.13), and work-family balance (Z = −0.35) that were slightly lower than but close to the sample means. In Profile 3 (Strong Support and Balance, n = 87, 43%), parents reported the highest levels of support from spouses (Z = 0.47) and employers (Z = 0.41) as well as the strongest balance between work and family (Z = 0.99). Mean difference tests in Table 3 indicate that the three profiles differ significantly, with Profile 1 having the lowest means and Profile 3 the highest means compared to the other profiles. Table 4 shows the results of a multinomial logistic regression used to examine the associations between latent profile membership and a set of sociodemographic variables and the impact of COVID-19. Profile 3 (Strong Support and Balance) was chosen as the reference profile to which the other profiles were compared across associations with the model covariates. In this step of the multinomial logistic analysis, we treated the latent profiles as an observed variable and used this variable to examine its associations with auxiliary variables (also known as the classify-analyze approach) instead of the 3-step procedure suggested by Bolck et al. (2004). While the latter procedure can account for classification errors, we think such errors are unlikely here since our high entropy value of 0.99 indicates clear profile separation for our 3-profile model.
Parents' age, race, education, family income, presence of a domestic helper, and the number of caregivers in the household did not statistically predict membership in any of the profiles. For mothers relative to fathers, the relative risk of membership in Profile 1 (Poor Support and Balance) and Profile 2 (Moderate Support and Balance) would be expected to increase by a factor of 6.35 and 3.05, respectively, while holding the other variables in the model constant. In other words, mothers are more likely than fathers to be in the profiles characterized by low and moderate levels of spousal and employer support and work-family balance (Profiles 1 and 2). With a one-year increase in the child's age, the relative risk of being in Profile 1 or 2 would decrease by a factor of 0.60 and 0.72, respectively, holding other variables in the model constant. In other words, parents with older children are less likely to be in profiles characterized by low and moderate levels of support and balance (Profiles 1 and 2). Parents who reported a greater impact of COVID-19 were more likely to be in Profile 2 (increase by a factor of 2.78) than in Profile 3. Parents with more children in the household were also more likely to be in Profile 2 (factor of 1.63) than in Profile 3. In summary of the results in Table 4, parents in Profile 3 (i.e., those who reported the highest levels of spousal and employer support and work-family balance) were more likely to be fathers, were less impacted by COVID-19, had older children, and had fewer children at home. With respect to our second study aim, Table 5 shows the results from the multiple linear regression and logistic regression models used to examine the associations between home outcomes and latent profiles, controlling for sociodemographic covariates. For the parenting stress outcome, parents in Profile 1 (B = 0.39, p < 0.001) and Profile 2 (B = 0.32, p < 0.001) had higher parenting stress than those in Profile 3.
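The relative-risk factors reported for the multinomial model apply per unit change in the covariate and, under the model's no-interaction assumption, compound multiplicatively. A quick arithmetic check using two of the values reported above for Profile 1 membership (6.35 for mothers vs. fathers, 0.60 per additional year of child's age):

```python
# Relative-risk ratios (RRRs) for Profile 1 vs. the reference Profile 3,
# as reported in the text: mothers vs. fathers, and per year of child's age.
rrr_mother = 6.35
rrr_child_age = 0.60  # per one-year increase

# RRRs compound multiplicatively (holding other covariates constant and
# assuming no interaction term). Three extra years of child's age alone:
age_effect = rrr_child_age ** 3
# A mother whose child is 3 years older than the baseline:
combined = rrr_mother * age_effect
print(f"{age_effect:.3f}")   # 0.216
print(f"{combined:.3f}")     # 1.372
```

So three additional years of child's age roughly offset the higher relative risk associated with being a mother in this hypothetical comparison, which is only an illustration of how such coefficients combine, not a result reported in the paper.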
For marital conflict, parents in Profile 1 (OR = 2.62, p < 0.05) and Profile 2 (OR = 2.62, p < 0.001) were more likely than parents in Profile 3 to report an increase in marital conflict.

Discussion

As part of the Circuit-breaker measure, the government of Singapore closed schools, childcare facilities, and workplaces for almost two months, from April 2020 to May 2020. The goal of Circuit-breaker was to reduce the transmission of COVID-19 in the community. However, for many working parents, this period has been a challenging experience of working remotely from home while providing care for their children full-time. In this study, informed by the work-home resources model, we first identified patterns of working parents' perceived levels of work-family balance, spousal support, and employer support. We had expected to find different profiles of high and low levels of work-family balance and support. We hypothesized that parents who experienced a greater impact of COVID-19 would be more likely to be classified in profiles characterized by lower levels of work-family balance and spousal and employer support. Finally, we examined associations between these profiles and home outcomes. We expected that parents in profiles characterized by higher levels of work-family balance and support would report lower levels of parental stress and less marital conflict during the period of Circuit-breaker. All our hypotheses were supported by the results of our study. First, we found three distinct profiles indicating notable variations in the levels of work-family balance and the support that working parents received from their spouses and employers. The two most prevalent profiles are the Strong Support and Balance profile and the Moderate Support and Balance profile. In the Strong Support and Balance profile, working parents are characterized by higher than average levels of work-family balance and support from spouses and employers.
Parents in the Moderate Support and Balance profile are characterized by lower than but close to the average levels of work-family balance and support. On the other hand, the Poor Support and Balance profile is the least prevalent, with parents in this profile reporting the lowest levels of work-family balance and support from spouses and employers. These distinct profiles point to heterogeneity in the work-family balance and support experienced by parents during the Circuit-breaker. Though popular media in Singapore has reported that work-family balance diminished for working parents during this period of pandemic and Circuit-breaker (The Straits Times, 2020b), some parents have found ways to balance telecommuting with family responsibilities. At the same time, they received strong organizational support from employers, indicating that certain companies are sympathetic to the difficulties faced by working parents and have extended family-friendly policies and practices to their employees during this pandemic. The Circuit-breaker is an unprecedented event in Singapore; before it, most working parents did not work from home. But since this pandemic and the Circuit-breaker, more parents have become open to the idea of working from home because they have experienced good support and managed to balance work with their family responsibilities well. This could explain why, in a survey of 9000 working individuals during the Circuit-breaker, about 15% of respondents expressed that they wanted to continue working entirely from home while 75% preferred varying amounts of time working from home. Only 10% expressed that they did not want to work from home at all (The Straits Times, 2020a). It is also equally informative to consider the types of profiles that we did not find in the present study.
First, we did not find any profiles characterized by opposite levels of work-family balance and support (e.g., high scores on work-family balance but low scores across supports). This is consistent with the work-home resources model, where higher contextual resources (i.e., various types of support) are expected to contribute to higher work-family balance. This finding also gives construct validity to the latent profiles produced in the analysis. Second, we also did not find a profile with opposite levels of support (i.e., high employer support but low spousal support). One possible explanation is that family members and employers are more responsive and ready to provide support given the challenges during this pandemic. Taken together, the three latent profiles suggest wide variations, with most parents in profiles characterized by moderate to strong levels of work-family balance and support from spouses and employers. Working parents in the Strong Support and Balance profile were more likely to be fathers than mothers compared with the other profiles, indicating that gender differences in work-family balance, spousal support, and employer support persisted during the Circuit-breaker. A recent study with families in the United Kingdom found that working mothers spent more time on childrearing and home-schooling than working fathers during the pandemic (Ferguson, 2020). Existing studies show that this could be due to differences in employers' expectations of male and female employees when given flexible working arrangements. Men are expected to use flexible working to improve work performance (e.g., increase working hours) while women are expected to increase their familial responsibilities when working flexibly, which potentially reduces their work-family balance (Chung & van der Lippe, 2018). Parents in the Strong Support and Balance profile are also more likely to have older children than parents in the other profiles.
Parenting younger children is more challenging because of the stressors and emotional demands related to nurturing and guiding a child at this developmental stage. Similarly, parents with fewer children are more likely to be in the Strong Support and Balance profile, but only when compared to the Moderate Support and Balance profile. Thus, for parents with younger children and more children in the household, the needs for support are higher and, consequently, it is more difficult to balance work with parenting during Circuit-breaker. Parents who, because of COVID-19, experienced reduced finances, increased difficulties in accessing resources, or poorer psychological health were less likely to be classified in the Strong Support and Balance profile. Indeed, because of the impact of the pandemic and Circuit-breaker measures on Singapore's economy, many families are experiencing financial difficulties as a result of job losses and reduced wages (Tang, 2020). Parents' ability to access support from their support networks (e.g., neighbors, religious communities, relatives) was also disrupted. Social isolation as a result of Circuit-breaker can also be detrimental to mental health, which in turn affects parents' ability to manage work and parenting (Usher et al., 2020). The key finding of the present study is that poorer home outcomes were associated with membership in profiles characterized by poorer work-family balance and support from spouses and employers. Unlike previous studies that examined the main effect of one predictor while controlling for other predictors, the present study looks at the combined effects of work-family balance and spousal and employer support on home outcomes. Specifically, we find that parenting stress and marital conflict were higher during the period of Circuit-breaker when the levels of work-family balance and support were lower across all indicators combined.
Prolonged marital conflict may lead to intimate partner violence, while parenting stress is a determinant of harsh parenting behaviors; both are risk factors for subsequent child maltreatment (Chung et al., 2022; Lee et al., 2014). These findings concur with the work-home resources model, where the interplay between contextual resources and personal resources determines either work-home interference or work-home enrichment, whose effects spill over into working individuals' functioning in the home domain.

Limitations

This study has some limitations that should be considered in the interpretation of the findings. First, the married and working parents in this sample were mostly Chinese, in their 20s to 30s, financially well-to-do, and highly educated. Thus, the findings in this study may not be valid for all families in Singapore. Our findings may also not apply to families of children with special needs and disabilities, who may have unique needs and circumstances. Second, the LPA was conducted using cross-sectional data. Thus, the patterns identified may only represent parents' momentary views of work and family life in the past weeks. We also could not determine respondents' functioning in the pre-Circuit-breaker period, as our data collection depended on parents' retrospective assessment of change in outcomes. Third, we certainly cannot explain the full matrix of associations between work and home outcomes. Other possible predictors include children's behaviors and respondents' intra-individual characteristics, including parenting self-efficacy and marital satisfaction. Fourth, we used single items to measure the key constructs, and future studies should use standardized instruments. In particular, the binary item used to measure marital conflict may not capture different aspects of marital conflict (e.g., disagreements over partners' use of leisure time or money management; see Shrout et al., 2019) and does not allow for a range of responses.
Our research findings for marital conflict should be interpreted in light of this limitation. Fifth, the use of person-oriented methods, including LPA, has been criticized for its "uncertainty about the ontological nature of emergent latent classes" (Jensen, 2019, p. 399). Hence, the latent profiles identified in this study should be considered as possible variations in the larger population and not necessarily as true subpopulations.

Practice Implications

Despite the limitations, our study provides timely insights into the interplay between the domains of work and family during a prolonged period when families were restricted to their homes because of the COVID-19 pandemic. First, policymakers and practitioners should be mindful of the heterogeneity in levels of work-family balance and support received by working parents during the Circuit-breaker. In fact, we found in this study that most working parents were receiving good support from employers and spouses and balancing their work with family roles well. These patterns are contrary to what has been portrayed in the media, namely that work-family balance has diminished significantly among working parents. With the transmission of COVID-19 still high in the community, telecommuting will be a way of life for many parents in Singapore for an extended period. Help for families would then need to be tailored to their different needs. Policymakers in Singapore have recently suggested introducing flexi-hours work models and giving government-paid childcare leave to parents who have used up their annual allotment of leave (The Straits Times, 2020b). However, specific consideration needs to be given to the needs of parents in the Poor Support and Balance profile. Parents in this profile are more likely to be mothers than fathers and to have younger children compared with parents in the other profiles with better work-family balance and support.
Gender differences in support given by employers and spouses are, according to the work-home resources model, a function of macro resources embedded in the characteristics of the larger economic, social, and cultural system. Addressing the issue of gender differences is beyond individuals' locus of control and would require policy and organizational action. The impact of COVID-19 on parents' finances and psychological health was also associated with how well parents could balance their work and family. These are important areas in which assistance can be given to support working parents. Any attention given to supporting working parents is vital and urgent because the work-family interface during the Circuit-breaker has been shown in this study to have a substantial impact on parental functioning and marital harmony. Specifically, increased parenting stress and marital conflict were found to be more likely among parents struggling with work and family and receiving lower levels of support. Since working parents were unable to leave their homes during the Circuit-breaker, measures to increase the accessibility of online marital counseling and self-directed parenting interventions can help reduce marital conflict and parenting stress (see Chung et al., 2022). Finally, policymakers, community organizers, and practitioners need to be aware that while public health safety measures like the Circuit-breaker can be effective in reducing the transmission of viruses, they can be detrimental to family life.
Reanalyzing variable directionality of gene expression in transgenerational epigenetic inheritance

A previous report claimed no evidence of transgenerational epigenetic inheritance in a mouse model of in utero environmental exposure, based on the observation that gene expression changes observed in the germ cells of G1 and G2 male fetuses were not in the same direction. A subsequent data reanalysis however showed a statistically significant overlap between G1 and G2 genes irrespective of direction, leading to the suggestion that, as phenotypic variability in epigenetic transmission has been observed in several other examples also, the above report provided evidence in favor of, not against, transgenerational inheritance. This criticism has recently been questioned. Here, it is shown that the questions raised are based not only on incorrect statistical calculations but also on the wrong premise that gene expression changes do not constitute a phenotype.

Introduction

Iqbal et al. [1] previously claimed, based mainly on gene expression data, that endocrine disrupting chemicals (EDCs) do not cause transgenerational effects in mammals. In brief, the authors treated G0 female mice with the EDCs vinclozolin (VZ) and di-(2-ethylhexyl)phthalate (DEHP), performed transcriptomic analysis of purified G1 and G2 prospermatogonia, found statistically significant overlap neither between upregulated genes in G1 and G2 nor between downregulated genes in the two prospermatogonia samples, and concluded that the EDCs do not cause TEI. My reanalysis of their data [2] using hypergeometric distribution probability showed a statistically significant overlap between the G1 and G2 gene sets irrespective of the direction of change.

Data reanalysis

Szabó [3] first objects that I selectively focused on genes identified at reduced statistical stringency by Iqbal et al. [1]. It is however notable that Iqbal et al. themselves used these genes in Fisher's exact test to arrive at the conclusion that a significantly higher number of the common changes between generations occurred in the opposite direction.
This objection is therefore not relevant. On the contrary, Iqbal et al.'s above conclusion reiterates the core concern: why should gene changes in the opposite direction be taken as negative evidence of transgenerational inheritance? Next, Szabó asserts that Iqbal et al.'s conclusion, that they did not find any evidence of TEI, was not erroneous, as claimed by me, mainly because she finds my overlap analysis incorrect. In support, Szabó produces two data sets, Tables 1 and 2 [3], countering Figures 1 and 2 [2] of my analysis, respectively. Table 1 [3] shows the hypergeometric p values for significance of overlap between the combined set of up- and down-regulated genes in the first and the second generation, as does my Figure 1 [2]. A comparison of the data published in my Figure 1 [2] and that in Szabó's Table 1 [3] clearly shows (Figure 1) that Szabó's result does not dispute my finding that differentially expressed genes between generations overlap significantly. The rest of the data shown in Table 1 [3] relate to comparisons between genes that changed in expression either in the same or in the opposite direction across generations, results that are not directly related to my analysis. Nevertheless, these comparisons further support TEI, because a highly significant (p = 8.94E-04) overlap is reported for genes that changed in the opposite direction between generations in the VZ group. Had there been no transgenerational effects, a significant overlap would not have been observed. Szabó's explanation that genes changing in the opposite direction may indicate a slight overcompensation in the erasure process is not supported by Iqbal et al.'s data. Though Iqbal et al. performed extensive DNA methylation analysis, they found no evidence of methylation changes across generations. Moreover, should Szabó's speculation stand valid, would it not beg the question as to why overcompensation would be observed if there is no transgenerational effect?
Regarding Szabó's objection [3] that adjustment for multiple hypothesis testing was not performed by me [2], it is noted here that the significant p values shown in her Table 1 [3] as well as in my Figure 1 [2] would all remain highly significant even after Bonferroni correction for the six hypotheses tested in Figure 1 that provided evidence in favor of TEI. As regards Szabó's Table 2 [3], it is found invalid, with my Figure 2 data supporting TEI remaining justified.

Prior evidence

Szabó argues that the evidence that I cited in support of the possibility that gene expression changes may show directional variability across generations in transgenerational epigenetic inheritance is inappropriate. Primarily, her concern is that a paper that I referred to showed directional change in gene expression between F1/F2 and F3, not between F1 and F2. She asserts that a lack of the phenotype under investigation, primordial germ cell defects, in F3 renders this example unacceptable. Is not altered gene expression a phenotype in itself? Then, is not observing this molecular phenotype in F3 evidence of TEI? Obviously, the answers to these questions are in the affirmative. Szabó's objection is hence not supported. Her next objection is that another paper that I cited relates to expression of miRNA, not mRNA, and to the worm C. elegans, not mammals. How is it that miRNA expression change across generations is acceptable as evidence for TEI, whereas that of mRNA is not? Also, is not C. elegans an established model of TEI?

Figure 1: Comparison of G1R-G2R gene overlap data. Whereas my analysis [2] presented the probability of drawing the given number of successes exactly, Szabó [3] calculated the probability of drawing the given number of successes at least. * indicates a note in the legend of my Figure 1 [2] clearly stating that the p value remains highly significant even when the probability is calculated for the given number of successes at least.
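The distinction drawn in the figure caption, between the probability of drawing a given overlap exactly and the probability of drawing at least that overlap, can be made concrete with a minimal sketch of the hypergeometric overlap test. The gene counts below are illustrative placeholders, not the actual numbers from either analysis:

```python
from math import comb

def overlap_pmf(N, n1, n2, k):
    """P(overlap == k): probability that two gene sets of sizes n1 and n2,
    drawn independently from a universe of N genes, share exactly k genes."""
    return comb(n1, k) * comb(N - n1, n2 - k) / comb(N, n2)

def overlap_sf(N, n1, n2, k):
    """P(overlap >= k): the 'at least' (one-sided tail) probability."""
    return sum(overlap_pmf(N, n1, n2, i) for i in range(k, min(n1, n2) + 1))

# Illustrative numbers: a 20,000-gene universe, gene sets of 300 and 250,
# an observed overlap of 15 (null expectation is 300*250/20000 = 3.75).
p_exact = overlap_pmf(20000, 300, 250, 15)
p_at_least = overlap_sf(20000, 300, 250, 15)

# Bonferroni correction for multiple testing, e.g. six hypotheses:
p_corrected = min(1.0, p_at_least * 6)
```

Both quantities are computed from the same null distribution; the "at least" (tail) form is the conventional one-sided p value, and a Bonferroni adjustment simply multiplies it by the number of hypotheses tested.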
An Updated Review on Phyto-Pharmacological and Pharmacognostical Profile of Buchanania lanzan: A Pharmacognostic Miracle Herb

Buchanania lanzan (Anacardiaceae) is a miracle herb widely used by Indian tribes for treating various diseases. The aim of the current review is to search the literature for the pharmacological properties, pharmacognostic studies, and phytochemical investigations of Buchanania lanzan. The compiled data may be helpful for researchers to focus on priority areas of research yet to be explored. Complete information about the plant has been collected from various books and journals. Particulars of pharmacological activities, phytochemical isolation, toxicity studies, and ongoing and emerging areas of research on this plant, especially in the field of phytomedicines and pharmaceuticals, are summarized in this review.

As per available data, over three-quarters of the world population relies mainly on plants and plant extracts for their health care needs. More than 30% of all plant species, at one time or other, were used for medicinal purposes. It has been estimated that in developed countries such as the United States, plant drugs constitute as much as 25% of the total drugs, while in fast-developing countries such as India and China, the contribution is as much as 80%. Thus, the economic importance of medicinal plants is much greater to countries such as India than to the rest of the world. These countries provide two-thirds of the plants used.

The roots are acrid, astringent, cooling, depurative and constipating.
They are useful in the treatment of diarrhoea. 2 The fruits are used in treating coughs and asthma. The seeds are used as an expectorant and tonic. The oil extracted from the kernels is applied in skin diseases 3 and is also used to remove spots and blemishes from the face. The juice of the leaves is digestive, expectorant, aphrodisiac, and purgative. 4 The gum, after mixing with goat milk, is used as an analgesic.

Growing Season and Type

The tree is leafless, or nearly so, for a very short time during the summer season. Flowers appear from January to March and their colour is greenish-white. Fruits ripen in the months of May-June. 6 The fruits become red after ripening. Fruit collection starts from mid-April and ends by mid-June, but harvesting is generally finished in 15-20 days only. The harvesting period may vary with the purpose of fruit collection in different agro-climatic zones. Early harvesting results in low fruit/seed setting and poor germination potential. In most parts of Madhya Pradesh, fruits of Buchanania lanzan are harvested before ripening. As a result, they fetch a much lower price in the market because of small seed size and low seed quality. This tree is lopped frequently for the purpose of rapid, large-scale collection. In forests, its natural regeneration is very scanty due to unscientific and premature harvesting of its seeds and site degradation on account of growing biotic pressure. 7 The seeds are the major source of regeneration of B. lanzan in India. The major problem in the reforestation of B. lanzan is the low germination frequency of seeds due to seed-borne fungal contamination (endogenous) during storage of seeds. Moreover, fungal attack by Fusarium sp. (wilting disease) is common after sowing the seeds in soil. 8 The seeds exposed to sunlight fail to germinate and soon lose their viability. Another hindrance is the presence of a hard seed coat, which leads to low germinating capability.
Therefore, in order to ensure further supply of this commercially useful tree species, other propagation methods are required. Plant tissue culture is one of the most effective techniques to micropropagate a plant of interest. 9

Distribution

Seven species of Buchanania have been reported in India, of which two, B. lanzan (Syn. B. latifolia) and B. axillaris (Syn. B. angustifolia), produce edible fruits. B. lanceolata is an endangered species. It is found in the evergreen forests of Kerala. B. platyneura is found in the Andamans only. Other species of the genus are B. lucida, B. glabra, and B. accuminata. 10 Among these species, Buchanania lanzan Spreng is the most important and widely distributed species in India. This species was first described by Mr. Hamilton, a forester, in 1798 in Burma, and the genus Buchanania was named after him. It originated in the Indian sub-continent and is found in India, Burma, Nepal and a few other countries. 11 It is a valuable tree species found in mixed dry deciduous forest throughout the greater part of India.

The seeds are brownish in colour, with a pleasant odour and a sweetish, oily taste. The seed is oblong to rectangular, 4 to 9 mm in length, 5 to 7 mm in width, and dorsiventrally convex, with an oval to circular fan-shaped brown patch with white streaks. A ridge runs along the edge. 18

Microscopical features 18,19

The anatomy of the bark and seed was studied by taking transverse sections followed by staining. The transverse section of the bark shows that the outermost layer is cork. The cork is 5-9 layers of thin-walled rectangular cells, some with yellowish matter.

Memory booster

Alzheimer's disease is a progressive neurodegenerative brain disorder that occurs gradually and results in memory loss, unusual behavior, personality changes, and ultimately death. 35 Biochemical abnormalities, such as reduction of acetyltransferase and acetylcholine biosynthesis and an increase in acetylcholinesterase (AChE) activity and metabolism, are strongly associated with the degree of cognitive impairment.
36 Petroleum ether extract of the seeds of B. lanzan (PEB) (500 mg/kg, oral) was studied for its neuro-psychopharmacological effect in experimental rats. The activity of the seed extract on memory acquisition and retention was studied using elevated plus maze and step-down apparatus models, and the AChE enzyme level in discrete parts of the brain was also estimated. Administration of PEB (500 mg/kg) to positive control and treated groups showed significant reduction in transfer latency in the elevated plus maze, increase in step-down latency in the step-down apparatus model, and reduction of acetylcholinesterase enzyme activity in different regions of the brain as compared with the other groups. 37

Applications in Novel Drug Delivery Systems

Zidovudine Nanosuspensions Using a Novel Bio Polymer

A novel biomaterial was isolated from the seeds of Buchanania lanzan by a simplified, economical method, and its potency for sustained drug delivery was evaluated by formulating nanosuspensions using methylene chloride as the organic solvent and the biomaterial. Five different formulations were prepared using different ratios of biomaterial by solvent evaporation, according to OECD guidelines. The nanosuspension formulations were subjected to various evaluation parameters like particle size and shape, drug content, entrapment efficiency, % transmittance, and in vitro drug release studies. On the basis of the in vitro release studies, the formulation with an increased amount of biopolymer (FNS4) was found to be better than the other formulations, and it was selected as the optimized formulation. In vitro studies revealed that FNS4 followed zero-order release kinetics. It was observed that increasing the proportion of biopolymer increases the rate of release of zidovudine. 38

Oral Mucoadhesive Tablets

An oral mucoadhesive drug delivery system was formulated using Buchanania lanzan Spreng seed mucilage and was evaluated for its mucoadhesive properties in a compressed tablet containing losartan potassium.
Different concentrations of mucilage were used, i.e., 21, 42 and 55% w/w, and granules were prepared using polyvinylpyrrolidone as the binding agent. These tablets were subjected to evaluation of their physical properties, followed by in vitro dissolution testing and determination of the swelling index. The bioadhesive strength of the isolated mucilage, measured on a modified physical balance, was compared with guar gum and HPMC E5LV, which were used as standard mucoadhesive agents. Results revealed that the tablets had good physicochemical properties and that drug release was retarded as the concentration of mucilage was increased, showing a relative effect of mucilage concentration on the release of drug from the formulation. All the formulations were subjected to stability studies for three months, and all showed stability with respect to release pattern. From these results, it is concluded that the seed mucilage of B. lanzan can be a suitable excipient for oral mucoadhesive drug delivery systems. 39

Biostabilizer in Selegiline Bionanosuspensions

Selegiline-loaded bio-nanosuspensions were formulated using the biopolymer isolated from the seeds of Buchanania lanzan as a biostabilizer and a standard stabilizer (hydroxypropyl methylcellulose), by the sonication solvent evaporation method at different ratios (1%, 2%, 3%, 4%, and 5%), and were evaluated for particle size, polydispersity index, zeta potential, pH stability, percentage entrapment efficiency, in vitro drug release, and stability. From this study it was found that the biopolymer isolated from the seeds provided excellent stability and particle size for the best formulation, which had a polydispersity index of 0.43 and a zeta potential of −5.12 mV. 40

Ophthalmic biofilm from seed

A novel biomaterial from the seeds of B. lanzan was isolated, and its biofilm-forming ability was evaluated by formulating various ophthalmic films using polyethylene glycol 400 as plasticizer and the biomaterial as the biofilm former.
Four formulations were prepared using the biofilm former in different ratios by the film casting technique. The formulated ophthalmic films were subjected to various evaluation parameters such as weight variation, uniformity of thickness, folding endurance, hardness, surface pH, swelling index, and in vitro release studies. The drug release studies of the formulated ophthalmic films exhibited promising stability, swelling index, folding endurance, and sustained release for a period of 8 hrs. The isolated biofilm former acts as a novel film former for formulating various ophthalmic films. 41

Transdermal Patches Using B. lanzan (Spreng) Seed Oil as Penetration Enhancer

The permeation enhancement properties of Buchanania lanzan Spreng seed oil were evaluated using ethyl cellulose transdermal patches of glipizide with several essential oils as penetration enhancers. The effect of drug loading and penetration enhancers on the in vitro permeation of the drug through rat skin was investigated. Incorporation of essential oils increased the moisture content, moisture uptake ability, and permeation of glipizide across skin barriers. Buchanania lanzan Spreng seed oil was found to be the most effective compared with the others. It was also concluded that the seed oil can be used for permeation enhancement in various types of topical preparations. 42

III. Conclusion

The above review reveals that the plant is traditionally used for various therapeutic purposes. The plant was found to be a potent analgesic, anti-inflammatory, cardioprotective, anthelmintic, antibacterial, antifungal, and cytotoxic agent. The phytoconstituents present in the plant are mainly phenols, which are responsible for these actions. More research is needed to isolate the constituents responsible for the biological actions. It was also observed that no clinical trials have been done so far. From the current review of the literature and ayurvedic texts, it is concluded that the plant has high medicinal value.
The traditional and ethnomedicinal literature shows that the plant is very effective and safe for medicinal uses. By using reverse pharmacological approaches in natural drug discovery, a potent and safe drug may be developed from the plant for various chronic diseases like liver diseases, cancer, arthritis, and other inflammatory diseases.
Reactive Oxygen Species and Oxidative Stress in the Pathogenesis and Progression of Genetic Diseases of the Connective Tissue

Connective tissue is known to provide structural and functional "glue" properties to other tissues. It contains cellular and molecular components that are arranged in several dynamic organizations. Connective tissue is the focus of numerous genetic and nongenetic diseases. Genetic diseases of the connective tissue are rare, but no less important than the nongenetic diseases. Here we review the impact of reactive oxygen species (ROS) and oxidative stress on the onset and/or progression of diseases that directly affect connective tissue and have a genetic origin. It is important to consider that ROS and oxidative stress are not synonymous, although they are often closely linked. In a normal range, ROS have a relevant physiological role, and their levels result from a fine balance between ROS producers and ROS-scavenging enzymatic systems. However, pathology arises or worsens when such balance is lost, as when ROS production is abnormally and constantly high and/or when ROS-scavenging (enzymatic) systems are impaired. These concepts apply to numerous diseases, and connective tissue is no exception. We have organized this review around the two basic structural molecular components of connective tissue: the ground substance and fibers (collagen and elastic fibers).

Introduction

Connective tissue (CT) is the body's structural support and a dynamic site for other important functions. For example, it is a medium for the exchange of metabolites; the defense, protection, and repair of the body; the storage and mobilization of energy (fat); the regulation and integration of mechanical and cell-signaling responses; the storage and mobilization of growth and differentiation factors; and a guide and barrier for cell locomotion and migration [1]. CT tightly interacts with other tissues to maintain functional organs.
Most CTs originate from the mesoderm. From this embryonic layer, pluripotent mesenchymal cells are formed that migrate throughout the embryo, giving rise to adult CT cells, such as those of cartilage, bone, tendons, blood, and hematopoietic and lymphoid tissues. CT is a major meeting point of metabolic and catabolic reactions of tissues and organs and a large signaling platform that regulates them. One of the most general and significant processes is redox stress, which involves free radicals. Free radicals are by-products of a wide variety of physiological reactions.

Cellular Components

The CT is composed of resident and transient cellular components [2]. The most representative of the former group is the fibroblast [3]. Transient cells are those that (relatively) freely wander and move in and out of the tissue. Transient cells are almost exclusively represented by leukocytes and macrophages. Fibroblasts are the most abundant resident cell type of CT proper and are responsible for synthesizing almost all ECM components. Fibroblasts undergo different states of activity. Those that are highly active have an elongated morphology, with high transcriptomic activity. In contrast, when cells are scarcely active (called fibrocytes) they become smaller and have low transcriptomic activity. In both physiological states, cells are tightly associated with ground substance components and with collagen and elastic fibers (see below). Fibroblasts undergo cell division and restricted movement and can differentiate to other cell types such as adipocytes, osteoblasts, and myofibroblasts. In pathological circumstances, they can also be converted into epithelioid cells through the mesenchymal-epithelial transition (MET) mechanism. The reverse process, called epithelial-mesenchymal transition (EMT), also occurs and is relevant in cancer [4,5].
Myofibroblasts are modified fibroblasts that express some characteristic proteins of smooth muscle cells (SMCs) (some actin-based cytoskeleton proteins). Myofibroblasts acquire special relevance in wound healing and fibrotic processes [6].

ECM Components

ECM is composed of a large variety of complex macromolecules localized in the extracellular space of the cells [7]. The extent of ECM varies with the tissue type. Cells maintain their associations with the ECM by forming specialized junctions that hold them to the surrounding macromolecules. ECM is not only the skeleton of tissues but also (1) modulates and determines the morphology and function of fixed and resident cells (see above), (2) influences their development and differentiation state, (3) regulates their migration and mitotic activity, (4) senses and transduces mechanical forces (compression and tensile) to cells, (5) facilitates junctional associations among cells, and (6) provides a biological field for immune defense. As indicated above, ECM is composed of a hydrated gel-like ground substance embedded with fibers. Ground substance resists compression forces and facilitates a quick exchange of metabolites and catabolites, whereas fibers support tensile forces.

Other ECM Components

Finally, other parts of ECM are (1) the basement membrane, which forms the interface between epithelium and CT, and (2) integrins and dystroglycans, transmembrane glycoproteins that act as nonsignaling receptors of nonfibrillar/cell-adhesive glycoproteins of the ECM and assist in the structure of the basement membrane and CTs. Integrins function in adhesion and signal transduction from the extracellular to the intracellular medium, activating second messengers at the focal adhesions.

Maintenance and Turnover of ECM

ECM is slowly but continuously (re)modelled for maintenance and adaptation to local homeostasis and pathological environments.
Major components that are responsible for maintenance and turnover of ECM are a large family of proteases and their inhibitors, which are secreted by fibroblasts, local and transient macrophages, some translocated leukocytes, and metastatic cells. Metalloproteases (MMPs), transmembrane inhibitors of proteases (TIMPs), soluble cathepsins, and other types of proteases belong to this group of ECM components [32-35].

Pathologies Associated with the Connective Tissue

As in any other tissue and organ, connective tissue is susceptible to damage, which is primary if it originates in some of the cell components or in any of the numerous ECM components, and secondary if it results from alterations in any of the associated functions, such as the metastatic process, immune overreactions, etc. In this section, we only review pathologies in which redox stress contributes to their evolution and that arise from damage in genes coding for ECM components.

Biochemistry of Free Radicals

Free radicals are atoms or molecules containing one or more unpaired electron(s) in the outer shell (valence shell). The unpaired electron(s) confer specific chemical properties on these molecules, such as the capacity to subtract electrons from other compounds to obtain stability [36]. However, this process transforms the molecule that loses its electron(s), so that it becomes a free radical itself, which may lead to modification of its own function and the function of other molecules. Additionally, molecules with unpaired electron(s) are short-lived and highly reactive because they are energetically unstable. Free radicals include molecules that are either positively or negatively charged or electrically neutral and may be organic or inorganic. Redox reactions involve oxidations and reductions. Oxidation means the gain of oxygen (O2) by a substance or the loss of an electron, while reduction means the loss of O2, the gain of an electron, or the gain of hydrogen [37,38].
Reactive oxygen species (ROS) are small molecules formed by partial reduction of molecular O2, which participates in crucial biological processes such as cellular respiration and aerobic metabolism. However, O2 is a Janus-faced molecule, since reactive O2 intermediates are easily converted into toxic compounds that can cause cell damage through oxidation of proteins, lipids, carbohydrates, and nucleic acids. For example, the superoxide anion (O2•−) reacts with nitric oxide to produce the highly reactive and harmful peroxynitrite (ONOO•−), which is either a nitrogen- or oxygen-centered radical species [40]. H2O2 is not a free radical. It is more stable than O2•− and it can cross membranes through aquaporins [41]. H2O2 is produced constitutively in the mitochondria [42], in the membrane of the ER [43], and by the NADPH oxidase NOX4 [44]. Another important source of H2O2 is the dismutation of O2•−, which occurs spontaneously or enzymatically via superoxide dismutase (SOD) [45].

Main Enzymatic Sources of Free Radicals

Endogenous free radicals are produced in environments of high O2 consumption, which mainly include intracellular organelles such as mitochondria, endoplasmic reticulum (ER), and peroxisomes. They are also produced in locations like the plasma membrane. The main endogenous enzymatic sources of ROS in mammals include (1) the mitochondrial respiratory chain, (2) cytochrome P450, (3) the flavoenzyme Ero1, (4) NADPH oxidases, (5) xanthine oxidase (XO), (6) lipoxygenases, (7) nitric oxide synthases (NOS), and (8) cyclooxygenases (COXes) [50], as well as monoamine oxidases [51]. The ER also produces ROS because of protein-folding processes, NADPH oxidase enzymes, and flavoenzyme Ero1 activity. The latter is an ER-resident oxidase responsible for disulfide bond formation to achieve oxidative folding of proteins [52]. In this process, O2 is consumed and H2O2 is produced as a by-product. When oxidative stress rises, it can increase Ca2+ leak from the ER lumen, which in turn stimulates mitochondrial ROS production [53].
The NOX family of catalytic subunits of NADPH oxidase are transmembrane-bound redox enzymes that represent the main source of ROS in vascular tissue, though they are also present in nonvascular tissues. The catalytic function of NOX isoforms is the reduction of O2 in the presence of NADPH to generate O2•− and other ROS. In most mammals, the NOX family involves seven isoforms: NOX1-5, DUOX1, and DUOX2. All of them act as transmembrane catalytic subunits, but have different levels of action [54]. NOX1-3 are activated by effector proteins (i.e., the GTPase Rac, NOXO1, NOXA1, p22phox, p40phox, p47phox and p67phox) to assemble large functionally active complexes, while NOX4 is constitutively active and is regulated mainly at its level of expression by Poldip2 [55]. NOX5, DUOX1, and DUOX2 are Ca2+-activated isoforms [56]. Unlike NOX3, the rest of the NOXes are all expressed in the cardiovascular system [57]. In addition, NOX5 is only expressed in human cells [58]. NOX isoforms localize to the plasma membrane, caveolae, endosomes, focal adhesions, ER, nucleus, and mitochondria [59]. Regarding the specific ROS produced by these enzymes, NOXes 1-3 and NOX5 generate O2•−, while NOX4, DUOX1, and DUOX2 produce H2O2 [55]. On the other hand, XO is a soluble, membrane-bound O2•−- and H2O2-generating enzyme that plays a crucial role in the catabolism of purine nucleotides. It catalyzes the oxidation of hypoxanthine to xanthine and can further catalyze the oxidation of xanthine to uric acid (UA) [60]. Notably, when it accumulates, UA can be a pro-oxidant, but at physiological levels, it is the most potent non-enzymatic antioxidant in human plasma [61]. Other sources of ROS include lipoxygenases, which catalyze the conversion of polyunsaturated fatty acids into leukotrienes and lipoxins, which mediate important cellular signaling pathways [62]. These enzymes generate O2•− in the presence of reducing co-substrates [63].
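The catalytic step shared by the superoxide-producing NOX isoforms, reduction of molecular oxygen at the expense of NADPH, can be summarized by the standard overall reaction (textbook chemistry, not taken from the cited references):

```latex
\mathrm{NADPH} + 2\,\mathrm{O_2} \longrightarrow \mathrm{NADP^{+}} + \mathrm{H^{+}} + 2\,\mathrm{O_2^{\bullet-}}
```

For NOX4, DUOX1, and DUOX2, the superoxide is thought to be rapidly converted, so the species effectively released is H2O2.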
NOS are the most important source of nitric oxide (NO) in biological systems. NO is a free radical and a potent vasodilator with many other relevant physiological functions. Three isoforms of NOS are known [64]: (1) neuronal NOS (nNOS or NOS1), whose expression goes beyond neural tissue, (2) inducible NOS (iNOS or NOS2), whose expression is stimulated by inflammatory stimuli, and (3) endothelial NOS (eNOS or NOS3), predominantly located in endothelial cells and crucial to maintaining vascular homeostasis. These enzymes synthesize NO from L-arginine, using O 2 and NADPH as co-substrates. eNOS also has the potential to generate O 2 •− when some of its cofactors, tetrahydrobiopterin and L-arginine, are below physiological levels. This process is known as eNOS uncoupling [65]. This phenomenon reduces NO synthesis and increases O 2 •− formation, which in turn may scavenge NO to reduce its availability, leading to impaired NO-dependent relaxations and the formation of peroxynitrite (ONOO •− ), an extremely toxic species that further exacerbates vascular injury. Finally, COXes are the enzymes that generate prostanoids after oxidation of arachidonic acid, a polyunsaturated fatty acid present in membrane phospholipids. Two COX isoforms are reported: the constitutive isoform COX-1 and COX-2, which is generally induced by inflammatory stimuli and other mediators such as angiotensin II or endothelin-1 [66][67][68]. COXes generate ROS via oxidation of substances like NADPH [69] or via their products (i.e., prostanoids), which may act as autocrine ROS stimulators [70].

Elimination of Free Radicals

Free radical levels are regulated by endogenous enzymatic and non-enzymatic antioxidant defense systems to prevent their accumulation and maintain cell redox homeostasis. The tripeptide glutathione, vitamins C and E, and UA provide non-enzymatic antioxidant mechanisms.
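The NOS-catalyzed synthesis of NO described above corresponds to the canonical overall stoichiometry (written doubled to avoid fractional coefficients):

```latex
% Overall NOS reaction: L-arginine is oxidized to L-citrulline and NO,
% consuming O2 and NADPH as co-substrates
2\,\text{L-arginine} + 3\,\mathrm{NADPH} + 3\,\mathrm{H^+} + 4\,\mathrm{O_2}
\;\longrightarrow\;
2\,\text{L-citrulline} + 2\,\mathrm{NO^{\bullet}} + 3\,\mathrm{NADP^+} + 4\,\mathrm{H_2O}
```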
However, antioxidant enzymes such as SOD, glutathione peroxidase, glutathione reductase, glutathione S-transferase, catalase (CAT), peroxiredoxins, and thioredoxin reductase are the most representative and provide the most specialized enzymatic antioxidant mechanisms in mammalian tissues. There are three isoforms of SOD (cytosolic Cu/Zn-SOD or SOD1, mitochondrial Mn-SOD or SOD2, and extracellular EC-SOD or SOD3), which catalyze the dismutation of O 2 •− into H 2 O 2 and O 2 [73]. Glutathione peroxidase and glutathione S-transferase reduce hydroperoxides, using glutathione as an electron donor. There are eight glutathione peroxidase isoforms with different tissue distribution [71]. Thioredoxin reductase catalyzes the reduction of thioredoxin using NADPH and participates in the reduction of hydroperoxides and in maintaining proteins in their reduced state [74]. Peroxiredoxins comprise six subfamily members that are ubiquitously expressed. In general, these enzymes have peroxidase activity on peroxide substrates (e.g., H 2 O 2 , alkyl hydroperoxides, ONOO •− ), using NADPH as the source of reducing equivalents and a thioredoxin system, with the exception of peroxiredoxin 6, which uses glutathione as the reductant [71,75]. Antioxidant response elements (AREs) are key components of cellular redox homeostasis and of the reduction of oxidative stress episodes. The activation of these gene expression regulatory elements triggers fundamental antioxidant responses mediated by the expression of detoxification genes [76]. Multiple transcription factors interact with AREs to activate them, including nuclear factor erythroid 2-related factors 1, 2, and 3 (Nrf1, Nrf2, and Nrf3), small musculoaponeurotic fibrosarcoma (sMaf) proteins, broad-complex, Tramtrack, and Bric-a-brac and cap'n'collar homology (Bach) proteins, activating transcription factor 4, JUN proteins, and c-FOS and FRA proteins [76]. One of the most important transcription factors that combat oxidative stress through the activation of AREs is Nrf2 [77,78].
KEAP1 is a repressor of NRF2 under homeostasis but, under stress conditions, NRF2 dissociates from KEAP1 and translocates into the nucleus. This mechanism permits the binding of NRF2 to AREs, which leads to the regulation of gene expression of a wide repertoire of enzymes that metabolize oxidants, including glutathione S-transferase, NAD(P)H dehydrogenase (quinone 1), SODs, peroxiredoxin, catalase, and glutathione peroxidase genes [79][80][81]. Notably, activation of the Nrf2 pathway by exogenous compounds is possible [82] and can be potentially useful in the treatment of cardiovascular diseases [81,83].

Detection of Free Radicals

A wide variety of detection methods are available to measure free radical levels. These methods have advantages and disadvantages that depend on multiple factors and, thus, an exhaustive review is beyond the scope of this text. Here we briefly describe some of the most common techniques used to detect biomarkers of oxidative stress; for a more complete overview, the reader is referred to [84][85][86]. Free radical levels can be measured, among others, by chemiluminescent and fluorescent probes, chromatography methods, electrochemical biosensors, fluorescent proteins, spectrophotometric methods, and electron spin resonance [86]. It is worth noting that results obtained by a single technique should be interpreted with extreme caution; whenever possible, validation by another technique and, where applicable, determination of the expression of the ROS-forming enzymatic source should be pursued. Lipid peroxidation is commonly used as a marker of oxidative stress because this process is involved in a variety of acute and chronic diseases. Malondialdehyde and trans-4-hydroxy-2-nonenal are routinely used as biomarkers of lipid peroxidation [87]. The analysis of F2-isoprostane levels, however, is more robust because these are more stable molecules produced by nonenzymatic free radical-catalyzed peroxidation of arachidonic acid [88].
Tyrosine nitration is defined as the addition of a nitro group to the aromatic ring of tyrosine residues. Analysis of nitrotyrosine levels is often used as a measure of oxidative/nitrative stress, since nitrotyrosine is a relatively stable biomarker that correlates with disease activity and decreases with therapeutic interventions. Commonly used techniques to measure nitrotyrosine levels include liquid chromatography, enzyme-linked immunosorbent assay, Western blot, and immunofluorescence [85]. Another form of stable oxidative modification of proteins is the formation of protein carbonyls, which are usually detected spectrophotometrically, or by enzyme-linked immunosorbent assay, Western blot, immunohistochemistry, or high-performance liquid chromatography [85]. Dihydroethidium is a widely used fluorogenic probe to evaluate oxidative stress production "in situ". Dihydroethidium is oxidized by numerous oxidants (e.g., O 2 •− , H 2 O 2 , ONOO − , OH • ) to yield ethidium and 2-hydroxyethidium, which accumulate in cells and emit red (610 nm) fluorescence when interacting with DNA [89]. Interestingly, 2-hydroxyethidium constitutes a specific measure of O 2 •− -induced oxidation of dihydroethidium that can be quantified by high-performance liquid chromatography [90].

Redox and Oxidative Stress in Genetic Diseases of Connective Tissue

We next discuss only the genetic diseases in which redox imbalance and oxidative stress have been reported so far. Table 1 summarizes each disease, its OMIM and Orphanet numbers, the causative gene, and the radical species involved.

Genetic Diseases Affecting Collagen Fibers and Associated Components

Collagens are associated with a wide variety of diseases for which treatments are needed. Here, we provide a brief overview of recent progress in mechanisms of disease related to oxidative stress caused by mutations in collagens and in the development of therapeutic strategies.
Collagen IV-Associated Pathologies: Alport Syndrome

Alport syndrome (AS) is an inherited chronic kidney disease, characterized by nephritic symptoms that appear during early life and progressive impairment of renal function, leading to end-stage renal disease. Three distinct genetic forms of the disorder exist: (1) X-linked Alport syndrome, linked to mutations in the COL4A5 gene, (2) autosomal recessive Alport syndrome, with mutations in both alleles of the COL4A3 or COL4A4 genes, and (3) autosomal dominant Alport syndrome, associated with heterozygous mutations in the COL4A3 or COL4A4 genes. Because COL4A5 is located on the X chromosome, X-linked AS occurs more commonly in males, and the condition usually progresses to end-stage renal disease by the age of 40 years [91]. However, the detailed mechanism of progression to end-stage renal disease has not been elucidated. Therefore, children and adults with AS have no specific treatment, and current therapy relies on the normalization of blood pressure and the reduction of urine protein excretion to slow the rate of progression toward end-stage renal disease [92]. A few AS mouse models mimicking human clinical features have been developed [93][94][95]. Differences in genetic background (e.g., C57BL/6J or 129/Sv) are associated with different patterns of disease progression, which suggests that animal models are useful to elucidate the underlying mechanisms involved in the development and progression of the disease [96,97]. Furthermore, pharmacological therapy such as that with angiotensin-converting enzyme (ACE) inhibitors was shown to delay disease onset [98,99]. Importantly, various studies that take the mutant Col4a3 −/− mouse as a model of AS have shown the implication of oxidative stress in this pathology. Evidence of oxidative stress is provided by a significant rise in urinary heme oxygenase-1 (HO-1) and in the H 2 O 2 excretion rate in the urine of Col4a3 −/− mice compared with age-matched wild-type controls [98,100].
Using dihydroethidium (DHE) staining as a marker of tissue ROS generation, Gomez et al. demonstrated that kidneys from Col4a3 −/− mice produce higher levels of mitochondrial ROS, together with a high concentration of H 2 O 2 in the urine [100]. In Col4a3 −/− hearts, oxidative stress was markedly elevated, including a 50% reduction in the GSH:GSSG ratio, reductions in the protein levels of mitochondrial electron transport chain complexes I, II, and IV, and a 35% increase in malondialdehyde [101]. Results of RNA expression analyses comparing the global transcriptome of whole kidneys and hearts from Col4a3 −/− mice with littermate controls suggest that metabolic and mitochondrial dysfunction are major problems in AS mice. Prominent among the downregulated genes in the kidney were peroxisomal and mitochondrial fatty acid metabolism genes, such as Acox2, mitochondrial genes, such as Pgc1 and the Cyp450 gene family, and the antioxidant Mpv17l [100]. In hearts, the expression of the Hbb-b1, Alas2, Cnn1, Aqp7, and Ogdhl genes was significantly reduced in Col4a3 −/− mice [101]. In addition, defective mitochondrial respiration has been observed in primary tubular cells and in cardiomyocytes isolated from Col4a3 −/− mice, as measured by oxygen flux analysis [102]. Electron microscopy images revealed stressed mitochondrial morphology in Alport tubular renal cells and hearts [101,102]. In this context, two therapeutic approaches were taken, with similar results. Treatment of Col4a3 −/− mice with an anti-miR-21 that directly targets Mpv17l in the kidney increases lifespan and protects Col4a3 −/− mice from kidney disease progression by preventing miR-21-mediated suppression of the PPARα fatty acid metabolism and mitochondrial biogenesis pathways and by inhibiting mitochondrial ROS generation in the kidney [100].
Similarly, osteopontin deficiency improves renal function and mitochondrial respiration in the renal tubules by reducing dynamin-3 expression [102], and improves the cardiac phenotype and myocardial mitochondrial respiration by rescuing 2-oxoglutarate dehydrogenase-like protein (OGDHL) expression [101].

Collagen VI-Associated Myopathies: Bethlem Myopathy, Ullrich Congenital Muscular Dystrophy, and Myosclerosis Myopathy

Deficiency of collagen type VI (Col VI) caused by mutations of the COL6 genes (COL6A1, COL6A2, and COL6A3) gives rise to three main muscle disorders: Bethlem myopathy (BM), Ullrich congenital muscular dystrophy (UCMD), and myosclerosis myopathy. BM has a later onset and displays a relatively mild and slowly progressive phenotype. UCMD is severe and shows diffuse wasting and weakness of skeletal muscles in the first year of life, associated with degeneration and regeneration of muscle fibers, more rapid progression of symptoms, and premature death due to respiratory failure [103,104]. Myosclerosis is a nondystrophic myopathy characterized by early, progressive muscle and joint contractures that result in severe limitation of movement of axial, proximal, and distal joints, walking difficulties in early childhood, and toe walking. Muscle biopsy shows partial collagen VI deficiency at the myofiber basement membrane and absent collagen VI around most endomysial/perimysial capillaries [105]. Col VI myopathies share defective autophagy that impairs clearance of dysfunctional mitochondria [106,107], as well as mitochondrial dysfunction due to deregulation of the permeability transition pore (PTP), an inner-membrane, high-conductance channel formed from dimers of the mitochondrial ATP synthase [108][109][110][111]. The mitochondrial defect has been identified in skeletal fibers and neurons of Col VI-null mice (Col6a1 −/− ) [108,112] and in myoblasts from UCMD and BM patients [113,114].
Mitochondrial monoamine oxidase (MAO), a ROS generator, is increased in Col6a1 knock-out muscle. Not surprisingly, ROS production is higher in Col6a1 knock-out muscle than in controls [115]. Additionally, ROS production was significantly higher in the brain of aged Col6a1 −/− mice than in age-matched wild-type samples, whereas younger mouse brains did not reveal any significant difference between the two genotypes, suggesting a protective role for Col VI against age-induced oxidative damage [112]. Different therapies have been investigated for Col VI myopathies. Inhibition of cyclophilin D, which modulates the opening of the PTP in the mitochondrial inner membrane, reduces myofiber degradation and apoptosis in animal models of Col VI myopathy [110,116,117], in cultured myoblasts [118], and in muscle biopsies from patients with Col VI myopathy treated with cyclosporin A [119]. Myoblasts from patients, upon incubation with H 2 O 2 or tyramine (a MAO substrate), upregulate MAO-B expression and display a significant rise in ROS levels, with concomitant mitochondrial depolarization. MAO inhibition by pargyline significantly reduced both ROS accumulation and mitochondrial dysfunction. However, cyclosporin A could not prevent mitochondrial depolarization induced by tyramine, suggesting that MAO-dependent ROS accumulation is upstream of PTP opening and that oxidative stress makes the latter event insensitive to cyclosporin A [120].

Collagen VIII-Associated Pathologies: Fuchs Syndrome

Fuchs endothelial corneal dystrophy (FECD) is a progressive, bilateral condition characterized by dysfunction of the corneal endothelium, leading to reduced vision. The corneal endothelium is essential for maintaining the transparency of the cornea by regulating corneal hydration.
Ultrastructural features of FECD include the loss of endothelial cells with thickening and excrescences of the underlying basement membrane (i.e., guttae), which are clinical hallmarks of FECD and become more numerous with the progression of the disease [121]. Genetic studies have identified multiple gene mutations and loci associated with FECD. Mutations positioned in the triple-helical domain of the α2 chain of collagen type VIII (COL8A2) alter the structure and composition of Descemet's membrane, leading to the early onset of type I FECD [122][123][124]. Significant insights into Col 8 deposition in FECD and its relationship with the young onset of the disease have been obtained thanks to the development of Col8a2 knock-in [125,126] and knock-out [127] mouse models. Corneal tissues from FECD patients display an overall increase of ROS, and human corneal endothelial cell lines derived from FECD patients are more vulnerable to oxidative insults (measured, among other techniques, by human oxidative stress and antioxidant defense RT-PCR arrays, high-sensitivity ELISA to quantify 8-hydroxy-2′-deoxyguanosine, and immunofluorescence) [128]. The corneal endothelial cells present an inefficient mitochondrial system, including increased mitochondrial DNA damage, decreased mitochondrial membrane potential, and mitochondrial fragmentation [129,130]. Enzymatic antioxidants like SOD in its cytosolic and mitochondrial forms, catalase, glutathione peroxidase, and glutathione reductase are also depleted in FECD [131]. Proteomic analysis of corneal endothelium from FECD patients showed specific downregulation of the peroxiredoxin family of antioxidants (Prdx1 and Prdx6) [132][133][134] and of NRF2 [135]. As a therapeutic approach, researchers are trying to restore ATP production by stabilizing cardiolipin, a phospholipid present in the inner mitochondrial membrane that is vulnerable to oxidative stress.
Elamipretide, a synthetic mitochondria-targeted tetrapeptide that ameliorates mitochondrial dysfunction by preventing peroxidation of cardiolipin [136], is in phase II trials (Stealth Biotherapeutics; ClinicalTrials.gov Identifier: NCT02653391). Interestingly, in addition to the oxidative stress and apoptosis that are indicated as the underlying mechanism for the progressive loss of endothelial cells in FECD [128], corneal samples from FECD patients [137,138] and knock-in mouse models [125] show upregulation of the unfolded protein response (UPR), evidenced by dilated ER and deregulated transcript levels of UPR markers (by PCR array; among the 42 significant markers were GRP78, phospho-eIF2α, CHOP, and EDEM3) [137,138]. A treatment strategy could be a combination therapy. Experiments to determine whether a reduction of ER stress could reduce dystrophic conditions and restore corneal transparency would provide insight into therapeutic strategies. For example, lithium, which can inhibit UPR and oxidative stress, promotes endothelial cell survival in the knock-in mouse model of FECD [139].

Collagen XV-Associated Deficiencies

Genetic analyses have suggested that COL15A1 is associated with atherosclerosis in aged individuals [140]. It can also act as a modifier of the severity of thoracic aortic aneurysm [141]. It has a potential role in primary open-angle glaucoma [142] and has been implicated in cuticular drusen, a subtype of age-related macular degeneration [143]. In the context of atherosclerosis, the expression of Col15a1 in SMCs is interesting because COL15A1 affects both the proliferative and migratory phenotypes of this cell type. Thus, Col15a1 knock-out in SMCs markedly attenuated lesion formation by reducing SMC proliferation and impairing multiple proatherogenic inflammatory processes [144]. Other studies with knock-out mice have shown that collagen XV is important for the structure and function of microvessels in striated muscle, heart, and skin [145,146].
Mice subjected to exercise-induced stress developed capillary ruptures, heart failure, and muscle atrophy [145]. Under physiological conditions, these mice exhibited a reduced cardiac ejection fraction at 1 month of age, which was compensated at 5 months of age by a still unknown mechanism [146]. Additional defects in Col15a1 −/− hearts included tortuous capillaries varying in thickness, frequent ruptures in the capillary walls, poor capillary perfusion, and abnormal extravasated erythrocytes [146]. In the skin of these mice, intravital microscopy revealed microvascular dysfunction, including increased permeability, a decreased capillary perfusion index, reduced blood cell velocity, and a lower microvascular blood flow rate [146]. Drosophila mutants of multiplexin (Mp), the orthologue of vertebrate collagen types XV and XVIII, exhibited morphological changes in cardiomyocytes and progressive dysfunction of the skeletal muscles, reminiscent of the phenotypes observed in Col15a1-null mice [147]. Interestingly, Mp fly mutants showed morphologically altered mitochondria in indirect flight muscles, resulting in severely attenuated ATP production and enhanced ROS production. Mitochondria from Mp mutants showed abnormal cristae, swollen appearances, and diffuse outer membranes, which are signs of enhanced mitochondrial permeability due to mitochondrial PTP opening [108,148]. This suggests a pathomolecular mechanism shared with COL6A1 mutations: mitochondrial PTP opening is enhanced in mutants, Mp collagens are required for mitochondrial homeostasis, and the progressive phenotypes of Mp-related diseases are attributable to mitochondrial dysfunction [147]. Integrin-mediated signaling from Mp to mitochondria would be in accordance with the biochemical evidence that integrins engage in the regulation of mitochondrial ROS production by Rho GTPases and Bcl-2 [149].
Mitochondrial dysfunction resulting from collagen VI or XV/XVIII deficiencies was ameliorated by cyclosporin A, an inhibitor of mitochondrial PTP opening, or by losartan, an angiotensin II type 1 receptor blocker, suggesting a potential convergent mechanism and treatment [119,147].

Elastin

The importance of elastin is highlighted by the variety of diseases caused by genetic alterations in the ELN gene, with clinical consequences ranging from mild to life-threatening. These genetic alterations affect either the quantity or the quality of the deposited elastin and thereby the function of elastic tissues [150]. Most reported mutations within the ELN gene cause supravalvular aortic stenosis and autosomal dominant cutis laxa. More than 100 pathogenic or presumed pathogenic variants have been described in ELN to date in the literature, according to the ClinVar database (https://www.ncbi.nlm.nih.gov/clinvar) and the Human Gene Mutation Database (http://www.hgmd.cf.ac.uk). The most common genetic alterations affecting the elastin gene are large deletions that remove one copy of ELN in addition to the neighboring 25-27 genes, as part of the recurrent microdeletion disorder Williams-Beuren syndrome (WBS) [151].

Supravalvular Aortic Stenosis

Supravalvular aortic stenosis (SVAS) is a heart defect that develops before birth (1:20,000 newborns). The condition is described as supravalvular because the section of the aorta that is narrowed is located just above the aortic valve. This narrowing makes it difficult for blood to leave the heart, which results in heart murmur and ventricular hypertrophy. Some people with SVAS also have defects in other blood vessels, most often stenosis of the pulmonary artery. If SVAS is not treated, the aortic narrowing usually leads to chest pain, shortness of breath, and, finally, heart failure. Most of the ELN gene mutations that cause SVAS result in a decrease in the production of tropoelastin [152].
Due to the shortage of tropoelastin, the elastic fibers of the tunica media of the aorta become thinner. To compensate for this, SMCs concomitantly increase in number (hyperplasia), resulting in a thicker aortic wall that narrows the lumen. A thickened aorta is less flexible and, therefore, less resistant to the stress of blood flow and the pumping of the heart. Over time, there is a tendency to develop high blood pressure. The severity of SVAS, even among members of the same family, is highly variable. Strikingly, some affected individuals die in infancy, while others never experience symptoms of the disorder. Notably, changes in oxidative stress seem to contribute to cardiovascular dysfunction in individuals with elastin haploinsufficiency [153]. After a bioinformatic analysis of quantitative trait locus peaks, Ren1, Ncf1, and Nos1 emerged as significant modifiers predisposing to hypertension and stiffer blood vessels [154]. Elevated renin in Eln-deficient mice has been described [155]. Renin is a major component of the renin-angiotensin pathway, whereas NO is important for influencing vascular tone; both have known effects on blood pressure. Higher oxidative stress in the elastin-insufficient vessels has been correlated with Ncf1 overexpression [154]. Ncf1 (encoding p47 phox ) acts as a regulatory subunit for several NOX family members that are expressed in the vasculature [156,157], which contributes to the production of ROS. Due to the dynamic and developmentally complex assembly of elastic fibers, no therapy has been reported so far that can restore normal elastin in those with elastin insufficiency. Thus, the identification of genes that modify this pathology can serve as a basis for the identification of therapeutic strategies, which are mostly lacking in these types of pathologies.
Williams-Beuren Syndrome

Williams-Beuren syndrome (WBS) is a rare developmental disorder (1:10,000) with multisystemic manifestations caused by segmental aneusomy of 1.55-1.83 Mb at chromosomal band 7q11.23, which includes ELN and 25-27 additional genes (Williams-Beuren syndrome critical region, WBSCR). Besides the characteristic facial features and cognitive profile, the hallmark feature of WBS is a generalized narrowing of large elastic arteries, most notably SVAS, mainly due to ELN deficiency [158]. Histological characterization of arterial vessel walls of WBS patients shows an increased number of disorganized elastic lamellar structures, fragmented elastic fibers, and hypertrophy of SMCs [159]. Arteriopathy is the main cause of morbidity in WBS, including systemic hypertension and other potential complications such as stroke, cardiac ischemia, and sudden death [160,161]. Differences in the WBS deletion that affect the copy number of NCF1 ultimately influence hypertension risk and the severity of vascular stiffness [162,163]. Studies performed in Ncf1 knock-out mice have revealed that p47 phox is one of the major effectors of Ang II [164]; consequently, Ang II-mediated oxidative stress in the vasculature was the proposed mechanism behind the protective effect observed in patients whose deletion includes a copy of NCF1 [162,165]. The entire WBSCR is conserved in mice on chromosome band 5G2, in reverse orientation with respect to the centromere [166]. Two mouse strains were generated, each carrying half of the WBSCR deletion. According to their location with respect to the centromere, the two half deletions were named proximal deletion (PD, Gtf2i to Limk1) and distal deletion (DD, Limk1 to Trim50, including Eln) [167]. DD mice presented with generalized arteriopathy, increased blood pressure, increased vessel stiffness, and cardiac hypertrophy [165,168].
As in humans, this cardiovascular phenotype has been associated with elevated Ang II, increased oxidative stress markers, and increased Ncf1 expression. Treatments aimed at reducing NOX activity, either by decreasing Ncf1 gene dosage or pharmacologically with apocynin or losartan, improved hormonal and biochemical parameters in DD mice, resulting in normalized blood pressure and improved cardiovascular histology [165]. A complete deletion (CD) model recapitulates the exact deletion observed in humans, in position and gene dosage [169]. The cardiovascular phenotype of CD mice is milder than that of DD mice [168,169], which suggests a modifying effect of gene(s) within or near the PD. Reduced expression of Ncf1 was observed in affected tissues of CD mice [169,170]. Therefore, as in humans, Ncf1 is likely to have an impact on blood pressure in this model. The cardiac hypertrophy present in CD mice was associated with increased levels of oxidative stress in the heart due to dysfunction of the NRF2 pathway. Chronic administration of the antioxidant epigallocatechin-3-gallate (EGCG) rescues the hypertrophic cardiomyopathy and restores nuclear levels of NRF2, in correlation with normalization of the mRNA expression of target genes [170]. The mechanism by which ROS formation is augmented in the hypertrophic heart is currently unknown. Besides NADPH oxidases, there are several potential sources of superoxide anion formation, including uncoupled NOS and mitochondria. In this regard, ascending aortas from CD mice show luminal stenosis and compromised contractile responses to α1-adrenoceptor activation, associated with increased NO signaling. The increased nNOS signaling may act as a physiological response against the detrimental effects of stenosis [171]. Recent studies also implicate mitochondrial dysfunction in WBS pathogenesis.
In WBS-derived primary fibroblasts, decreased basal respiration and maximal respiratory capacity were found, as well as increased ROS generation and decreased ATP synthesis [172]. This mitochondrial dysfunction could be due to the loss of DNAJC30, a gene included in the WBSCR; DNAJC30 knock-out mice showed reduced ATP levels as well as alterations in mitochondrial function.

Cutis Laxa

Cutis laxa (CL) is a collection of disorders that are typified by loose and/or wrinkled skin that leads to a prematurely aged appearance. Many CL-related genes have been identified to date, such as (1) genes involved in elastic fiber biogenesis (elastin, fibulin-4, fibulin-5, and latent TGFβ-binding protein 4), (2) genes required for intracellular protein trafficking (ATP7A, ATP6V0A2, and RIN2), and (3) genes required for cellular metabolism (PYCR1, ALDH18A1, and SLC2A10) [173]. Only mutations in PYCR1 (ARCL2B) and SLC2A10 (ATS, see below) have been related to oxidative balance. The loss of PYCR1 causes increased sensitivity to oxidative stress, reflected by collapse of the filamentous mitochondrial network, decreased mitochondrial membrane potential, and a five-fold increase in cell death [174]. SLC2A10, which encodes GLUT10, was shown to transport dehydroascorbate (oxidized vitamin C) into mitochondria to limit the production of ROS [175]. Slc2a10 knock-down in zebrafish produces disorganization of the vasculature, a wavy notochord, and cardiac edema, as well as mitochondrial dysfunction and reduced TGF-β signaling [176]. Thus, PYCR1 and SLC2A10 would be required to maintain mitochondrial redox balance.
Fibrillins and Fibrillin-Associated Proteins

Mutations in fibrillins (fibrillin-1 or fibrillin-2) lead to heritable connective tissue disorders known as fibrillinopathies, such as Marfan syndrome (MFS), ectopia lentis (EL), Weill-Marchesani syndrome (WMS), MASS syndrome (Mitral valve prolapse, Aortic root diameter at upper limits of normal for body size, Stretch marks of the skin, and Skeletal conditions similar to Marfan syndrome), Shprintzen-Goldberg syndrome (SGS), and acromicric (AD) and geleophysic (GD) dysplasias. However, the molecular mechanisms that lead to their pathogenesis are less well known, including the impact of ROS and redox stress.

Marfan Syndrome

Marfan syndrome (MFS) is an autosomal dominant disease with a prevalence of 1:5000, without gender or ethnic predisposition, affecting multiple organs and systems including the cardiovascular, ocular, and skeletal ones. The most severe complications affect the cardiovascular system, including aortic root and ascending aorta aneurysms and dissections, and mitral valve regurgitation and prolapse. The lifespan of undiagnosed patients is around 40 years, but this age is rather variable depending on the mutation occurring in the fibrillin-1 gene (FBN1) and on other unknown genetic and epigenetic determinants. The major ocular injuries are ectopia lentis, i.e., the dislocation of the crystalline lens, and myopia. The skeletal characteristics are rather evident and are reflected in tall stature with disproportionately long limbs (dolichostenomelia), due to overgrowth of the long bones, and in long, thin fingers (arachnodactyly). Additionally, patients can show pectus deformities (pectus excavatum or pectus carinatum), pes planus, and palate alterations [177,178]. Mutations in the FBN1 gene are the cause of MFS, and up to now almost 3000 mutations have been reported. Despite this large number, there is no correlation between the location or type of mutation and the resulting clinical phenotype.
Depending on the FBN1 mutation, the resulting mutant fibrillin-1 will exert a dominant-negative or haploinsufficiency effect in the disease; the impact of either mechanism on disease progression is poorly understood, but it is potentially relevant to the efficacy of current pharmacological treatments [179]. As indicated above, the fatal hallmark of MFS is aortic aneurysm, which usually ends with the dissection and rupture of the aorta. Whereas TGF-β has been postulated as an essential determinant in the pathogenesis of the aneurysm, its role has recently been questioned [180][181][182]. Nonetheless, it is now becoming clear that other molecular determinants significantly contribute to the aneurysm disorder, such as overproduction of ROS and the subsequent oxidative stress-associated damage to constituents of the aortic wall (tunica intima, media, and adventitia). Mice harboring mutations of Fbn1 or Fbn2 have provided significant insights into the understanding of microfibril-associated physiopathology. Nowadays there are several mouse models carrying mutant forms of Fbn1 [183,184], which lead to MFS (with several degrees of pathology). Through their use, it has been reported that (1) fibrillins-1 and -2 form copolymers and fibrillin-1 is mandatory for the postnatal maturation and mechanics of the aortic wall and (2) such copolymers regulate the availability of family members of TGF-β and BMP and other differentiation factors [185]. In human abdominal aortic aneurysms (AAA), dysregulated inflammation induction, MMPs' activity, SMCs' apoptosis and/or phenotypic switching, and ECM remodeling contribute to a variable extent to disease progression [186]. Nonetheless, the role of oxidative stress in AAA and in thoracic aortic aneurysm and dissection (TAAD) is less known. However, there is increasing evidence of its impact on the pathogenesis or progression of these conditions. Therefore, aortic aneurysms of genetic origin are also probably affected by oxidative stress.
TAAD of genetic origin involves different genes, which can be subdivided into three groups according to the (sub)cellular processes in which their encoded proteins are involved [187]: (1) ECM homeostasis (COL1A1, COL3A1, COL5A1, LOX, MFAP5, and PLOD1), (2) TGF-β signaling (TGFB2, TGFB3, TGFBR1, TGFBR2, SMAD2, SMAD3, and SKI), and (3) the SMC contractile apparatus (ACTA2, MYH11, MYLK, PRKG1, and FOXE3). To date, only LOX, ACTA2, MYH11, and PRKG1 have been shown to be associated with oxidative stress. LOX encodes lysyl oxidases (LOXes) and is the only TAAD-related gene acting at the ECM that has been clearly linked to oxidative stress so far. LOXes are a group of ECM enzymes that initiate the formation of covalent cross-linkages between collagen and elastin, generating H2O2 as a by-product [188]. Experiments in Lox-deficient mice have shown that LOX-mediated cross-linking is essential for the maturation of the ECM, providing tensile strength [189,190]. Elevated LOX expression levels in a haploinsufficient mouse MFS model (Fbn1 C1039G/+) correlated with the prevention of larger dilation of the aneurysm. Administration of LOX inhibitors blocked collagen accumulation and aggravated elastic fiber impairment, which initiated rapid progression of aneurysm dilatation [191]. Interestingly, LOX has also been identified as a novel vascular ROS source in hypertension: H2O2 produced as a consequence of LOX-induced cross-linking contributes to the pathogenesis of the disease [192]. However, it is unclear why LOX seems to play a protective role in TAAD development yet causes oxidative stress in hypertension, especially since the clinical manifestations of both diseases clearly differ: MFS patients are usually normotensive or even slightly hypotensive, whereas hypertension is characterized by high blood pressure levels, particularly diastolic blood pressure [193].
Other TAAD-associated genes regulate TGF-β signaling, a crucial pathway for embryonic development, cell differentiation and proliferation, apoptosis, and ECM production and (re)modeling [194]. To date, deleterious variants in none of the TGF-β signaling-associated TAAD genes are known to cause excessive ROS production. Nevertheless, it is well established that TGF-β signaling indirectly contributes to oxidative stress by stimulating ROS production and/or suppressing antioxidant systems in fibrosis, tumorigenesis, and cerebral ischemia [195,196]. The contribution of TGF-β-mediated oxidative stress has been demonstrated in MFS; this effect occurs through indirect regulation of the expression of the NADPH oxidase NOX4 in MFS patients and mice [197] (see below). Under normal circumstances, differentiated SMCs express contractile-associated markers such as smooth muscle actin alpha 2 (ACTA2, encoding α-SMA) and smooth muscle myosin heavy chain 11 (MYH11) [198,199]. Mutations in both of these genes are linked to TAA [187]. It is well known that vascular injuries are characterized by excessive production of ROS that, among other stimuli, can modulate SMC function and plasticity [200]. In such a pro-oxidant environment, contractile SMCs undergo a phenotypic switch toward a more synthetic, fibroblast-like cell. This state is characterized by decreased expression of contractile-associated markers (e.g., α-SMA) and increased proliferation, migration, and ECM synthesis. It has been established that TGF-β signaling plays a dual role in SMC phenotypic switching in MFS [201]. However, the molecular mechanisms underlying oxidative stress-mediated SMC phenotype switching in TAA and MFS have not been elucidated. Oxidative stress can be both a cause and a consequence of loss of α-SMA. In vitro and ex vivo studies in human and mouse aortic tissue correlated excessive ROS production with increased expression of connective tissue growth factor (CTGF).
Thus, oxidative stress regulates the SMC phenotype via CTGF [202]. NOX4 overexpression augments H2O2 levels, an effect that regulates both the differentiation of stem cells into VSMCs and the phenotypic changes between contractile and synthetic states [203]. ROS also switch VSMCs from a quiescent, physiologically contractile phenotype to a proliferative phenotype, which facilitates VSMCs' migration, proliferation, and modification of the surrounding extracellular matrix [204]. VSMCs from p22phox-overexpressing mice exhibit increased H2O2 production and increased expression of synthetic phenotypic markers, concomitantly with decreased contractile markers [205]. H2O2 also induces miR-145 expression in VSMCs to promote a contractile differentiation state [206]. Elevated ROS levels and NOX4 expression have been found in isolated VSMCs and aortic tissue derived from Acta2−/− mice [207]. The loss of α-SMA favors the synthetic state, and ROS accumulation promotes NF-κB signaling, leading to increased expression of AngII receptor type 1a (AGTR1a) [208]. Both TGF-β and AngII signaling phosphorylate Smad2 and Erk1/2 to initiate aneurysm formation. Consistently, losartan, an angiotensin receptor 1 inhibitor, prevented aortic aneurysm in MFS mice [209]. Of note, losartan, together with atenolol, is a pharmacological strategy given to MFS patients despite its demonstrated low efficiency in ameliorating aortic aneurysm [210,211]. A recent study in MFS mice has identified α-SMA as a possible redox stress target [197]. α-SMA can undergo redox modifications (nitration and/or carbonylation) leading to impaired protein function, and can thus contribute to aneurysm formation and/or development. Therefore, regardless of whether α-SMA is mutated or not, oxidative stress seems to have a significant impact on SMC phenotype and function and, thus, on aneurysm formation and/or progression. More recently, gain-of-function mutations in PRKG1 have been associated with oxidative stress.
PRKG1 encodes cGMP-dependent protein kinase 1, an essential mediator of VSMC tone through NO/cGMP signaling. Basal protein kinase G (PKG) activity was significantly increased in mice carrying the PRKG1 mutation, which leads to oxidative stress, increased VSMC apoptosis, and elastic fiber breaks [212]. Initial evidence for the contribution of oxidative stress in MFS came from a study examining endothelial function in MFS mice [213]. The endothelium-dependent relaxation of TAAD segments of Fbn1 C1039G/+ mice was severely affected because of the downregulation of eNOS/AKT signaling-induced NO. Subsequent preincubation of MFS aortic tissue with several ROS inhibitors improved acetylcholine (ACh)-induced aortic relaxation. Simultaneously, protein expression levels of the ROS-producing enzymes XO, NOX, and iNOS increased, with a concomitant reduction of SOD [214]. In another study, endothelial dysfunction was prevented in Nox4-deficient MFS mice [197], implicating, for the first time, NADPH oxidases in the progression of the MFS aortic aneurysm. In this study, MFS aortic tissue and cultured SMCs derived from MFS patients showed NOX4 overexpression. In a newly generated MFS mouse model lacking Nox4 gene expression, the integrity of elastic fibers was preserved and aortic aneurysm progression was significantly reduced. Remarkably, this finding was only significant in 9-month-old mice but not in younger mice, suggesting that NOX4 adversely influences aneurysm progression in later stages of the disease [197]. A similar role of NOX4 has been reported in cerebral arteries and aorta of Marfan mice and patients, respectively [215,216]. Altogether, an imbalance between ROS-producing proteins, including NOX4, and ROS-scavenging proteins actively impairs vasomotor function in MFS. Recently, besides eNOS, iNOS has been implicated in MFS aneurysm formation [217].
The authors reported increased iNOS levels in MFS mouse and human aortic tissue, and administration of an iNOS inhibitor quickly normalized aortic size. Ex vivo experiments using Fbn1 C1039G/+ aortic tissue samples showed that imbalanced production of COX-derived prostanoids, especially by COX-2, also contributes to vasomotor dysfunction in MFS [218]. Moreover, the administration of a nonselective COX inhibitor, indomethacin, to the hypomorphic MFS mouse model (Fbn1 mgR/mgR) efficiently attenuated elastin degeneration, inhibited macrophage infiltration, and reduced MMP-2 and MMP-9 overexpression [219]. Interestingly, the contribution of COX-2 to MFS (Fbn1 C1039G/+)-induced aortic dysfunction was sex dependent, since aortic anomalies of this enzyme were only detected in males [220]. These findings show that COX-2-derived prostanoids influence vasomotor aortic function in MFS. The role of oxidative stress in the pathophysiology of vascular alterations in MFS is becoming clearer. Pioneering studies showed that the FBN1 mutation in MFS mice (Fbn1 mgR/mgR) was related to increased ROS production together with increased TGF-β and p38 MAPK signaling [221]. Shortly afterwards, elevated oxidative stress levels in plasma and aortic homogenates (pooled ascending aorta and aortic arch) were reported in Fbn1 C1039G/+ mice [214] and MFS patients [222]. More recently, a study demonstrated that ROS levels are exclusively increased in the dilated segments of the aorta: ROS enhancement was only present in the ascending aorta of Fbn1 C1039G/+ mutants and not in the descending arm [223]. These results agree with the different impact that MFS has on the aortic reactivity of Fbn1 C1039G/+ mice, in which it induces either increased or decreased α1-adrenergic contractions in the ascending and descending thoracic aorta, respectively [197]. Active phosphorylated forms of SMAD2 and Erk1/2 only increased in the affected segments [180,224].
These results reinforce the postulate that an interplay between TGF-β and ROS contributes to TAAD development. However, we must bear in mind that different mouse models of MFS might provide conflicting results. This is the case for the mg∆loxPneo model, in which Fbn1 exons 19-24 were replaced by a neomycin-resistance expression cassette [225]. The authors reported that, whereas ROS were enhanced in later stages of aortic dilation, ROS reduction with lipoic acid did not prevent aortic dilation and elastic fiber injuries; oxidative stress was therefore uncoupled from aortic wall injuries [226]. In Fbn1 C1039G/+ mice, ROS inhibition with apocynin (an unspecific NADPH oxidase inhibitor) attenuated aortic aneurysm progression and AngII-dependent enhanced ROS production in a TGF-β-dependent manner [223]. Oxidative stress arises not only from sustained increases in ROS production over time, but also from the reduced activity of scavengers, of which glutathione is the main system. In aortic tissue from MFS patients, reduced activity of glutathione-S-transferase and glutathione peroxidase has been reported, in conjunction with a decrease of reduced glutathione [227]. Therefore, the depletion of scavengers, with or without increases in ROS generation, could aggravate the aortic aneurysm in MFS. Mitochondrial-derived increased ROS production has recently been associated with cell senescence: aortic tissue and SMCs isolated from MFS patients show accelerated senescence, an effect at least partly mediated by ROS-induced activation of NF-κB signaling [228]. Other important sources of ROS in the cardiovascular system are XO, NOS, and COXes. XO links purine metabolism to redox signaling and stress; UA and superoxide anion are the two main products of XO activity. Elevated levels of serum UA in humans are often associated with an increased risk of cardiovascular disease [229].
It is important to bear in mind that UA at physiological serum concentrations acts as a powerful antioxidant in the blood by scavenging ROS [61,230]; it accounts for more than 50% of the total antioxidant capacity of biological fluids in humans [231]. UA has been found in the wall of human aortic aneurysms and atherosclerotic arteries [232]. These findings suggest that UA might aggravate or attenuate the formation and/or progression of aortic aneurysms, including those present in MFS [230]. Nevertheless, further research is needed to elucidate the exact role of UA in MFS, as it is currently unknown whether it acts as an antioxidant or a pro-oxidant. Numerous therapeutic strategies are being investigated to fight TAAD [233]. Therapies based on antioxidants have shown only relative success, most probably due to the complexity of the multiple pathways that tightly regulate the balance between ROS production and scavenger systems. In any case, their use deserves further attention considering recent results. This is the case of cobinamide, an analog of the free radical-neutralizing vitamin B12, which prevented aortic wall degeneration in mice heterozygous for a PRKG1 mutation that leads to overactivation of protein kinase G1. The antioxidant N-acetylcysteine also ameliorated the aortopathy in these mutant mice [212]. Another promising example comes from the use of resveratrol in MFS [234]. Resveratrol is a potent polyphenol present in high concentrations in plants, nuts, and the skin of grapes. Treatment of MFS mice (Fbn1 C1039G/+) with resveratrol reduced NOX4 expression and MMP2 activity and changed the eNOS/iNOS and miR21/miR29 balances, which improved SMC survival. Moreover, resveratrol improved cardiomyocyte homeostasis, probably by activating mitochondrial sirtuin 1 (SIRT1) and increasing SOD expression [235].
Weill-Marchesani Syndrome

Fibrillin-1 is composed of individual domains, including multiple tandem arrays of epidermal growth factor-like (EGF-like) domains and cysteine-containing TB domains. An RGD (Arg-Gly-Asp) motif that binds integrins is present in the TB4 domain [236,237], and the adjacent TB5 domain binds heparin [238]. Different mutations in the TB5 domain cause autosomal dominant Weill-Marchesani syndrome (WMS) or acromicric (AD) and geleophysic (GD) dysplasias [239]. In contrast to MFS, these disorders cause short stature, thickened skin, joint defects, and ocular problems. Unfortunately, very little is known about the contribution of ROS and oxidative stress to these diseases. Nevertheless, in a small cohort of WMS patients, plasma levels of lipid peroxide (LPO), TNF-α, and NO were elevated, with a concomitant reduction of antioxidant capabilities. This suggests that redox dysfunctions contribute to the pathogenesis of WMS and points to antioxidants and free radical scavengers as a potential therapy to ameliorate the disease [240].

Systemic Sclerosis

Tissue fibrosis, the result of an uncontrolled wound-healing process, is the hallmark of systemic sclerosis (SSc). SSc, or scleroderma, is a chronic autoimmune disease characterized by tissue fibrosis and immune abnormalities, and is the most common form of acquired scleroderma [241]. It is characterized by progressive thickening and hardening of the skin and multiple internal organs. Inflammatory infiltrates and fibrosis of blood vessels in the dermis precede human SSc. Tsk/+ (tight-skin) mice have evidenced dysfunctions of fibrillin-1 microfibrils in the dermis in SSc, particularly fibrillin-1 aggregates and fragmented elastic fibers [242]. It is well known that oxidative stress linked to vascular injury plays an important role in the pathogenesis of SSc [243][244][245][246][247]. Circulating levels of ROS and related markers correlate with SSc vasculopathy, fibrosis onset, and autoantibody production [248].
The excess of ROS stimulates endothelial injury and other vascular alterations, which activate TGF-β-mediated EMT, a process that converts endothelial cells into myofibroblasts [249]. ROS cause chemical modifications in some lipids, proteins, and nucleic acids [250], which generate new epitopes that induce strong autoimmune responses. This has been shown in SSc, with ROS-associated changes in the gene expression pattern and stimulated release of IL-6, IL-8, and IL-17, among others [251,252]. In this respect, diverse therapeutic strategies to interfere with ILs' expression and/or release have been successfully addressed with drugs such as the phosphodiesterase type 5 inhibitor sildenafil [253], the natural flavonoid kaempferol [254], EGCG [255], and the anti-IL-6 receptor antibody tocilizumab [256], among other treatments [257]. Many of these treatments also have an anti-oxidative stress effect that reduces or prevents the abnormal accumulation of ROS characteristic of this disease [258]. Other antioxidants, such as hydrogen sulfide, have been reported to interfere with the onset and progression of this disease [259]. NRF2, a transcription factor that induces the transcription of antioxidant genes (for example, those of the GSH system), is downregulated in cultured skin fibroblasts derived from SSc patients. This observation was confirmed in both skin and lungs of SSc mice. Treatment with the NRF2 agonist dimethyl fumarate (DMF) reduced fibrosis and immune overactivation [260]. One of the main sources of ROS in SSc are the NADPH oxidases [261,262]. Increased NOX2 and NOX4 have been reported in SSc fibroblasts, neutrophils, monocytes, and T lymphocytes [263][264][265]. In addition, NOX4 is highly expressed in the skin of SSc patients and in cultured SSc fibroblasts. This overexpression is triggered by TGF-β via PKCδ and Smad2/3 [266].
Loeys-Dietz Syndrome

Loeys-Dietz syndrome (LDS) is not a disorder caused by mutations in CT structural components, but CT is severely affected because mutations in the TGF-β receptors or SMAD3 have a strong impact on the homeostasis of this tissue [267]. Despite the different disease etiologies, LDS clinical manifestations in the vascular system are related to those of MFS because of TGF-β signaling pathway dysregulation. LDS results from heterozygous substitutions in crucial residues of the kinase domains of the type I or II TGF-β receptor or of Smad3, which should theoretically abrogate TGF-β signaling; paradoxically, such signaling remains hyperactivated. Much less is known about LDS in comparison with other syndromes, and even less regarding the contribution of ROS and oxidative stress to the pathology. Nevertheless, a couple of recent studies provide evidence of their involvement. In a study similar to one previously carried out in MFS patients, the content of enzymatic and nonenzymatic systems involved in redox stress was measured in plasma and TAA from LDS patients [268]. The authors observed a significant reduction of anti-oxidative stress mechanisms (GSH, antioxidant capacity, glutathione peroxidase, glutathione-S-transferase, catalase, and thioredoxin reductase) accompanied by an increase in both SOD and XO activities. Moreover, NRF2 expression decreased, which explains the reduced expression or activity of antioxidant enzymes. In addition, reduced mitochondrial respiration was observed in cultured SMCs from a mouse model of LDS (TGFBR1 M318R/+). Cultured human fibroblasts from LDS and Marfan patients also showed lower oxygen consumption [269].

Arterial Tortuosity Syndrome

Arterial Tortuosity Syndrome (ATS) is a heritable disease characterized by twisting and lengthening of the major arteries, hypermobility of the joints, and laxity of the skin [270]. ATS is caused by mutations in SLC2A10, which encodes Glucose Transporter 10 (GLUT10) [271].
In ATS, loss of GLUT10 results in defective collagen and/or elastin. Two models explain the onset and development of the disease: (1) the loss of GLUT10 induces a glucose-dependent increase in TGF-β that stimulates cell proliferation in the vessel wall, and (2) GLUT10 transports ascorbate (vitamin C), an essential cofactor for collagen and elastin hydroxylases, into the secretory pathway [272]. Considering the essential connection between ascorbate and the redox state of cells (mainly in fibroblasts), the latter hypothesis acquires more relevance. GLUT10 is highly expressed in VSMCs and adipocytes, where it facilitates the transport of the oxidized form of vitamin C (dehydroascorbic acid, DHA) into mitochondria, which protects against oxidative stress. Mutant mice with loss of GLUT10 function showed much higher mitochondria-generated ROS levels than wild-type mice [175]. Transcriptomic analysis of cultured skin fibroblasts from ATS patients showed an increase in lipid peroxidation sustained by PPARγ function. The rescue of normal GLUT10 expression normalized redox homeostasis, PPARγ activity, and TGF-β signaling, accompanied by partial ECM reorganization [273]. These works highlight the relevance of vitamin C and ROS in arterial abnormalities.

Proteoglycans and Glycosaminoglycans

Hyaluronic acid (HA) is a widely distributed nonsulfated GAG and a major component of the cartilage extracellular matrix and synovial fluid. An oxidative stress environment of elevated OH• and ONOO•−, with increased lipid peroxidation, is associated with HA fragmentation, an effect that contributes to the inflammatory response in chondrocytes [274]. In addition, HA can be cleaved by hypochlorous acid in autoimmune diseases [275]. HA can also have anti-inflammatory and antioxidative actions in chondrocytes [276] by a mechanism involving activation of the AKT-NRF2 axis [277]. Overall, the role of HA during oxidative and inflammatory damage seems to depend on the size of the HA molecule.
High-molecular-weight HA provides tissue integrity, whereas low-molecular-weight HA mediates inflammatory responses [278]. However, ROS are also involved in genetic pathologies that affect the biological cycle of proteoglycans (see below).

Mucopolysaccharidoses

Alterations in GAG degradation lead to intra-lysosomal accumulation of nondegraded products, which causes a group of lysosomal storage disorders called mucopolysaccharidoses (MPS). The degradation of GAGs requires 10 different enzymes that have been widely studied: five sulfatases, four glycosidases, and one nonhydrolytic transferase. Deficiencies have been found in each of them, resulting in seven MPS that share a series of clinical characteristics, though to varying degrees [279,280]. Typical manifestations include skeletal and joint deformities, dysmorphic facial characteristics, dwarfism, and, depending on type and severity, intellectual disabilities, spinal cord compression, increased intracranial pressure, ocular and hearing impairment, respiratory difficulties, gastrointestinal pathology, and umbilical or inguinal hernias [281][282][283][284]. GAGs are normal components of large vessels and cardiac valves [285][286][287]. Deposition of GAGs occurs in the myocardium, the cardiac valves, and the coronary arteries in all types of MPS, resulting in diffuse narrowing of the epicardial coronary arteries, cardiac valve dysfunction, ventricular hypertrophy, and cardiac failure [288]. The most prominent cardiac manifestation, present in 60-90% of patients, is progressive cardiac valve pathology. Although it is most prominent in MPS I and MPS II, coronary artery narrowing and/or occlusion has been described in individuals with all types of MPS. Large vessels in patients may show increased wall thickness and may be either narrowed or dilated. Systemic hypertension due to arterial narrowing is common among individuals with MPS I and MPS II.
In addition, dilation of the ascending aorta and markedly reduced aortic elasticity have been reported in MPS I. This could be attributed to the downstream effects of GAGs on the assembly of tropoelastin, resulting in elastin that is decreased in content and abnormal in structure [288]. Available evidence on lysosomal diseases shows increased ROS production, dysfunctional mitochondria, aberrant inflammatory and apoptotic signaling, and perturbed calcium homeostasis, among other biochemical alterations [289]. An abnormal accumulation of nondegraded GAGs within the lysosomes leads to an increase in ROS. Given the acidic interior of lysosomes and the abundance of the reducing amino acid cysteine, lysosomes are a perfect environment to foster Fenton-type reactions, making them unusually sensitive to oxidative stress. The disruption of lysosomes can cause a release of hydrolases, undegraded metabolites, and iron into the cytosol, causing cell apoptosis or necrosis and, finally, tissue injury. Additionally, in a loop process, the release of lysosomal content induces secondary ROS production in the cytoplasm, which aggravates the oxidative stress [290][291][292]. The involvement of ROS has been reported in MPS pathology. MPS I patients show high lipid peroxidation levels [293]. Furthermore, MPS II patients show global impairment in redox status, evidenced by an increase in lipid and protein oxidation, as well as alterations in SOD and catalase activities [294]. Accumulation of oxidative products is frequent in MPS IIIB [295]. A reduction of antioxidant defense systems, together with oxidation-induced DNA, lipid, and protein damage, has been described in MPS IVA disease [296]. Cells exposed to oxidative stress enhance their antioxidant defenses in an attempt to reestablish homeostasis.
MPS I mice showed increased carbonyl groups and elevated SOD and CAT activities together with a decrease in thiobarbituric acid-reactive substances, which suggests an exposure to oxidative stress in this model [297]. An increase in oxidative damage-related hallmarks coincides with GAG accumulation as a very early event in MPS II mouse pathogenesis and precedes glial degeneration, which finally leads to neuronal death. Additionally, the anomalous mitochondrial pattern observed in astrocytes supports the presence of oxidative damage in MPS II progression [298]. An upregulation of NADPH oxidase and pro-inflammatory cytokines due to microglial activation has been reported in MPS IIIB knock-out mice [299]. Intense production of superoxide anion, whose main sources during inflammatory conditions are NOX1 and NOX2, enhances the production of other ROS, such as H2O2 and ONOO−, which reinforces the oxidant environment [300]. As a result, an increase of ROS and NO occurs due to microglial NOX and iNOS activation [301,302]. In MPS IIIA mice, a potential link between inflammation and oxidative stress has also been reported [303]. In animal models of MPS VI and VII, an inflammatory process caused by intralysosomal accumulation of GAGs has been postulated, which could trigger the release of cytokines, chemokines, proteases, and NO, leading to apoptosis and connective tissue destruction [304]. Pro-inflammatory cytokines can induce the production of oxidants, prostaglandins, and mitochondrial ROS by macrophages, which contributes to the damage found in MPS patients [305]. Enzyme replacement therapy (ERT) with recombinant human enzymes is a treatment that intends to deliver sufficient enzyme activity to reduce and prevent the accumulation of undegraded substrates. MPS IVA patients on ERT still presented oxidative and inflammatory imbalance even after eight months of treatment [296].
In MPS II patients, a protective effect against oxidative stress was observed during the first six months of ERT treatment [294]. Nevertheless, even during long-term ERT, some degree of inflammatory, oxidative, and nitrative imbalance occurs in these patients. These alterations seem to be induced by GAG accumulation and pro-inflammatory cytokines. Notwithstanding, ERT is known to reduce GAG levels and was efficient at improving several biomarkers of oxidative stress [308]. After six months of gene therapy in a mouse model of MPS IIIB, there was a significant reduction in the expression of Ccl3, which plays an important role in the macrophage-dependent inflammatory response. There were also reductions in the inflammatory caspase Casp4 and in Cybb (gp91phox), a component of the phagocytic NADPH oxidase enzyme complex [309]. Many studies suggest that, as a complement to ERT, antioxidant drugs could be candidates to delay disease onset and progression. Treatment of neural stem cell cultures from an MPS II mouse model with vitamin E triggered full rescue of the phenotype of both mutant glial and neuronal cells [298]. In fibroblasts of MPS III patients, the accumulation of GAGs was partially restored by supplementation with CoQ10 or an antioxidant cocktail (α-tocopherol, N-acetylcysteine, and α-lipoic acid). The efficacy varied depending on the characteristics of each patient, but the results were encouraging [310].

Concluding Remarks

Redox reactions are necessary for the normal physiology of cells, tissues, and organs. Redox constituents and their products (radical species) have important autocrine and, probably, paracrine signaling functions. Radical species generated at the cellular level have a great impact on cellular components (lipids, proteins, and nucleic acids), whose functions can significantly change in the short or long term depending on how long the radicals are present in the cell environment.
It is evident that when radicals are constantly produced and exceed the buffering capacity of endogenous antioxidants, the physiological role of ROS becomes detrimental, which leads to oxidative stress. In this review, we examined the impact of ROS and oxidative stress in genetic diseases of CT. ROS and oxidative stress are involved in the CT pathology of different genetic diseases at the molecular and cellular levels. They have gained relevance in ECM organization and dynamics because ECM (re)modeling is always a determinant of normal tissue homeostasis and of the associated pathologies. Due to increasing awareness of this factor, new therapeutic antioxidant approaches are being applied to many of the reported diseases to halt or mitigate clinical symptoms and/or disease progression. However, this is not easy because oxidative stress can be generated by the dysregulated production of ROS or by dysfunctions of the scavenger systems. In the end, the result might be the same (i.e., oxidative stress), but identification of the precise mechanism by which oxidative stress is generated is essential for a successful therapeutic approach. Further work is necessary to understand the real impact of oxidative stress on the generation and/or progression of genetic diseases (in this case, those affecting connective tissue). Pharmacological interference with oxidative stress in genetic diseases that affect CT formation deserves more attention.

Author Contributions: Conceptualization, G.E.; writing-review and editing, G.E., V.C. and F.J.-A. All authors have read and agreed to the published version of the manuscript.

Funding: The work in the authors' labs was supported by the Spanish Ministry of Science and Innovation (MICINN) grants SAF2016-78508-R to V.C. and SAF2017-83039-R to G.E., the Jerome-Lejeune Foundation and Autour des Williams Association to V.C., and the National Marfan Foundation (NMF) to G.E.
Factors and Recommendations to Support Students' Enjoyment of Online Learning With Fun: A Mixed Method Study During COVID-19

Understanding the components that influence students' enjoyment of distance higher education is increasingly important for enhancing academic performance and retention. Although there is a growing body of research about students' engagement with online learning, a research gap exists concerning whether fun affects students' enjoyment. A contributing factor to this situation is that the meaning of fun in learning is unclear, and its possible role is controversial. This research is original in examining students' views about fun and online learning, and the components and connections that influence them. This study investigated the beliefs and attitudes of a sample of 551 distance education students, including pre-service and in-service teachers, consultants and education professionals, using a mixed-method approach. Quantitative and qualitative data were generated through a self-reflective instrument during the COVID-19 pandemic. The findings revealed that 88.77% of participants valued fun in online learning, linking it to well-being, motivation and performance. However, 16.66% mentioned that fun within online learning could take the focus off their studies and result in distraction or loss of time. Principal component analysis revealed three groups of students who found (1) fun relevant in socio-constructivist learning, (2) no fun in traditional transmissive learning, and (3) disturbing fun in constructivist learning. This study also provides key recommendations, extracted from participants' views and supported by consensual review, for course teams, teaching staff and students to enhance online learning experiences with enjoyment and fun.

INTRODUCTION

Online learning has been considered vital in the 21st century for providing flexible education to students, as well as for addressing the gap between demand for higher education and supply.
Governments have advocated increasing rates of completion of secondary and higher education in the face of rapid population growth. However, they face financial pressure to support these larger numbers directly through additional infrastructure, in addition to scholarships and student loans (Cooperman, 2014:1). In recent years, there has been an increasing interest in distance online learning, not only to educate students who work but also those who live too remotely or cannot access traditional campus universities for other reasons. However, the literature shows that online distance education has higher dropout rates than traditional universities (Xavier and Meneses, 2020). Studies also suggest that students' level of satisfaction with their online learning and their own academic performance correlate significantly with their level of persistence toward completion (Gortan and Jereb, 2007; Higher Education Academy (HEA), 2015). Understanding the components that influence students' enjoyment in distance higher education is fundamental to promoting student retention and success (Higher Education Academy (HEA), 2015) during and after the COVID-19 pandemic. There is a growing body of research about students' engagement in virtual learning environments (Arnone et al., 2011). However, there are key issues that, whilst extensively researched in traditional teaching, remain relatively absent from research into distance education. For example, a long-established body of research demonstrates a link between students' epistemological beliefs and their study, engagement, and outcomes (Rodriguez and Cano, 2007; Richardson, 2013). The types of epistemological beliefs typically examined fall into two broad categories. The first is derived from Schommer's research (Schommer, 1990), in which she elicited dimensions that reflected students' differing beliefs. These included "simple knowledge" (knowledge as isolated facts vs.
knowledge as integrated conceptions) and "innate ability" (the ability to learn is genetically determined vs. the ability to learn is enhanced through experience). The second category of research is more directly aligned with pedagogy. It has positioned epistemological beliefs in relation to traditional or constructivist beliefs. Traditional views of learning see learning occurring via the non-problematic transfer of untransformed knowledge from expert to student (Chan and Elliott, 2004). This contrasts with constructivist beliefs, in which knowledge arises through reasoning, which is facilitated by teaching (Lee et al., 2013). This type of framing can be seen in large-scale international comparative research, such as the Organisation for Economic Co-operation and Development's survey of teachers' epistemological beliefs across 23 countries (Organisation for Economic Co-operation and Development (OECD), 2010, 2013). However, in relation to online and distance higher education, epistemological research is relatively absent (Richardson, 2013; Knight et al., 2017). Given the impact of epistemological beliefs on students' study experiences, there is a need for greater epistemologically focused research in the context of online education. Another underrepresented research area concerns fun in online learning, in particular because the meaning of fun is unclear and controversial. There is no consensus about the value of fun in learning or what a fun learning experience means in higher education (McManus and Furnham, 2010; Lesser et al., 2013; Tews et al., 2015; Whitton and Langan, 2018). Tews et al. (2015) argue that fun is a term used regularly in various contexts, including education, yet there is no clear agreement about its role and relationship with students' learning experience. Congruently, McManus and Furnham (2010) highlight that fun has different meanings for different people and that the literature is limited on what generally comprises fun for learners.
Similarly, Lesser et al. (2013) indicate that views about fun among educators are ambivalent, as fun is perceived as too difficult or time-consuming to implement and as potentially distracting students from serious learning. These three studies indicate that the evidence about fun and learning is circumstantial and subjective, making it difficult for teaching staff to consider fun a compelling component for making their students' experience more impactful. Therefore, further studies with a systematic and rigorous methodological approach would be worthwhile to examine the practical meaning and educational value of fun in distance higher education. To explore this challenge, this paper investigates students' reflective views about fun and online learning and whether fun and enjoyment are interconnected components that enhance enthusiasm to learn and excel in online distance education. This investigation considers a critical question framed by the authors from Whitton and Langan's (2018:11) work: How can we explore the impact of fun in higher education in view of the complexity of factors involved? To explore this question, this work is based on the Responsible Research and Innovation (RRI) approach to understanding what, how and why fun might be a valuable key in education with and for distinctive representatives: learners, educators, researchers, consultants, and policy makers. "For pedagogic innovation to succeed, learners must personally perceive the benefits of learning activities" designed to be fun, and "these gains must be translated into outcomes that are viewed positively within the institution quality monitoring by teaching staff." Whitton and Langan (2018) also explain that there is a negative influence from the competitive job market, which values "serious" performance as the opposite of fun, potentially making course teams less likely to embed playful and fun approaches in the higher education curriculum.
The RRI approach implies that community members and researchers interact together to better align both the process and the outcomes of research with the values, needs and expectations of society (European Commission, 2013; von Schomberg, 2013). The purpose of RRI is to promote greater involvement of societal members with researchers in the research process, to increase knowledge, understanding and decision-making about both societal needs and scientific research, through eight principles: diversity and inclusion; transparency and openness; anticipation and reflexivity; adaptation and responsiveness (RRI-Tools, 2016; European Commission, 2020). These principles were used to adapt, implement and refine a self-reflective instrument about learning and fun. Accordingly, the section "Previous Studies about Fun and Learning" presents views of learning and fun from the literature. The section "Methodology" describes the self-reflective instrument and the methodological approach with which it was integrated. The section "Findings" presents the findings, and the section "Discussion and Final Remarks" closes with the discussion and final remarks.

PREVIOUS STUDIES ABOUT FUN AND LEARNING

Studies that research fun and learning typically focus on types of activity and the extent to which these are seen as enjoyable and indicated as being fun, rather than drilling down to examine or define fun. While fun is consistently recognized as an important part of the lived experience of children, youth and adults, relatively few studies seek a deeper understanding of what the construct of fun means (Kimiecik and Harris, 1996; Harmston, 2005; Garn and Cothran, 2006). This situation is in stark contrast to how fun is generally positioned with regard to the domain of learning and education. There are different views in the literature about fun and learning, in terms of its meanings and effects. Negative perspectives describe fun as the opposite of meaningful "work" and consider it an unnecessary distraction from learning.
Fun is a term whose meaning has changed over time. In the 1900s, it came to indicate an absence of seriousness, work, and labor. "Fun can be seen both as a resistance to the rigid demarcation between work and leisure and also as a means of reproducing that dichotomy" (Blythe and Hassenzahl, 2018, p92). As it took on these meanings, fun became a loaded term that challenges the status quo (Beckman, 2014). It can be positioned as a challenge to the traditional split between fun and learning; welcomed by those who embrace social views of the learning process, but seen as an unnecessary distraction by those who hold a traditional transmission view of how learning takes place. The etymological meaning of fun (fonne and fon from Germanic), which refers to "simple, foolish, silly, unwise" (Etymonline, 2020), still influences the meanings attributed to it by people and researchers nowadays. The argument that fun can have a negative influence on learning was highlighted in newspaper reports of research by the Centre for Education Economics (CEE): "Making lessons fun does not help students to learn, a new report has found. The widely held belief that learners must be happy in order to do well is nothing more than a myth" (Turner, 2018). Likewise, Whitton and Langan note in their analysis of fun in the United Kingdom that many educators believe fun to be unsuitable in the "serious" business of higher education (Whitton and Langan, 2018, p3). They also highlight a need to research whether students believe that there is any place for fun in their university studies. So, for many, fun is seen as having little or no place within learning. Within the context of education, "fun" is often a derogatory term used to refer to a trivial experience (Glaveanu, 2011). Other researchers, however, have identified a more positive relationship between fun and learning for children and adults.
An analysis of outcomes from the United Kingdom's "Excellence and Enjoyment" teaching initiative concluded that "Learning which is enjoyable (fun) and self-motivating is more effective than sterile (boring) solely teacher-directed learning" (Elton-Chalcraft and Mills, 2015, p482; Tews et al., 2015). In the context of informal adult learning, fun has been linked to positive learning outcomes, including job performance and learner engagement (Francis and Kentel, 2008; Fine and Corte, 2017; Tews et al., 2017). This raises the question of why this conflict and controversy might exist. One explanation is that the positive effect is not due to fun being an integral part of the learning process, but rather to physiological effects, such as reducing stress and improving alertness, which enhance "performance" (Bisson and Luckner, 1996). Similarly, Whitton and Langan (2018) describe fun as a "fluid state" (Prouty, 2002) which makes learners feel good (Koster, 2005:40) about engaging with learning. This fluid state allows learners to take healthy risks beyond existing personal boundaries (Ungar, 2007), because learners are attracted to participate in learning activities that they enjoy, where they can "fail forward" and feel safe. In addition, Feldberg (2011:12) indicates that fun has a positive effect on the learning process by creating a state of "relaxed alertness" (Bisson and Luckner, 1996) which enables the suspension of one's social inhibitions and the reduction of stress. The author highlights that fun may contribute to the maintenance of cognitive functioning and emotional growth (Crosnoe et al., 2004, cited by Feldberg). Dismore and Bailey's (2011, p.499) study indicates positive feelings associated with enjoyment, engagement and optimal experience.
The authors describe fun and enjoyment as underpinned by the concept of "flow" (Csikszentmihalyi, 2015), which refers to "an optimum state of inner experience incorporating joy, creativity, total involvement and an exhilarating feeling of transcendence." This optimum state is a key component in leading students to enjoyable accomplishment and optimal learning when their perceived skill and challenge are balanced and suitable. Flow is an important concept for educators to be aware of: anxiety arises when the challenge becomes too high relative to students' skill, and boredom when the challenge becomes too low; both reduce enjoyment and have a negative effect on learning. Fun learning with flow experiences is relevant for learners to grow through positive opportunities in which their skill meets their effort, producing intrinsic rewards (Dismore and Bailey, 2011; Chu et al., 2017; Whitton and Langan, 2018). Literature about the meaning of fun in online learning is very limited. A set of studies about engaging e-learning games highlights that fun and challenge are essential for promoting students' enjoyment and making them want to learn (Fu et al., 2009). An engaging e-learning game facilitates students' flow experiences by increasing their attention, helping them achieve learning goals and fostering enjoyment of their learning experience (Virvou et al., 2005; De Freitas and Oliver, 2006). This study focuses on fun and learning in the context of distance higher education, supported by RRI. To explore what fun is, its meaning and the effects of the phenomenon need to be understood with learners. As a first step, there is a need to identify how the relationship between fun and online learning is conceived by learners based on their own learning experience. A second step is to examine whether this relationship has any connection with their epistemic views.
The aim of this study is to address the following questions:

• What are the relationships between fun and online learning practices identified by students?
• What are the connections between students' epistemic views about online learning and fun?
• What are the recommendations for students, teaching staff and course teams?

METHODOLOGY

This work is part of a research program, OLAF (Online Learning and Fun), led by the Rumpus Research Group. The methodology used in this study adopts the established epistemological questionnaire approach (Feucht et al., 2017) and provides an opportunity to facilitate participants' epistemic reflectivity (Feucht et al., 2017). In this way, the study is underpinned by the concept of reflective practitioners, by which participants "think in action" about principles and practices to share their reflective views (Schon, 2015). This study is based on a mixed-method approach. Quantitative and qualitative data were generated through a self-reflective instrument (Feucht et al., 2017) comprising two parts, both developed in Qualtrics. The first part was a Likert-scale survey with 25 statements about learning and fun. The second part was an open question (see "Instruments"). The approach used for the qualitative analysis was a systematic and novel multi-method procedure that combined word cloud visualization in Qualtrics (Figure 2), an automated thematic analysis map (Figure 3), and sentiment analysis (Figures 4-6) in NVivo 12. This integration of visualizations enabled us to identify seven themes to analyze the value of fun, and 26 themes of relationships between fun and learning. The quantitative analysis was supported by principal component analysis (PCA; see "Relationships Between Fun and Learning Supported by Quantitative Analysis").
This approach enabled us to group our multi-method qualitative analysis, categorized by themes, into three groups (see "Relationships Between Fun and Learning Supported by Quantitative Analysis"), as well as to present our findings (section "Findings") with global recommendations underpinned by students' needs, priorities and expectations, which were revealed in the qualitative data and grouped by the quantitative analysis. This study acknowledges the eight principles (Box 1) of RRI (von Schomberg, 2013; RRI-Tools, 2016) in the context of open educational research (Okada and Sherborne, 2018), by which all participants reflect about practices and beliefs for better alignment between learners' needs and research-based recommendations. The instrument, with a special code to allow the withdrawal of participation without the collection of personal data, was approved by the Ethics Committee and the Student Research Project Panel of the Open University, United Kingdom.

Participants

The OU offers flexible undergraduate and postgraduate courses and qualifications supported by distance and open learning for 174,898 people from the United Kingdom, Europe and worldwide. Approximately 76% of directly registered students work full- or part-time during their studies; 23% of Open University United Kingdom undergraduates live in the 25% most deprived areas; and 34% of new OU undergraduates are under 25, 14% have disabilities and 32% entered with lower qualifications. This study focused on one of the largest introductory modules offered by the Wellbeing, Education and Language Studies (WELS) Faculty of The Open University. Currently this module has more than 4,300 students and is part of various qualifications. Thus, participants were students from all levels and qualification interests and with different occupations, including novices, undergraduates who had just completed secondary education, and pre-service and in-service teachers, as well as professionals interested in Education, Psychology and Social Care.
A balanced and representative sample was constituted by a total of 625 students who participated in this study as volunteers; 551 completed a self-reflective questionnaire to reflect about fun and learning, and 206 provided their reflective views by answering an optional open question. The response rate (40%) for the open views about fun and learning was higher than expected. In terms of students' previous study experience, 48.55% had completed pre-A levels or equivalent (secondary school), 26.81% had already finished other OU course modules (level 1, level 2, and level 3), and 24.64% reported other experiences. In terms of the qualification pathway targeted by students, 28.80% were interested in childhood studies, 34.24% in psychology, 27.17% in primary education, 4.53% in an Open qualification, and 3.44% in other qualifications such as Social Care, while 1.81% did not know.

BOX 1 | RRI in the context of open education (Okada, 2020). [Column headers: Principles, Recruitment, Implementation, Analysis]

Procedures

This study focuses on a 9-month module course with twenty-four weekly units and four assessment activities. The course integrates reading materials, online audio-visual materials, a YouTube channel ("The Student Hub Live") and a radio-style broadcast audio repository. Students also have access to a set of library resources, news and special "quick guides" that provide extra support for developing activities successfully. Students' interaction with peers and communication with tutors typically occur asynchronously in the online discussion forum and synchronously in online tutorials (in Adobe Connect) and face-to-face tutorials organized in specific periods and locations. In addition, the course provides channels in social media (Twitter and Facebook) for students' social engagement.
Course module presentations open 3 weeks prior to the start in order to give students time to engage smoothly in their initial activities, including a series of fun and friendly online workshops to promote interaction.

Recruitment

Students' recruitment occurred in the middle of the online module. It was supported by the course chair and the module course tutors through an invitation shared on the course news page and via a central email sent to all students. Recruitment and data generation occurred during 5 weeks (February-March 2020) and were more effective after an email invitation was sent to all students.

Instruments

The use of self-report questionnaires is well established as a methodology within research examining epistemological beliefs (Feucht et al., 2017). The self-reflective instrument was underpinned by previous work led by the second author (Sheehy et al., 2019b) and adapted to the context of online learning and fun. Box 2 indicates the questionnaire statements. The adapted questionnaire was implemented in Qualtrics with consent forms, study objectives and a novel embedded code to enable students' withdrawal. This is the first study that provides anonymous withdrawal in Qualtrics. It was then tested in two pre-pilots to check its reliability and the embedded code. In the first phase of implementation, the self-reflective instrument was used by online students to reflect about the topic "Fun and Learning" through a series of 21 statements using a Likert scale to indicate the level of agreement. In the second phase, students were invited to complete an optional open-ended question (What is your opinion about fun in online learning?) to provide their reflective views and freely express their feelings on this topic.

BOX 2 | Self-reflective instrument about epistemic views related to online learning and fun. [Columns: Theoretical Principles, Variables, Statements]
Socio-constructivism:
1. SocialActivities — Meaningful learning takes place when individuals are engaged in social activities.
3. SocialProduction — Learning can be defined as the social production of knowledge.
4. TalkProductively — Helping students to talk to one another productively is a good way of teaching.

FINDINGS

Preliminary outcomes of this study (Figure 1) were presented to all participants through an article published in OpenLearn (Okada, 2020) and in a journal paper (Okada and Sheehy, 2020:608). The framework "Butterfly of fun," including four types of fun in online learning, was developed underpinned by Piaget and Inhelder (1969), Vygotsky et al. (1978), Csikszentmihalyi (2020), and Freire (1967, 1984, 1996, 2009), and supported by students' views. Optimal fun is the joy of being fully involved in learning, moving toward full capability and creativity. Individual fun is the happiness of fulfilling accomplishments, supported by clear goals and strategies. Collaborative fun is the happiness of making connections with others, creating social bonding and developing group identity. Emancipatory fun is the joy of being curious, able to search and discover whilst being critically aware (Okada and Sheehy, 2020).

Relationships Between Fun and Online Learning Supported by Qualitative Analysis

This study started with a content analysis in NVivo 12 after importing from Qualtrics a csv file with 206 responses about students' views related to fun and learning (qualitative data). The word cloud visualization in Qualtrics (Figure 2) of students' views indicated the most frequent words: 148 fun, 123 learning, 50 enjoy/enjoyed/enjoyable/enjoyment, 45 students, 40 distance, 31 tutorials, 29 activity, and 26 time.
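The word-frequency counting behind such a word cloud is straightforward to reproduce. Below is a minimal Python sketch of that step, assuming the free-text responses have been exported into a list of strings; the toy responses and the small stopword list are illustrative placeholders, not the study's data or Qualtrics' actual algorithm.

```python
import re
from collections import Counter

def word_frequencies(responses, stopwords=None, top_n=8):
    """Count word occurrences across free-text responses (the step behind a word cloud)."""
    stopwords = stopwords or {"the", "a", "an", "of", "to", "and", "is", "in", "i", "it", "that"}
    counts = Counter()
    for text in responses:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts.most_common(top_n)

# Toy responses standing in for the 206 open answers (illustrative only).
responses = [
    "Fun makes online learning enjoyable",
    "Fun activities in tutorials help students enjoy learning",
    "Distance learning is fun when students enjoy the activity",
]
print(word_frequencies(responses, top_n=4))
```

Real word-cloud tools typically add lemmatization (so enjoy/enjoyed/enjoyable collapse, as in the reported counts) and a much larger stopword list.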
The automated thematic analysis map (Figure 3) in NVivo 12, represented in CmapTools, provided 89 codes grouped into seven themes: fun, learning, students, tutorials, material, online and activities, which enabled us to identify the connections between fun and learning presented in the following. The NVivo 12 sentiment analysis tool (Figure 4) indicated a significant amount of neutral and positive comments associated with narratives that included learning and fun. A small percentage of negative and mixed views emerged across all categories apart from the course module "material." The three largest clusters focused on fun, learning and activities. Four medium clusters were online, tutorials, fun activities, and students. Two small clusters were material and group. NVivo 12 sentiment analysis was used to obtain an overview of students' negative views (Figure 5) and positive opinions (Figure 6), which were highlighted in red and green by the authors to show the students' responses with a significant narrative. These visualizations were useful to identify two sets of themes and sub-themes (Box 3) related to the value of, and relationships between, learning and fun, as well as to review the automated sentiment analysis coding manually to check nuances and recode it based on the meaning of the narratives. A total of 206 students' testimonials were coded with these themes, and the frequency of codes was represented by percentages (Box 3). The first set of themes was used to code the value of fun for students: a total of 43% of students indicated positive views about fun in learning, 24% indicated neutral views, and 23% mixed views. Only 10% indicated negative views about fun in learning. The second set of themes was used to explore the value of and relationships between fun and learning. Approximately 18% of students indicated that fun is valuable, 12% that fun is important, 13% that fun is useful, 24% that fun is needed, 11% that fun is difficult, 12% that fun depends, and 10% that fun is unnecessary.
Relationships Between Fun and Learning Supported by Quantitative Analysis

The quantitative data analysis (Graph 1) revealed largely positive views about fun and learning. Most students agreed that fun (as enjoyment) had value in supporting learning. The majority of students agreed with the following statements: "To learn effectively, students must enjoy learning" (98%); "To learn effectively, students must be happy to learn" (91%); and "Learning should involve fun" (88.77%). However, a small group of students (16.66%) believed that fun activities can get in the way of student learning. The questionnaire data about the 21 statements using a Likert scale (1-5) were analyzed in SPSS 24. Cronbach's alpha of 0.717 confirmed that principal components analysis (PCA) was supported (Cohen et al., 2007). The instrument proved to be reliable for both PCAs (Tavakol and Dennick, 2011). The Kaiser-Meyer-Olkin score of 0.756 indicated sample adequacy, and Bartlett's sphericity test (chi-square = 2329.046 with 210 degrees of freedom, sig. 0.000 < 0.05) confirmed consistency. Table 2 illustrates the factor analysis with principal components, with Varimax rotation and Kaiser normalization, from which six groups emerged: (1) socio-constructivist perspective, (2) traditional perspective, (3) fun and learning perspective, (4) constructivist perspective, (5) banking perspective, and (6) emancipatory learning. Table 1, using the same method but an unrotated solution, indicated three relevant groups: (1) socio-constructivist learning with traditional teaching and fun; (2) banking model, transmissive learning and no fun; and (3) constructivist learning and disturbing fun. This approach was selected to examine students' views and beliefs in order to develop recommendations. Therefore, based on the testimonies of the students grouped with the unrotated PCA, twenty-one recommendations were listed and grouped according to three audiences: apprentices, teaching professionals and the online course team.
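A reliability statistic of the kind reported above (Cronbach's alpha) can be computed directly from the item-response matrix. The sketch below uses NumPy on synthetic Likert data (a shared latent trait plus noise), not the study's responses, so the resulting value only illustrates the calculation, not the reported 0.717.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic data: 200 respondents, 5 items driven by one latent trait plus noise,
# clipped to the 1-5 Likert range (illustrative only, not the study's data).
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=200)
data = np.clip(trait[:, None] + rng.integers(-1, 2, size=(200, 5)), 1, 5)
print(round(cronbach_alpha(data), 3))
```

Because the synthetic items share a single trait, alpha comes out high here; with unrelated items it would fall toward zero.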
Three indexes were generated using the variables from the PCA to obtain an average for each group, related to fun, no fun and bad fun:

• C1 Fun = (V19 + V09 + V03 + V18 + V02 + V05 + V04 + V01 + V08)/9;
• C2 No fun = (V17 + V07 + V16 + V06 − V21)/5;
• C3 Bad fun (hampers learning) = (V10 + V20 + V11)/3.

These indexes (scores from 3.5 to 5) allowed us to group participants' testimonies, select a variety of views and elaborate a representative list of recommendations to enhance students' enjoyment of online learning. NVivo 12 was used to carry out a thematic qualitative analysis with an interpretative approach to extract 21 recommendations supported by inductive mapping (Tables 3-5). A consensual review (Hill et al., 1997), through three systematic checks of the recommendations against the qualitative data, was developed with two experts and a student: individually, in pairs and in a group. Five types of feedback enabled reviewers to suggest improvements: 1. Reduce (too long, use a short sentence); 2. Specify (very broad, use specific words); 3. Connect (unrelated, focus more on the data); 4. Simplify (complicated, use familiar vocabulary); 5. Clarify (confusing, revise the meaning).

Examples of students' testimonials (Negative: 10%):
• Fun is ambiguous and subjective: "I think 'fun' is subjective. Some people find the online activities fun, others find reading about a subject that interests them is fun. Some may find engaging with other students at a tutorial to be fun, for others it may be the opposite of fun" (Student 59).
• Fun must be sensible for productive time: "If the fun remains relevant and helps to highlight a point or theory then I believe it would be well received. Students do not want fun activities if they do not add benefit to their current learning, it would be deemed a waste of study time" (Student 391).
• Fun must not be forced: "I find the forced fun activities, ones that start with 'now, just for fun let's try X' to be in many cases an annoying distraction" (Student 380).
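The three composite indices above are simple item averages, with V21 entering C2 negatively (presumably a reverse-keyed item). A minimal Python sketch, using made-up Likert answers rather than the real responses, shows the arithmetic and the 3.5-5 selection band:

```python
# Hypothetical Likert answers (1-5) for the 21 statements; values are illustrative only.
respondent = {f"V{i:02d}": 4 for i in range(1, 22)}
respondent["V21"] = 2  # V21 enters the "no fun" index with a negative sign

def composite(resp, plus, minus=()):
    """Average the listed items, subtracting minus-keyed items (as in the paper's C2)."""
    total = sum(resp[v] for v in plus) - sum(resp[v] for v in minus)
    return total / (len(plus) + len(minus))

c1_fun = composite(respondent, ["V19", "V09", "V03", "V18", "V02", "V05", "V04", "V01", "V08"])
c2_no_fun = composite(respondent, ["V17", "V07", "V16", "V06"], minus=["V21"])
c3_fun_bad = composite(respondent, ["V10", "V20", "V11"])

# Testimonies were selected when an index fell in the 3.5-5 band.
selected = [name for name, score in [("C1", c1_fun), ("C2", c2_no_fun), ("C3", c3_fun_bad)]
            if 3.5 <= score <= 5]
print(c1_fun, c2_no_fun, c3_fun_bad, selected)  # 4.0 2.8 4.0 ['C1', 'C3']
```

With all items at 4 and V21 at 2, C1 = 36/9 = 4.0, C2 = (16 − 2)/5 = 2.8 and C3 = 12/3 = 4.0, so only C1 and C3 clear the 3.5 threshold for this hypothetical respondent.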
The results of the mixed-methods analysis are presented as follows. In addition, the graphical comparison between the recommendations and the full set of qualitative data, both auto-coded (Figure 3) in NVivo (Graph 2), ensured diversity, with a variety of views, and consistency, with a proportional representation among the qualitative themes and quantitative components.

DISCUSSION AND FINAL REMARKS

The value of students' enjoyment of online learning has become fundamental in today's world. The World Bank (2020) and UNESCO (2020) emphasized that more than 160 countries are facing a crisis in education due to the COVID-19 pandemic, with losses in learning and in human capital; over the long term, the economic difficulties will increase inequalities. Various factors will affect educational systems, in particular low learning outcomes and high dropout rates in secondary school and higher education. Students' confidence in and satisfaction with online learning are highly relevant in a world in which distance education has rapidly become a necessary practice in response to the global pandemic. This mixed-methods research revealed significant opinions among online students about fun for enjoyable and meaningful learning. Fun is an important part of the lived experience; however, its meaning is underexplored in the literature. This paper provided a methodology to examine fun in online learning supported by students' epistemic beliefs, underpinned by RRI (Responsible Research and Innovation). A self-reflective instrument with valid and reliable measurement scales for epistemic constructs of online learning and fun helped participants to think about their views of how learning occurs and its relationship with fun. An open database with three sets of coding schemes was generated and shared with all participants during the COVID-19 pandemic.
In this study, light is shed on the elements, meaning and relationships of fun and learning, considering the students' "nuanced views" that integrate fun and learning in different ways. Our results provided evidence that a large majority of higher education students (88.77%) value fun because they believe it has positive social, cognitive and emotional effects on their distance online education. A small group (16.66%) highlighted that fun impairs learning. This study confirmed that students should experience enjoyable learning and that learning should involve joy. Freire (1996) highlights that the joy of the "serious act" of learning does not refer to the easy joy of being inactive by doing nothing. "Emancipatory fun" (Okada and Sheehy, 2020), underpinned by Freire's pedagogy of autonomy, is related to the hope and confidence that students can have fun by acting, reflecting and learning with enjoyment and consciousness. They can search, research and solve problems, identify and overcome obstacles, as well as transform and innovate their lives with knowledge, skills and resilience to shape a desirable future. A key contribution of this study is that different epistemological beliefs are associated with different conceptualizations of the relationship between fun and learning (Sheehy et al., 2019a; Okada and Sheehy, 2020). Principal component analysis revealed three groups of students who found (1) fun relevant in socio-constructivist learning, (2) no fun in traditional transmissive learning, and (3) disturbing fun in constructivist learning. A set of 21 recommendations, underpinned by systematic mixed methods and consensual review, is provided for the Higher Education community, including course teams, teaching staff and students, to enhance online learning experiences with optimal fun, emancipatory fun, collaborative fun and individual fun.
Creating opportunities for students to voice and reflect on their own views and values is fundamental to developing more effective online course designs aligned with their needs. Congruent with the positive effects of optimal experience found in some studies of online environments (e.g., Esteban-Millat et al., 2014; Sánchez-Franco et al., 2014), this study confirmed that fun creates an opportunity and expectation for students to experience positive feelings in learning, such as good mood, enthusiasm, interest, satisfaction and enjoyment, all of which are relevant for "optimal" learning. Researchers who see fun as having a close relationship with learning have proposed different types of fun. Lazzaro (2009) highlighted "easy fun" in activities such as games and role play that stimulate curiosity and exploration. Papert (2002) identified "hard fun" within goal-centered and challenging experiences, where the difficulty of the task is part of the fun. Tews et al. (2015:17) examined fun in two contexts: fun in learning activities developed by students and fun in teaching delivery by the staff. The former was characterized as "hands-on" exercises and activities that promoted social engagement between students. The latter concerned instructor-focused teaching that included the use of humor, creative examples, and storytelling. Their findings indicated that fun delivery, but not fun activities, was positively associated with students' motivation, interest and engagement.
Our study highlights the importance of investigating students' epistemic beliefs and their connections with the essence of their views. There is therefore the opportunity for novel research to examine factors and effects of fun within the student learning experience, including the influence of epistemically guided learning and teaching design. A series of studies with Indonesian teachers (Sheehy et al., 2019a) suggested that their beliefs about how learning occurs are influenced by their views about happiness and, by implication, fun in relation to learning. These teachers often commented on the relationship between happiness and learning, and many saw happiness as an essential feature of good classroom teaching. However, they described a relationship between happiness and learning that was different in nature to that found in Western educational research. There is a tendency for Western educators to see happiness as "a tool for facilitating effective education" (Fox et al., 2013, p. 1), and as something that is promoted alongside educational excellence. In contrast, many Indonesian teachers see learning not as separate from happiness but as part of it (Budiyanto et al., 2017; Budiyanto and Sheehy, 2019). Other research has implied that this belief in separation arises when people see teaching as a simple transfer of "untransformed knowledge" from expert to student, in a traditional model of learning (OECD, 2009), also known as the "banking model of education" (Freire, 2000). This separation may be reflected in the balancing act between happiness with fun and academic achievement described in the CEE report mentioned above. In contrast, those who believe that learning is a social constructivist process are more likely to see happiness with fun as important to the process of learning.
The situation remains that we have an incomplete understanding of fun in the domain of learning (Tews et al., 2017), and it remains to be clarified by empirical research (Iten and Petko, 2016), in particular under the lens of epistemological beliefs (Sheehy et al., 2019a) and practical experiences. Our study also complemented previous research about fun on a traditional university campus, whose students highlighted that fun in learning must integrate stimulating pedagogy; lecturer engagement; a safe learning space; shared experience; and a low-stress environment (Whitton and Langan, 2018). Some key effects of fun, for example pleasant communication and the creation of a relaxed state to reduce stress (Bisson and Luckner, 1996), are important factors to support learners during isolation. Fun as an inner joy of wellbeing and engagement is an important component to facilitate learning through the creation of new patterns that are interesting, surprising and meaningful (Schmidhuber, 2010), involving students in formal education during the uncertain post-pandemic time. As indicated by the research authors and collaborators, further studies based on the RRI approach are important to construct new questions and also to explore the issues indicated by preliminary studies (Okada and Sheehy, 2020). New issues concerning the effects of fun on online learning must also be examined, considering age, gender, socio-cultural aspects, accessibility, digital skills, and geographical differences. Developing further recommendations at broader institutional, national and international levels about effective and engaging online learning is also important to empower individuals and society to face, innovate and reconstruct a sustainable and enjoyable world. DATA AVAILABILITY STATEMENT The open database can be accessed, downloaded and reused: Okada and Sheehy (2020) OLAF PROJECT data set. Open Research Data Online. The Open University. https://doi.org/10.21954/ou.rd.12670949 (November 2020).
The Open Questionnaire can be accessed from the supplementary material Qualtrics Survey OLAF project.pdf. ETHICS STATEMENT The studies involving human participants were reviewed and approved by The Open University HREC (Human Research and Ethics Committee). The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS AO wrote the first draft of the abstract and prepared the manuscript. KS provided the instrument and feedback about the final version. AO was responsible for the survey implementation in Qualtrics, data generation, instrument tests, data analysis through mixed methods, and validation supported by collaborators with consensual review. Additionally, AO created the figures, graphs, and tables. Both authors contributed to manuscript revision, and read and approved the submitted version. FUNDING This study was funded by the Open University UK and is part of the international project OLAF - Online Learning and Fun. http://www.open.ac.uk/blogs/rumpus/index.php/projects/olaf/.
Evaluation of semen DNA integrity and related parameters with COVID-19 infection: a prospective cohort study Background In the context of the Coronavirus Disease 2019 (COVID-19) global pandemic, its impact on male reproductive function deserves attention. Methods Our study is a prospective cohort study that recruited participants infected or uninfected with COVID-19 between December 2022 and March 2023. All laboratory tests and questionnaire data were completed at the First Affiliated Hospital of Nanchang University. A total of 132 participants were enrolled, with 78 COVID-19-positive patients as the positive group and 54 COVID-19-negative participants as the negative group. Semen quality was assessed by the fifth World Health Organization criteria. The general characteristics of semen samples were assessed using computer-assisted sperm analysis (CASA). DNA damage and high DNA stainability were assessed by sperm chromatin structure analysis (SCSA) based on flow cytometry. Results The sperm concentration, progressive motility and motility in the COVID-19-negative group were significantly higher than in the positive group. In the subsequent DNA damage analysis, a remarkably lower sperm DNA fragmentation index (DFI) was observed in the COVID-19-negative group. In the positive group, unhealthy lifestyles had no significant effect on semen parameters, DNA fragmentation or nuclear compaction. Conclusions After excluding the interference of unhealthy lifestyles, COVID-19 infection can have a significant impact on semen quality, especially the DFI. This shows that COVID-19 can adversely affect male fertility, and this result provides advisory guidance for clinicians.
Introduction Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a single-stranded RNA virus belonging to the coronavirus family. Prior to the SARS-CoV-2 outbreak, coronaviruses had caused highly pathogenic and fatal diseases, including Middle East Respiratory Syndrome coronavirus (MERS-CoV) and Severe Acute Respiratory Syndrome coronavirus (SARS-CoV) [1]. Several studies have shown that the virus not only affects the respiratory system but also causes pathological changes in other organs, such as the kidney [2], heart [3], liver, brain [4], and testes. The presence of SARS-CoV-2 in saliva, respiratory fluids, blood, urine, and feces has been reported, and there is increasing evidence of SARS-CoV-2 infection and inflammation in semen or testes [5, 6]. The testes and epididymis of patients who died from COVID-19 exhibited pathological changes such as interstitial edema, congestion, germ cell destruction, thinning of germinal tubules, and increased spermatogenic epithelial detachment [5, 7], which may further reveal the negative impact of COVID-19 on male fertility. Some studies have suggested that persistent fever during viral infection may disrupt the blood-testis barrier (BTB) [8], while the finding of SARS-CoV-2 in the endothelial cells of the BTB offers a possible route of virus invasion [6]. In addition, Angiotensin-Converting Enzyme 2 (ACE2), which has a high affinity for SARS-CoV-2, is highly expressed in the male reproductive system, especially in spermatogonia [9], which may reveal the mechanism of entry into the male reproductive organs. Studies have shown that the immune response generated in testicular tissue adversely affects sperm production, which may impair male hormonal function and fertility [10, 11]. In previous studies on the impact of COVID-19 on the male reproductive system, positive results were reported. For instance, Holtmann et al.
found statistically significant reductions in sperm concentration, total sperm count, total number of progressive sperm, and total number of motile sperm in 20 moderately infected COVID-19 patients [12]. Similarly, Ma et al. conducted a study on 12 COVID-19 patients and found normal sperm parameters and low DFI in eight patients, while low sperm motility and high sperm DFI were observed in four patients [13]. However, it is important to note that the sample sizes of these previous studies were small; despite the use of exclusion and screening criteria, they may still have lacked control of some confounders, and the effect on sperm quality in infected patients needs to be confirmed in more studies with clinical samples. In our study, we aimed to analyze and compare the DFI and other sperm parameters between the COVID-19-positive and -negative groups while excluding some confounders; after excluding the effects of disease and medication, we investigated unhealthy lifestyle habits in the positive group.
Study design This study was conducted by the First Affiliated Hospital of Nanchang University and approved by its Ethics Committee. As this is a prospective cohort study, the required sample size was calculated in the G*Power software: with a two-tailed design and input parameters of effect size = 0.6, alpha level = 0.05 and statistical power = 0.8, the minimum required sample size was determined to be 47. Between December 2022 and March 2023, we recruited participants from the community who volunteered to participate in the study, provided semen for analysis and had records of COVID-19 testing. By a stratified random sampling method, a total of 132 male participants were included to observe the effect of COVID-19 on sperm. Participants were aged 30.58 ± 5.16 years, with a median age of 30 (IQR 27-34) years, and unhealthy lifestyle information was collected in the positive group. We included four unhealthy lifestyle habits (smoking, drinking alcohol, staying up late and sedentariness) as confounders. In the questionnaire, we defined a smoking habit as smoking at least 10 cigarettes daily, an alcohol consumption habit as a weekly alcohol intake over 150 g, staying up late as falling asleep after 11:00 pm [14], and sedentariness as sitting without movement for more than four hours due to work or habit. All participants had no history of cryptorchidism, chronic disease, infectious disease, varicocele surgery, or testicular surgery. Moreover, no patients took drugs affecting androgen levels, and their androgen levels were within the normal range. Among the 132 participants, 78 patients had had COVID-19 within the last three months (COVID-19 positive group), confirmed by two positive polymerase chain reaction tests (throat swab sampling). The other 54 participants had never been infected with COVID-19 (COVID-19 negative group). Among the 78 patients, the symptoms of COVID-19 infection were mild to moderate, and no patients were sent to the hospital for treatment. After they completed
the questionnaire, semen samples were obtained through masturbation at our andrology research center and subjected to further analysis. According to the fifth World Health Organization (WHO) criteria [15], a sexual abstinence duration of three to seven days was required. Semen analysis The Andrology Research Center of the First Affiliated Hospital of Nanchang University performed the semen quality assessment. All laboratory analyses were performed according to the fifth WHO criteria. The semen samples received by the andrology clinic were placed in an incubator at 37 °C for 30 min for liquefaction, and the liquefied samples were treated in a laminar sterilization cabinet. The general characteristics of the semen samples were assessed using computer-assisted sperm analysis (CASA): total sperm count (× 10^6 per ejaculate), concentration (× 10^6 ml^-1), progressive motility (%), non-progressive motility (%), and immotility (%). The semen assessment was performed according to the fifth WHO laboratory manual for the examination and processing of human semen. Sperm motility was assessed as three types: progressive (motile and active), non-progressive (motile but inactive) and immotile (sperm do not move at all). To check the DNA integrity and nuclear compaction of sperm, the DNA fragmentation index (DFI) and high DNA stainability (HDS) were measured by sperm chromatin structure analysis (SCSA) based on flow cytometry. DFI and HDS ≤ 15% are considered normal; values above 15% indicate damaged DNA or nuclear compaction and are considered abnormal.
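The G*Power calculation reported in the study design (effect size 0.6, two-tailed alpha 0.05, power 0.8) can be approximated with the standard normal-approximation formula for a two-sample comparison. This is only a sketch: the z-quantiles are hard-coded standard values, the approximation gives about 44 per group, and the figure of 47 in the paper presumably reflects G*Power's exact computation for the chosen test family:

```python
from math import ceil

# Normal-approximation sample size per group for a two-sided two-sample test:
# n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
z_alpha = 1.959964  # Phi^{-1}(0.975), for two-tailed alpha = 0.05
z_beta = 0.841621   # Phi^{-1}(0.80), for power = 0.8
d = 0.6             # effect size (Cohen's d), as in the paper

n_per_group = ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```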
Statistical analysis The data analysis of this study was mainly implemented in SPSS (IBM SPSS Statistics 25). To compare differences in DFI and other parameters between the COVID-19-positive and -negative groups, the parametric independent-samples t-test (t-table value) was used for measurements that conformed to a normal distribution, and the nonparametric Mann-Whitney U test (Z-table value) was used for measurements that did not. After grouping ages by the median, multivariable logistic regression analysis was used, calculating odds ratios and their 95% confidence intervals, to ascertain risk factors for DFI abnormality while controlling for confounders. Two-tailed p values ≤ 0.05 were considered statistically significant. Additionally, the results of this study were presented using frequency tables and descriptive statistics. Results Our research included 132 participants, and the effect of lifestyles on sperm quality was analyzed in the positive group. The mean ages of the positive and negative groups (30.85 ± 4.58 and 30.81 ± 5.71 years, respectively) were not statistically significantly different. In the analysis of sexual abstinence days, semen volumes, total sperm count and sperm concentration, the sperm concentration in the COVID-19-negative group was significantly higher than in the positive group. Sperm motility was significantly higher in the COVID-19-negative group than in the positive group, especially progressive motility. Furthermore, the sperm DFI was significantly higher in the positive group, while sperm HDS was not remarkably different between groups (Table 1). The number of men with unhealthy lifestyles in the positive group is shown in Table 2, and there were no significant differences in semen quality parameters between men with and without unhealthy lifestyles (Table 3).
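The test-selection logic and the odds-ratio computation described above can be sketched as follows. `compare_groups` mirrors the normality-based choice between the t-test and the Mann-Whitney U test; `odds_ratio_ci` computes a Wald-type OR with 95% CI from a 2×2 table. The counts at the end are illustrative only, not the study's actual cross-tabulation, and the real analysis used a multivariable logistic model rather than this univariable table:

```python
from math import exp, log, sqrt
from scipy import stats

def compare_groups(x, y, alpha=0.05):
    """Independent t-test when both samples pass Shapiro-Wilk normality,
    Mann-Whitney U otherwise, as in the paper's analysis plan."""
    normal = stats.shapiro(x)[1] > alpha and stats.shapiro(y)[1] > alpha
    if normal:
        return "t-test", stats.ttest_ind(x, y)[1]
    return "mann-whitney", stats.mannwhitneyu(x, y, alternative="two-sided")[1]

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and 95% CI from a 2x2 table
    (rows: exposed/unexposed; columns: event/no event)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Illustrative counts only (not the study's data):
# 50/28 abnormal/normal DFI among positives, 10/44 among negatives.
or_, lo, hi = odds_ratio_ci(50, 28, 10, 44)
```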
Table 1 Comparison of semen parameters between the COVID-19-positive and -negative groups. Bold values indicate statistically significant differences in the parameters compared between the two groups (p < 0.05). The sperm concentration (× 10^6 ml^-1), motility (%), and progressive motility (%) of the positive group were significantly lower than those of the negative group. In terms of the DNA fragmentation index (DFI, %), the positive group was remarkably higher than the negative group. COVID-19 positivity (OR = 6.760, 95% CI = 3.009-15.187) was a major risk factor for abnormal DFI (Table 4). Discussion Our findings show that the DFI was remarkably higher in the COVID-19-positive group (P < 0.001), which indicates a damaging effect on sperm DNA. Haghpanah et al. [16] stated that sperm DFI may serve as a promising and important factor for male infertility due to COVID-19 infection. Likewise, according to the latest WHO standards [17], sperm DFI can be used as an important complement in assessing male fertility. Furthermore, in agreement with our findings, Caliskan et al. [18] observed a negative correlation between sperm DFI and sperm concentration and percentage motility in the analysis of sperm samples from 743 infertile men. In the study by Dipankar et al. [19], all 30 COVID-19-positive participants included in the survey had a DFI of more than 30%. These results further support that COVID-19 affects male fertility and highlight the important role of DNA damage in sperm. In addition, we investigated the impact of unhealthy lifestyles on sperm quality in the COVID-19-positive group, but no statistically significant differences were observed. The conclusions of Donders et al.
[20] are consistent with ours: they found that smoking and BMI were not associated with any sperm quality parameter in a multiple regression analysis of infected patients. Although some studies on the effect of lifestyle changes have shown that unhealthy lifestyles impact semen quality [21], both those results and our findings indicated that sperm quality did not differ significantly within the positive group, whereas the adverse effect of COVID-19 on sperm quality was further validated, and COVID-19 positivity was the only risk factor for abnormal DFI. Multiple studies have shown that SARS-CoV-2 can infect the testes. Stanley et al. [22] pointed out that besides ACE2, which is expressed in the testis, another molecule, transmembrane serine protease 2 (TMPRSS2), is also expressed in testicular tissue and spermatozoa. It induces conformational changes by cleaving the viral S protein, thereby fusing the virus to the host cell membrane. In the study of Koch et al. [23], SARS-CoV-2 can invade target cells through a rapid pathway in TMPRSS2-positive cells and a slow pathway in TMPRSS2-negative cells. However, ACE2 lacked co-expression with TMPRSS2 in testicular tissues, and the association between them deserves further investigation. In our study, sperm concentration and motility were significantly lower in the COVID-19-positive group. It has been reported that COVID-19-induced fever may impair spermatogenesis and lead to decreased sperm quality [24], but this conclusion still faces challenges: in the prospective study of 120 individuals by Donders et al.
[20], no significant effect of the severity of infection or fever on sperm was observed. In addition, that analysis revealed a short-term decrease in sperm concentration (P < 0.003) and progressive motility (P < 0.02) in infected patients. According to available clinical data, most infected patients develop varying degrees of orchitis and genital tract inflammation; additionally, the overproduction of cytokines that regulate the immune response (IL-6, etc.) induced by a viral infection can lead to leukocyte infiltration in the interstitium of the testis, resulting in an autoimmune response and the formation of anti-sperm antibodies (ASA) [25]. Autoimmune responses appear to play an important role in the negative effects of COVID-19 on fertility. Ertaş et al. [26] concluded that COVID-19 can significantly reduce sperm concentration and total motility. Analogous results were observed in our study, with statistically significant differences in sperm concentration (P = 0.042), motility (P = 0.037), and progressive motility (P = 0.027) in infected patients compared to negatives. Studies have shown that sperm quality may recover over time in COVID-19-positive patients, but it may take more than three months. As shown in a prospective longitudinal cohort study by Dipankar et al. [19], comparing the second sampling after 74 days with the first sampling, the number of patients with sperm concentration < 150,000/mL was reduced from fourteen to five, and in the 30 COVID-19-positive patients sperm concentration (P < 0.001), viability (P = 0.014), total motility (P = 0.002) and DFI (P < 0.001) were significantly improved, although quality remained poor. In another prospective study, Enikeev et al.
[27] analyzed semen samples of 44 COVID-19-positive patients during hospitalization and three months after discharge and compared them with 44 normal controls. It was observed that positive patients returned to normal levels of all parameters three months after discharge, even among moderate or severe COVID-19 patients. These findings suggest that COVID-19 may cause a temporary decrease in sperm quality, with gradual recovery or even full rehabilitation over time, but more clinical data are needed for confirmation. Inflammatory responses and oxidative stress (OS) have been proposed as possible mechanisms for the negative effects of COVID-19 on male fertility [28, 29]. OS and inflammation are usually correlated: SARS-CoV-2 can induce inflammatory responses and overproduction of reactive oxygen species (ROS) through immune responses and ultimately lead to OS [30]. Direct evidence is provided by the study of Hajizadeh et al. [31]: in their prospective longitudinal cohort study, the levels of inflammatory markers (IL-1β, IL-6, IL-8, IL-10, TGF-β, IFN-α, and IFN-γ) in the semen of the COVID-19-positive group were significantly higher than those of the control group from the first sampling up to 60 days of follow-up (p < 0.05). In a study on the treatment of varicocele-induced decline in semen quality with medication, Melissa officinalis was found to effectively improve sperm count, motility, and chromatin structure [32], indicating the protective effect of antioxidants on the male reproductive system [33]. Similarly, in the treatment of a unilateral testicular ischemia-reperfusion injury model, citral demonstrated a powerful protective effect [34], displaying strong anti-inflammatory and antioxidant effects. This also suggests that anti-inflammatory and antioxidant therapies may have a positive role in the treatment of COVID-19 patients.
The strength of our study is that we compared semen from negative and positive participants, identified the differences, and examined whether unhealthy lifestyles had an effect in positive patients. However, there are some unavoidable limitations. Firstly, due to the specificity of the specimens, we lacked pre-COVID-19 sperm samples to perform pre- and post-infection comparisons. Secondly, our study was based on clinical samples; potential selection bias and measurement error could have adversely affected the conclusions, and, due to the limited information on the variables, it was not yet sufficient to conduct sensitivity analyses. Thirdly, assessing the longer-term effects of COVID-19 on men requires more observations over the spermatogenic cycle. Therefore, to learn more about the effects of COVID-19 in men, long-term patient follow-up is needed to study the underlying mechanisms and to find ways to mitigate the impact during and after COVID-19 infection. Conclusion In our study, we found that semen quality can be significantly affected during COVID-19 infection: semen concentration, progressive motility and motility were significantly decreased, and the sperm DFI in particular was significantly worse, which has a great impact on male fertility. Therefore, reproductive advice can be offered to men after a COVID-19 infection to prevent adverse fertility outcomes; for instance, it is not recommended that patients with COVID-19 infection pursue a pregnancy plan within three months, i.e., one spermatogenic cycle.
Table 2 Distribution of the four lifestyles in the COVID-19 positive group. Table 3 Comparison of sperm parameters within each lifestyle. There was no statistically significant effect of the four lifestyles (smoking, drinking, sleeping late, and sedentariness) on sperm parameters (p > 0.05). Table 4 Multivariable logistic regression analysis of risk factors for DFI abnormality.
Using machine learning in prediction of ICU admission, mortality, and length of stay in the early stage of admission of COVID-19 patients The recent COVID-19 pandemic has affected health systems across the world. In particular, Intensive Care Units (ICUs) have played a pivotal role in the treatment of critically-ill patients. At the same time, however, the increasing number of admissions due to the vast prevalence of the virus has caused several problems for ICU wards, such as overburdening of staff and shortages of medical resources. These issues might have affected the quality of the healthcare services provided, directly impacting a patient's survival. The objective of this research is to leverage Machine Learning (ML) on hospital data in order to support hospital managers and practitioners in the treatment of COVID-19 patients. This is accomplished by providing more detailed inference about a patient's likelihood of ICU admission, mortality and, in case of hospitalization, the length of stay (LOS). In this pursuit, the outcome variables are predicted in three separate models by five different ML algorithms: eXtreme Gradient Boosting (XGB), K-Nearest Neighbor (KNN), Random Forest (RF), bagged-CART (b-CART), and LogitBoost (LB). With the exception of KNN, the studied models show good predictive capabilities when evaluated on relevant accuracy scores, such as the area under the curve. By implementing an ensemble stacking approach (either a Neural Net or a General Linear Model) on top of the aforementioned ML algorithms, the performance is further boosted. Ultimately, for the prediction of admission to the ICU, the ensemble stacking via a Neural Net achieved the best result, with an accuracy of over 95%. For mortality at the ICU, the vanilla XGB performed slightly better (1% difference from the meta-model). To predict long lengths of stay, both ensemble stacking approaches yield comparable results.
Besides its direct implications for managing COVID-19 patients, the approach presented serves as an example of how data can be employed in future pandemics or crises. Introduction The Intensive Care Unit, known as the ICU, is a critical department in a hospital with special equipment and trained medical personnel for critically sick or injured individuals (Merriam-Webster, 2022). This unit is responsible for providing emergency care for those who require immediate treatment for life-threatening conditions. In times of crises like natural hazards and pandemics, which create an influx of patients, providing immediate health services for cases in critical condition becomes of paramount importance (Bohmer et al., 2020). During the recent COVID-19 pandemic, health systems all over the world have been heavily burdened by the sudden influx of patients. Countries have faced numerous challenges while attempting to keep the health system responsive and capable of providing essential health services (WHO Headquarters (HQ), 2021). The increasing number of hospital admissions due to the vast prevalence of this virus has caused several problems for ICU wards, such as overburdening of staff (Mehta et al., 2021) and shortages of medical resources; see for example Cohen and Y. van der M. Rodgers (2020). A recent cohort study in the USA revealed that strains on critical care capacity were associated with increased ICU mortality for COVID-19 patients (Bravata et al., 2021). Therefore, there is impetus to reconsider and improve the management plan for the ICU. A potentially beneficial resource to assist healthcare providers is the vast amount of data captured. However, these huge volumes of medical data, such as patient characteristics, medication administration records, and genomic sequences, make it bewildering and perhaps impossible for an individual to make decisions.
Fortunately, prediction models powered by Machine Learning (ML) are capable of learning from such data and providing tangible insights, and it is no surprise that such models are reported to be becoming a necessity for modern health systems (Beam and Kohane, 2018). ML can be seen as a subset of Artificial Intelligence (AI), which has the capability of emulating human intelligence (El Naqa and Murphy, 2015). ML algorithms are classified into six types (Oladipupo, 2010): supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, transduction, and learning to learn. Supervised learning, the most common type of ML, attempts to predict the outcome for unseen data by learning the mapping between input variables and output variables by means of training data (Cunningham et al., 2008). A common task in supervised learning is classification, in which the algorithm learns to classify an unknown object into one of a set of pre-determined classes (Carrizosa and Romero Morales, 2013); classification is commonly employed in healthcare (Tomar and Agarwal, 2013). When the data is classified into two classes, the problem is called binary classification; if the task is to classify the dataset into more than two classes, the problem is referred to as multi-class classification. In light of the current COVID-19 pandemic, the integration of ML can help cope with various challenges related to the management of healthcare resources, the consideration of treatment plans, informing policies, and research challenges (Schaar et al., 2021). As mentioned, managing scarce resources is a key challenge in times of a pandemic, for example the distribution and production of face masks (Tirkolaee et al., 2022), but also the efficient use of the limited capacity of the ICU. Our research question is whether ML prediction models are capable of providing insights about the ICU.
To do so we focus on three elements, which directly affect the ICU capacity (the required number of beds): the admission to the ICU, the likelihood of an excessive length of stay (LOS), and the mortality. In line with this research question, this study presents a comprehensive ML-based framework to predict three target variables: ICU admission, ICU mortality, and ICU LOS of hospitalized COVID-19 patients. For the predictions, well-established ML algorithms are used. Using the advanced idea of ensemble stacking, which, by means of a meta-model, combines the separate ML approaches in a single model, one can benefit from the different predictive capabilities of each ML algorithm. Also, the research uncovers the important features for these three target variables, which from a medical point of view is of significant value. Finally, we point out that, like for example the contemporary works on disaster management (Goli and Malmir, 2020; Alinaghian and Goli, 2017), this paper serves as an example of how one can leverage ML in a crisis situation. The rest of the paper is organized as follows. Section 2 presents research works in the literature that relate to our study. In Sect. 3 the materials, including the dataset, the preprocessing actions on the data, and the ML algorithms, are described; this section also provides the relevant features on which the ML algorithms will be trained. The ML prediction results are presented in Sect. 4. Finally, we discuss the results and conclude in Sect. 5 and Sect. 6 respectively. Related works There is a wide field of literature on ML in healthcare. We summarize relevant literature that focuses on the ICU. For example, there are various works on applying ML models to ICU admission and mortality for hospitalized COVID-19 patients, see for example Altini et al. (2021), Campbell et al. (2021), Hou et al. (2021), Podder and Mondal (2020), Ryan et al. (2020) and Vaid et al. (2020).
In more detail, a study in Spain (Aznar-Gimeno et al., 2021) used the cohort information of 3623 patients to provide a decision-making tool that assists clinicians in estimating the risk of ICU admission or mortality. Chieregato et al. (2021) developed a hybrid machine learning/deep learning model to predict the need for ICU care among COVID-19 patients; they used the data of 558 patients admitted to a hospital in Italy. In another study (Mahdavi et al., 2021), three ML models based on invasive and noninvasive biomarkers were presented to predict the mortality prognosis, which can help in lowering mortality rates. To predict the need for intensive care, Kim et al. (2020) used a nationwide cohort in South Korea including data from 100 hospitals. Applying an ML approach, Izquierdo et al. (2020) found that age, fever, and tachypnoea formed the most parsimonious predictor of ICU admission. The results of another study revealed that ensemble-based models perform better in predicting both ICU admission and mortality of COVID-19 cases (Subudhi et al., 2021). Hernández-Pereira et al. (2021) predicted the need of COVID-19 patients for regular hospital admission or intensive care unit admission using several ML algorithms. To predict ICU admission in the next 5 days, Famiglini et al. (2021) developed three ML models based on complete blood count data. Various studies focused on ML algorithms only for predicting fatality among COVID-19 individuals (Churpek et al., 2021; Kuno et al., 2022; Rozenbaum et al., 2021; Wanyan et al., 2021). In a study in Iran on predicting COVID-19 mortality, seven ML algorithms were used; random forest showed a better performance than the others (Moulaei et al., 2022). A multi-center cohort study was conducted to predict the ICU mortality of COVID-19 patients; the three ML models in this research presented acceptable and similar predictive performances (Lorenzoni et al., 2021). Parchure et al.
(2020) used an ML-based approach to predict near-term COVID-19 in-hospital mortality. To predict the ICU outcome, Cunningham et al. (2008) applied an Explainable Boosting Machine approach. Elhazmi et al. (2022) applied conventional logistic regression and a decision tree to predict 28-day ICU mortality for a cohort from 14 hospitals in Saudi Arabia. To develop an in-hospital mortality score at admission for COVID-19 patients, Laino et al. (2022) used several supervised ML algorithms. For predicting the probability of death among COVID-19 inpatients, Zarei et al. (2022) obtained the highest performance using the C5.0 decision tree algorithm. For LOS, Ebinger et al. (2021) created three ML models on hospital days 1, 2 and 3 to classify COVID-19 patients' LOS into two classes. To predict ICU admission, mortality, and survivors' LOS, Dan et al. (2020) developed three ML prediction models, all based on the support vector machine (SVM) algorithm, and obtained acceptable performance. Based on the literature, only one paper considered predicting the ICU admission, mortality, and LOS of COVID-19 patients simultaneously, but it only made use of one specific ML algorithm. The contribution of this study is to extend the studies above by developing a comprehensive framework to predict the ICU admission, mortality, and LOS of COVID-19 patients by applying and comparing several classical ML algorithms. As we find that the correlations between the models' predictions are low, we further leverage their individual predictive capabilities by integrating them in a stacking ensemble approach. Lastly, as also reported in the literature, the outlined framework itself demonstrates how ML can be employed to swiftly retrieve valuable information for healthcare managers and practitioners; the insights specific to this case are summarized. Materials and methods The proposed framework of this study, involving three main steps, is illustrated in Fig. 1.
The first phase covers the database and the series of actions carried out to prepare the data for modeling. These include the integration of datasets, data cleaning, handling of missing values, balancing of the dataset, and feature selection, which together form the basis for the ML models to be applied. The output of this step is used as input for the second and third levels. The second module applies five different ML algorithms to make predictions and evaluates them based on statistical indices and ROC curves. In the third step, an ensemble approach, using one linear and one non-linear meta-learner algorithm, predicts the outcomes. Lastly, the final results of the second and third phases are compared and the best approach is selected. All of these steps are applied to each of the three ML models (ICU admission, mortality, and LOS) correspondingly. These processes are described in detail in the following sections. All analyses and modeling are done in R, which is an open-source programming language. Study population For this study, the data were collected from the files and electronic records of two local Iranian hospitals between September 7, 2020 and March 7, 2021, comprising six months. All included patients were hospitalized and confirmed positive by a real-time reverse transcriptase polymerase chain reaction (RT-PCR) test for COVID-19. Ethics approval was obtained from the Bushehr University of Medical Sciences, Iran. The collected information consists of demographic data, chronic comorbidities, symptoms, vital signs, and laboratory results at admission. The initial database contained more than 200 variables, from which only the primary information collected at admission and lab tests were extracted, in accordance with the purpose of this study. This led to a final dataset with a total of 41 input variables and the three outcome variables.
The outcome variables included ICU admission versus non-ICU admission, ICU patients' mortality versus discharge, and, to make ICU LOS a classification problem, a stay of 7 days or fewer versus more than 7 days. In the latter case the threshold of 7 days is chosen to reflect a below-average or an above-average LOS, as the mean LOS was around 7 days. The flowchart illustrating the case selection is presented in Fig. 2 (the selection of cases of ICU-admitted COVID-19 patients). Of the total of 963 patients who tested positive for COVID-19, 956 were kept in the dataset, whereas 7 were excluded due to incomplete medical records. Of the remaining 956 patients, 844 were admitted to the HDU (High Dependency Unit) and 112 received ICU care. The ICU group included direct admissions and a set of cases transferred from the HDU. Among the ICU patients, about 31% died and 77 patients were ultimately discharged from this unit. To determine the ICU LOS, the deceased cases were excluded, such that 54 persons were hospitalized for 7 or fewer days, whereas 23 patients were hospitalized for a longer time period. Missing values and data imputation One of the common issues in medical data is the presence of missing values in independent variables (features), which, if the affected records are omitted, may cause a great reduction in sample size (Royston, 2004). In our dataset, as illustrated in Fig. 3, 6.8% of all fields were missing (2669 fields relating to 40 different features), while 36,527 fields were fully completed. LDH, total bilirubin, and ESR had the greatest numbers of empty fields, while breathing problem, cough, and systolic pressure were the variables with the fewest missing values, respectively. Furthermore, ARI (acute respiratory infection) and NCD (non-communicable diseases) showed no blank fields at all. Fortunately, our three target variables, ICU admission, ICU deaths, and ICU LOS, did not contain any missing values.
To decide how to deal with the variables with a high rate of empty fields, medical specialists were consulted. These talks identified some variables with a high rate of empty fields as important (such as ESR and LDH), because they likely have a high impact on the target variables. Therefore, it was decided to keep all of them and, instead of deleting them, to apply a suitable approach for data imputation. To impute the data, the Multivariate Imputation by Chained Equations (MICE) algorithm, also known as "fully conditional specification", in R was applied. This R package imputes incomplete multivariate data by chained equations (Buuren and Groothuis-Oudshoorn, 2011). Single imputation methods, like the use of the mean or median, and maximum likelihood methods have limitations: the former ignores uncertainty, which may lead to overly confident results, and the latter is used for specific kinds of models, such as longitudinal or structural equation models that run under particular software (Azur et al., 2011). (Abbreviations used in the figures: AST (aspartate aminotransferase), ALT (alanine transaminase), L.disease (chronic lung disease), Nd.disease (chronic neurological disorder), K.disease (chronic kidney disease), S.cough (sputum cough), A.pain (abdominal pain), H.disease (heart disease), INR (international normalized ratio), High.bp (high blood pressure), PT (prothrombin time), O2.s (O2 saturation), WBC (white blood cells) count, R.rate (respiratory rate), Diastolic (diastolic pressure), Temp (temperature), H.rate (heart rate), Systolic (systolic pressure), ARI (acute respiratory infection), NCD (non-communicable diseases).) Compared to single imputation, the MICE algorithm has the benefit of considering uncertainty and multiple possible values when imputing missing data, as reported by Zhang (2016). To perform the MICE algorithm in this study, the number of multiple imputations was set to 5, which means that for each missing value in the initial dataset there are 5 candidate values for replacement.
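The chained-equations idea behind MICE can be illustrated with a toy version. The Python sketch below (all names are ours; it mimics the MICE loop with a simple linear regression in place of the random forest models used in the paper) iteratively re-predicts each incomplete column of a two-column dataset from the other column:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    return my - b * mx, b


def chained_impute(data, n_iter=10):
    """Toy chained-equations imputation for a two-column dataset.
    Missing cells (None) start at the column mean and are then
    repeatedly re-predicted from the other column."""
    # initial fill: column means over the observed values
    means = []
    for j in range(2):
        obs = [row[j] for row in data if row[j] is not None]
        means.append(sum(obs) / len(obs))
    filled = [[v if v is not None else means[j] for j, v in enumerate(row)]
              for row in data]
    for _ in range(n_iter):
        for j in range(2):            # column being (re-)imputed
            k = 1 - j                 # the predictor column
            pairs = [(f[k], f[j]) for f, o in zip(filled, data)
                     if o[j] is not None]
            a, b = fit_line([p for p, _ in pairs], [q for _, q in pairs])
            for f, o in zip(filled, data):
                if o[j] is None:
                    f[j] = a + b * f[k]
    return filled
```

In the real algorithm this cycle runs over all 40 features with a flexible per-column model, and is repeated to produce several candidate imputations per missing value.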
The selected imputation method for all variables was random forest, with a maximum of 40 iterations. Kernel Density Estimation (KDE) was used to estimate the probability density functions of both the initial and the imputed data. KDE is a widely used data smoothing technique which plots the data and creates distribution curves (Gramacki, 2018). Figure 4 illustrates the density plots of observed versus imputed data for 23 of the 40 features (Fig. 4: Kernel Density Estimation of initial and imputed data for some of the variables; the red curves denote the imputed data distribution and the blue curves the distribution of the initial data. Abbreviations: Temp (temperature), H.rate (heart rate), R.rate (respiratory rate), Systolic (systolic pressure), Diastolic (diastolic pressure), O2.s (O2 saturation), Fever.H (history of fever), PT (prothrombin time), INR (international normalized ratio), ALT (alanine transaminase), LDH (lactate dehydrogenase), ESR (erythrocyte sedimentation rate)). Generally, the imputed values demonstrate acceptable distributions compared to the observed values. Balancing the dataset Another issue in datasets with the goal of classification is class imbalance (in the target variable), which hinders an ML algorithm from distinguishing the relatively uncommon but important class (Kotsiantis et al., 2005). Imbalances can manifest themselves between classes, when one class has more examples than the other, or among some subsets of one class (Gu et al., 2008). In such situations, it is reported that classification ML models demonstrate poor performance and unrealistic predictions (Poolsawad et al., 2014). This problem originates from the assumptions of learning algorithms that take accuracy (overall error) minimization as the goal, to which the minority class contributes very little (Visa and Ralescu, 2005). Among real-world datasets from various domains, medical data usually include a low number of positive or special cases against relatively many negatives.
In this research, the ICU admission variable, which is a binary variable, was clearly imbalanced. This imbalance badly affected the prediction results and led the trained model to apparently accurate results that ignored the minority class (patients admitted to the ICU). To deal with this issue, ROSE (Random Over-Sampling Examples), an R package for binary classification problems, was used (Lunardon et al., 2014). ROSE is a synthetic data generation method which produces artificial data based on a bootstrap approach. The original percentages for the two classes of ICU admission were 11.8% (admitted) and 88.1% (non-admitted). After data balancing, the numbers changed to 52.2% and 47.8% respectively, which in turn yields more accurate predictions. Feature selection To select the relevant attributes for each of the three ML models, the Boruta algorithm, available as an R package, was applied. This method is an extension of the random forest algorithm, providing criteria for the selection of important features. For all models, the maximum number of random forest runs was set to 500, and doTrace, which refers to the verbosity level, was set to 2. The graph of variable importance for ICU admission is shown in Fig. 5. In this figure, the X axis represents the features and the Y axis indicates the importance of these attributes in predicting the target variable. The three blue box-plots correspond to the minimum, average, and maximum of the shadow variables. The irrelevant features are given the color red, whereas the green box-plots represent the features that are qualified as important. A yellow box-plot indicates that a variable is tentative, as the algorithm cannot advise to include (confirm) or exclude the feature. For ICU admission, the qualified features can be read from Fig. 5. The top five relevant attributes to predict ICU mortality based on the Boruta result are O2 saturation, LDH, AST, WBC, and urea (Fig. 6). Of all variables, 28 were considered unimportant and 4 tentative.
The important variables for predicting ICU LOS are hematocrit and ESR, which are depicted in Fig. 7. Total bilirubin and NCD were determined tentative, and the rest of the features were qualified as unimportant. Machine learning models Three separate ML models were developed to predict three probable outcomes: (1) the need for ICU admission, (2) ICU mortality versus survival, and (3) an ICU LOS of more, or fewer, than 7 days. According to the feature selection results of the previous section, 27, 15, and 4 features were selected as input variables for our models respectively. To predict these three variables, we employ five established, well-performing ML algorithms: RF, LB, b-CART, KNN, and XGB; each of them is briefly explained below. Furthermore, we provide the settings (hyperparameters) chosen, as obtained by means of cross-validation on the train set; for more details see Sect. 4. Random forest The RF is a supervised, decision-tree-based ML algorithm, which has the capability of coping with the overfitting problem (Breiman, 2001). This ensemble method constructs a multitude of decision trees on different samples and utilizes them for classifying an element based on the majority vote (Oshiro et al., 2012). Alongside making predictions, RF is capable of determining variable importance according to the variables' impact on predicting the target variable (Boulesteix et al., 2012). To specify the best branch to split on, and thus the importance of a variable, the RF applies a splitting criterion, for example the Gini impurity. This index computes the overall probability of misclassifying at a node (Qi, 2012), and it is calculated as Gini = 1 − ∑_{i=1}^{c} p_i², where p_i denotes the frequency of class i at the node and c represents the number of classes in the target variable. It ranges from 0 to 1. So, while making subsequent branching decisions, one should choose the split that lowers the weighted sum of the resulting impurity indices the most.
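The Gini-based splitting rule described above can be sketched in a few lines (an illustrative Python sketch, not the paper's R code; the function names are ours):

```python
def gini_impurity(labels):
    """Gini impurity 1 - sum_i p_i^2 of the class labels at a node."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())


def weighted_split_gini(left_labels, right_labels):
    """Weighted Gini impurity of a candidate split; the split that
    lowers this value the most is preferred."""
    n = len(left_labels) + len(right_labels)
    return (len(left_labels) / n * gini_impurity(left_labels)
            + len(right_labels) / n * gini_impurity(right_labels))
```

A pure node (a single class) has impurity 0, while a balanced two-class node has impurity 0.5, so a split that separates the classes perfectly drives the weighted impurity to zero.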
By continuing the process until a stopping criterion is met, e.g., a minimal number of data points at a node (terminal node size), a tree is formed. Such trees are the basis for the random forest (RF) algorithm, as it constitutes a randomly generated collection of trees. Regarding the parameters of the RF algorithm, the number of trees to grow (ntree) is 200 in this study. The final values for mtry, which refers to the number of variables randomly sampled as candidates at each split, are 2, 2, and 6 for ICU admission, mortality, and LOS. The minimum terminal node size for all three prediction models is one, and there is no limitation on the maximum number of terminal nodes. LogitBoost Boosting algorithms combine several weak learners to construct a final powerful learner. One of the popular boosting algorithms is LogitBoost (LB), proposed by Friedman et al. (2000). This method can be seen as the successor of the AdaBoost algorithm, which was sensitive to outliers and noise. Applying a binomial log-likelihood instead of an exponential loss function is brought up as a solution to this vulnerability, see Kamarudin et al. (2017) for a discussion. LB consists of three main elements: (1) a multi-class logistic loss, (2) additive tree models, and (3) an optimization algorithm which minimizes the logistic loss (Sun et al., 2014). So, this algorithm minimizes the logistic loss ∑_{i=1}^{n} ln(1 + exp(−2 y_i F(x_i))) over the training dataset of size n, where F denotes the final classifier based on the features contained in the vector x_i and y_i ∈ {−1, 1} is the label (Karlos et al., 2015). Considering the model parameters, the final number of iterations for which boosting should be run was set to 21, 31, and 11 for ICU admission, mortality, and LOS respectively. Bagged CART Bootstrap aggregating, often referred to as bagging, is a common ensemble method that reduces overfitting and thereby improves the accuracy on the test set (Breiman, 1996).
Bagging can be applied to high-variance algorithms such as classification and regression trees (CART). The first step in the bagging algorithm is creating bootstrapped samples from the training dataset. Then, a classifier or regressor is trained on each subset, and finally the results are aggregated, by taking the average in the case of regression or the majority vote in the case of classification (Polikar, 2006). In this study, bagged CART was used for the three classification models. The CART algorithm splits a node based on the Gini index criterion (Rutkowski et al., 2014). The ensemble size (nbagg) for all prediction models was set to 25. KNN The k-nearest neighbor (KNN) algorithm, first developed by Fix and Hodges (1989), is a nonparametric method that can be used for both classification and regression problems. Since this algorithm considers the whole dataset for each prediction and does not require a specific training stage, it is called a lazy learner. The KNN classifies a new data point in the test set based on the k points that are relatively close, the so-called neighbours. Therefore one needs to introduce a concept of distance, which is readily incorporated in KNN algorithms by relying on the Minkowski distance. The distance between two points, represented by the vectors x and x′, is calculated as d_p(x, x′) = (∑_i |x_i − x′_i|^p)^{1/p}, where x_i denotes the i-th element of the vector x. So, p (a positive value) permits the calculation of different distance measures, such as the standard Euclidean (p = 2) and Manhattan (p = 1) distances. For more discussion on the choice of distance measure, see Abu Alfeilat et al. (2019). We rely for this research on the standard Euclidean distance. The final settings for k are 5, 5, and 7 (odd numbers, to break ties) for predicting ICU admission, mortality, and LOS, respectively.
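The Minkowski distance and the KNN majority vote can be illustrated as follows (a Python sketch with hypothetical names, not the paper's R implementation):

```python
def minkowski(x, y, p=2):
    """Minkowski distance between two feature vectors;
    p = 2 gives the Euclidean distance, p = 1 the Manhattan distance."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)


def knn_predict(train, query, k=5, p=2):
    """Classify `query` by majority vote among the k nearest
    training points; `train` is a list of (features, label) pairs."""
    nearest = sorted(train, key=lambda t: minkowski(t[0], query, p))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

With an odd k, as chosen in this study, a binary vote cannot end in a tie.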
Extreme gradient boosting Extreme gradient boosting (XGB) is a celebrated ensemble ML algorithm based on the gradient-boosted decision trees framework, which works more efficiently in terms of speed and performance compared to most ML approaches (Chen and Guestrin, 2016). In this algorithm, the objective function (measuring the model performance) consists of two parts: a loss function L and a regularization term Ω. When training, L evaluates the loss between the prediction ŷ_i = ∑_{k=1}^{K} f_k(x_i), with f_k additive decision trees (which are found in a successive and efficient manner by considering residuals), and the true label y_i, while Ω avoids overfitting by regularizing the model F = ∑_{k=1}^{K} f_k. The hyperparameters of the XGB algorithm for the three prediction models are exhibited in Table 1. Maximum depth refers to the longest path between the root node and a leaf; higher values of this parameter make the model more complex and may lead to overfitting. Eta, which lies between 0 and 1, controls the learning rate. Gamma determines the minimum loss reduction for making a split. Column sample by tree specifies the subsample ratio of columns when a new tree is constructed. Minimum child weight is the minimum sum of sample weights required in a child. Subsample denotes the ratio of the training instances (Chen and Guestrin, 2016). To optimize the hyperparameters for all the algorithms (RF, LB, b-CART, XGB, and KNN) on the train set, the grid search method of the caret package (Kuhn, 2008) in the R programming language was used. Results In this section we present our results. We first discuss the findings in our data, after which we apply the ML algorithms introduced in the previous section. Then we compare the models and conclude that there is potential to combine them by means of a meta-model, which results in a so-called ensemble model.
Data description All 41 predictor attributes can be classified into four groups: demographic information, symptoms, patient background, and lab results (Table 2). About 57% of all hospitalized patients belong to the age category of 19-60 years, which also has the highest rate compared to the other categories among all ICU-admitted and surviving cases. For non-survived individuals, … In the case of the patients' background, diabetes is most prevalent: of the total of 956 hospitalized cases, 317 (33.15%) patients have diabetes. High blood pressure comes second, with an occurrence of approximately 25% among all cases, 28.31% among all ICU patients, 26.92% among surviving ICU patients, and 31.42% among deceased ICU patients. In the lab result category, the prothrombin time (PT) of 48.57% of the non-surviving ICU cases was more than 17 s. The total bilirubin of 30.08% of all ICU patients and 34.28% of non-surviving ICU persons was more than 1 mg per deciliter (mg/dL). Among the 35 deceased ICU cases, the AST lab results of 17 people were more than 54 units per liter of serum. For creatinine, 512 of all patients showed values of more than 1 mg/dL; among deceased ICU patients the frequency was 62.85%. The sodium level of 60% of non-surviving ICU patients was less than or equal to 137 milli-equivalents per liter (mEq/L). The Lactic Acid Dehydrogenase (LDH) results of 331 of all cases indicated values of more than 600 units per liter (U/L). Among the ICU patients, 55 cases out of 112 demonstrated an Erythrocyte Sedimentation Rate (ESR) of more than 42 mm per hour (mm/hr). Prediction models Using the data for prediction, we applied five ML algorithms. To do so, the dataset was split randomly, but balanced, in the ratio 80:20 for predicting ICU admission, and 70:30 for ICU mortality and ICU LOS; we chose the 70:30 split in the latter two cases to ensure sufficient data points in the test set. For model validation we applied ten-fold cross-validation.
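The random train/test split and the ten-fold cross-validation indices can be generated along the following lines (a Python sketch with names of our own choosing; the paper itself relies on caret in R):

```python
import random


def train_test_split(n_rows, test_frac=0.2, seed=0):
    """Randomly split row indices into a train and a test part,
    e.g. test_frac=0.2 for an 80:20 split and 0.3 for 70:30."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    cut = int(n_rows * (1 - test_frac))
    return idx[:cut], idx[cut:]


def kfold_indices(n_rows, k=10, seed=0):
    """Shuffle the row indices and cut them into k roughly equal
    folds, as used in k-fold cross-validation: each fold serves once
    as the validation set while the others are used for training."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```

In a class-balanced (stratified) variant, as used here after ROSE balancing, the shuffling would be done per class before the folds are cut.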
The Receiver Operating Characteristic (ROC) curves and the corresponding area under the curve (AUC) metric are used for comparing and evaluating the performance of the ML algorithms for each target variable. Cohen's kappa, accuracy, sensitivity, and specificity are further measurements used in the assessment. Note that kappa is a statistical metric for categorical variables which takes chance agreement into account; it is zero if the agreement coincides with random guessing, and one if there is perfect agreement; for more information see McHugh (2012). Accuracy refers to correct predictions, while sensitivity and specificity denote the rates of true positives and true negatives; they are defined as: Accuracy = (true positive predictions + true negative predictions) / all predictions; Sensitivity = true positive predictions / (true positive predictions + false negative predictions); Specificity = true negative predictions / (true negative predictions + false positive predictions). Based on these measures, one can consider the so-called balanced accuracy, which is especially useful when dealing with imbalanced classes, as in our case. It is defined as: Balanced accuracy = (Sensitivity + Specificity) / 2. Figure 8 demonstrates the ROC curves of the five ML algorithms (b-CART, XGB, LB, KNN, and RF) for predicting ICU admission of COVID-19 patients. There are only slight differences between the AUC scores, except for KNN. RF, with an AUC of 0.976, has the highest score, followed by XGB and LB; the other metrics are shown in Table 3. We see that overall RF, and after that XGB, achieves the best performance. Of the 19 selected independent variables to predict admission, total bilirubin and INR were among the most important features, in line with the findings of Sect. 3.4. The ROC curves for the ICU mortality prediction are displayed in Fig. 9. XGB obtained the highest AUC score (0.928), followed by KNN and RF with values of 0.917 and 0.868. Here LB was underperforming with only 0.746.
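The metrics defined above, together with Cohen's kappa, follow directly from the cells of a 2x2 confusion matrix, as in this illustrative Python sketch (the function name is ours):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, balanced accuracy and
    Cohen's kappa from the four cells of a 2x2 confusion matrix."""
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    balanced = (sensitivity + specificity) / 2
    # expected agreement by chance, as used by Cohen's kappa
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((tn + fn) / n) * ((tn + fp) / n)
    pe = p_yes + p_no
    kappa = (accuracy - pe) / (1 - pe) if pe < 1 else 1.0
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "balanced_accuracy": balanced,
            "kappa": kappa}
```

For a strongly imbalanced test set, accuracy can be high while balanced accuracy and kappa stay low, which is why the latter two are emphasized here.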
In Table 4, we find that XGB also performs well on the other metrics. As for variable importance, out of the total of 13 predictors that were selected for ICU mortality, O2 saturation and LDH were among the top five attributes of all five algorithms. LDH, with a value of 100, was the most important feature in the XGB, KNN, RF, and LB algorithms, whereas in b-CART, LDH came second with an importance score of 78.06 and O2 saturation was most important. Other important features that were identified by the models were urea and PT. For ICU LOS, Fig. 10 shows that XGB performs best with an AUC score of 0.795, while RF is close with an AUC of 0.778. Again, as for predicting admission, KNN showed the worst performance. In terms of the other performance metrics, demonstrated in Table 5, we find that the b-CART algorithm generally provides better results. But, except for KNN, the models have comparable scores, which is likely due to the fact that each model was fed with only four features. Ensemble models Ensemble algorithms are learning methods which aim to construct a robust and more accurate prediction model by combining multiple learning algorithms (Rokach, 2010). Homogeneous and heterogeneous ensembles are two ways of integrating weak learners in ensemble learning techniques (Alazzam et al., 2017). Weak learners, also known as base models, are any ML algorithms which perform slightly better than random guessing. Bagging and boosting are two popular homogeneous ensemble learning techniques, and stacking is a heterogeneous one. In this study, stacking ensemble learning was applied to predict the three intended outcomes (ICU admission, mortality, and LOS) and the results were then compared with the best-performing single ML algorithm. In stacked generalization, or stacking, various learning algorithms first make predictions; their results are then integrated through a combiner algorithm, which is itself another machine learning method.
The starting point for using an ensemble method to improve model performance is variation among the base models, i.e., low correlations. The idea is that the lower the correlations between the models, the better the accuracy of the ensemble model. For our case, the correlations between the ICU admission prediction algorithms are displayed in Table 6. The correlation coefficients range between -1 and 1, where -1 denotes a perfect negative correlation and 1 indicates a perfect positive correlation; here, of course, a correlation value of 0 is desired. The highest correlation is found between b-CART and XGB, with a value of 0.5518, whereas KNN and LB overall show the most correlations close to zero. For ICU mortality, the correlations are shown in Table 7. We see there that KNN and b-CART have the least in common with the other ML algorithms. Finally, in Table 8, studying the correlations among the algorithms for ICU LOS, we see that KNN has the lowest correlation with the other algorithms. Of course, its performance was also considerably off compared to the other four ML algorithms. To use the models in a combined fashion, one should stack them, which is illustrated in Fig. 11. The method consists of two phases. In our research, the first phase corresponds to the five ML algorithms (base models) that were trained to predict a target variable. The applied base algorithms are the same as the models used in Sect. 3.2. In the second phase, a so-called meta-model is introduced that integrates the predictions of these five base models, using their probability scores, into a single model, resulting in a single probability score. For a more comprehensive analysis, we consider as meta-model both a Generalized Linear Model (GLM) and a single-layered Neural Network (NN). Doing this stacking by means of the two ensembling algorithms for each of the three target variables helps to answer whether the prediction performances can be improved. The results are presented in Table 9.
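To make the second phase concrete, a GLM-style meta-model can be fitted on the base models' probability scores. The Python sketch below (hypothetical names; the paper's own stacking is done in R) trains a logistic regression combiner by plain gradient descent:

```python
import math


def train_meta_glm(base_probs, labels, lr=0.5, n_iter=500):
    """Fit a logistic-regression meta-model on stacked predictions.
    base_probs[i] holds the probability scores the base models
    assigned to sample i; labels[i] is the true class (0 or 1)."""
    n_models = len(base_probs[0])
    w, b = [0.0] * n_models, 0.0
    for _ in range(n_iter):
        for x, y in zip(base_probs, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b


def meta_predict(w, b, x):
    """Combined probability score of the stacked ensemble."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))
```

In the study's setup there would be five probability columns, one per base model, and the NN variant replaces this linear combiner by a single hidden layer; the meta-model learns to down-weight uninformative base models.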
A boldface value under the two metrics, accuracy and kappa, indicates that the corresponding prediction model performed best compared with its rival models. To predict ICU admission, the ensemble model with the neural network as meta-model achieved the best result, with an accuracy of 0.9577 (kappa of 0.9155). For ICU mortality, XGB does a slightly better job. To predict ICU LOS, an ensemble model seems to outperform a single model, but the results are mixed: considering accuracy, the GLM meta-model wins, but considering kappa one should opt for the NN. These inconclusive results likely stem from the fact that we deal with a limited dataset, both in number of features and in data points.

Discussion

Predicting a patient's disease course can help resource allocation and planning in the hospital, which is especially important when there is a huge influx due to a pandemic. For that purpose, we developed three predictive models, for ICU admission, mortality, and LOS, by applying machine learning algorithms to hospitalized COVID-19 patients' data. Thereby, we show how data can be utilized to support physicians and healthcare staff in the early stage of COVID-19 patient admission at the hospital. Specifically, we determine the probability of admission to the ICU, the probability of mortality, and whether a prolonged length of stay is likely. As reported in our study, one of the difficulties with clinical data, perhaps even exacerbated in times of crisis, is dealing with missing critical data such as lab results or accurate intake records. In our framework, instead of eliminating these values, an algorithm was applied to impute missing data. Another highlight is that, instead of relying on expert opinions, which might be unavailable for a variety of reasons, the algorithm provides the information and can thus accelerate the decision process. Moreover, the algorithm also selects which features (aspects) are important to consider.
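The imputation step mentioned above used the MICE algorithm in the study. As a toy illustration of one chained-equations step, the Python sketch below (not the study's R code; the lab values are invented) fills the missing entries of one variable by a regression on a complete one; full MICE cycles such regressions over every incomplete variable and adds random draws to preserve variability:

```python
def regression_impute(xs, ys):
    """One chained-equation step in the spirit of MICE: fit a simple
    linear regression of y on x using the complete cases, then fill
    each missing y (None) with its predicted value."""
    pairs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return [y if y is not None else intercept + slope * x
            for x, y in zip(xs, ys)]

# Hypothetical lab values: urea (complete) and LDH (two values missing).
urea = [20, 30, 40, 50, 60]
ldh = [200, 300, None, 500, None]
print([round(v, 1) for v in regression_impute(urea, ldh)])
```

For these perfectly linear toy values, the two missing LDH entries are filled in at 400 and 600; real data would of course be noisier, which is why MICE repeats the procedure over several imputed datasets.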
These features are similar to the biomarkers related to COVID-19 severity and mortality reported in the literature; see for example Bousquet et al. (2020) and Yan et al. (2020). According to the selected features for ICU admission and the variable importance of the applied ML algorithms, total bilirubin is one of the key attributes in determining the requirement for ICU care. Several studies, such as Araç and Özel (2021), Liu (2020), and Roedl et al. (2021), confirm the impact of bilirubin on COVID-19 severity and mortality; specifically, high levels of bilirubin are associated with higher probabilities of a severe disease course and mortality. Another prominent factor in the ICU admission prediction is creatinine. For example, Ghosn et al. (2021) and Lowe et al. (2021) conclude that this feature is associated with admission, severity, mortality, and LOS of hospitalized COVID-19 patients. Another important factor in our study to predict ICU admission and mortality is INR, which is confirmed in the systematic review of thirty-eight studies by Paliogiannis et al. (2021). These remarkable connections between our findings and the medical literature underpin the reliability and credibility of our model, and of using ML in general. For predicting the mortality of ICU patients, LDH is recognized as one of the important variables by our ML algorithms. The significance of this biomarker for the fatality rate of COVID-19 patients is reported in, for example, Bousquet et al. (2020) and Yan et al. (2020). As indicated in prior articles, e.g., Mejía et al. (2020) and Mansab et al. (2021), O2 saturation plays a prominent role in whether a patient will survive, and indeed it is among the top five important variables of each ML algorithm applied. Also, age is often recognized as important for predicting mortality, which is backed by Bonanad et al.
(2020), who perform a meta-analysis of COVID-19 cases from five different countries on the impact of age on the death rate. Finally, in our model to predict an extensive LOS at the ICU, the erythrocyte sedimentation rate (ESR) is crucial. A meta-analysis of 16 studies points out the association of inflammatory markers such as ESR with the severity of COVID-19 cases (Zeng et al., 2020). In addition, the importance of hematocrit (Hct) is supported by Kilercik et al. (2021), wherein much lower Hct values are observed among critical COVID-19 cases.

This research has several limitations. Firstly, the data stem from two local hospitals in Iran, which might limit the generalizability of the results. Secondly, this study is limited to one wave of the COVID-19 pandemic, so another wave, corresponding for example to a different strain of the virus, will likely yield slightly different results. We believe nevertheless, based on the discussion above, that the ML algorithms are capable of picking up the important medical factors in such a new situation, because the framework and procedures followed are generic, i.e., not case dependent. Thirdly, considering the modeling, a limiting factor is the small-sized dataset, especially for predicting the LOS. If more data become available, the predictive capability will likely improve. With more data at our disposal, the models can also be extended; currently, the predictions revolve around binary classification problems. But with, for example, ICU admission or mortality one might also be interested in the time until such an event: in the case of a predicted ICU admission, when will this admission likely take place, or, with a positive mortality outcome, what is the most likely moment that the patient will decease? For LOS, one might be more interested in predicting the actual duration or bed occupation than merely whether it will exceed 7 days.
Note that in these extensions, instead of classification, one should consider the task as a regression problem. Finally, we scoped our study to predicting three variables by means of five established algorithms; a logical extension is to consider other ML algorithms and to predict other relevant variables, for example ones that relate to a treatment plan.

Conclusion

The main objective of this research is to provide a comprehensive approach for clinicians and managers to better manage scarce resources such as ICU beds, staff, and ventilators. Therefore, we propose a data-driven methodology using machine learning (ML) to predict ICU admission, mortality, and length of stay (LOS) of hospitalized COVID-19 patients. To alleviate the issue of missing values, rather than deleting data, the MICE algorithm is applied. Then, because the imbalanced classes in the datasets degrade prediction performance, a synthetic data generation balancing method is used to create balanced datasets. For these three outcome variables, potentially relevant features are selected using the Boruta feature selection algorithm. Next, five different ML algorithms are applied (XGB, KNN, RF, b-CART, and BLR), all coded in the R programming language. They show promising performance scores in terms of accuracy and AUC. Finally, in an attempt to further boost performance, an ensemble model is employed, which for predicting ICU admission and LOS outperforms any single ML model, yielding accuracies of 0.95 and 0.71, respectively. However, for ICU mortality, XGB with an accuracy of 0.82 outperforms ensembling. Our research showcases how data, even when the dataset is limited and incomplete, can be leveraged by means of ML to support decision makers in times of a healthcare crisis, such as the COVID-19 pandemic on which this work centers.
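The synthetic balancing step summarized above can be illustrated with a SMOTE-style interpolation scheme; the sketch below is a dependency-free Python illustration (the study's exact generator is not specified here, and the two-feature minority samples are invented):

```python
import random

def smote_like(minority, n_new, k=2, seed=7):
    """Generate synthetic minority-class samples by interpolating between
    a sample and one of its k nearest neighbours (the idea behind SMOTE)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist2(x, m))[:k]
        nb = rng.choice(neighbours)
        u = rng.random()  # random point on the segment between x and nb
        synthetic.append(tuple(xi + u * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

# Hypothetical 2-feature minority class (e.g., ICU-admitted patients).
minority = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2), (1.1, 2.1)]
new_points = smote_like(minority, n_new=4)
print(len(minority) + len(new_points))  # 8
```

Because each synthetic point lies on a segment between two existing minority samples, the augmented class stays inside the region already covered by real cases rather than simply duplicating records.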
The fact that many of the key features of the prediction models coincide with factors found in the medical literature confirms the reliability and credibility of our approach and of using ML in general. The models studied focus on determining whether the patient will be admitted to the ICU, whether the patient will decease, and whether a prolonged LOS is likely. Besides enriching this study with more data, or repeating it in other healthcare settings (a different hospital, another COVID-19 wave, a new virus) or for other variables, predicting the actual timing of those events is a logical starting point for further research.

Funding No funding was received for this study.

Data availability Due to the sensitive nature of the data used in this study, the hospital authority was assured that raw data would remain confidential and would not be shared.

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval This study was approved by the ethical committee of Bushehr University of Medical Science in Iran. The study was conducted in accordance with the ethical standards of the Helsinki declaration.
Precise quantification of mixtures of bispecific IgG produced in single host cells by liquid chromatography-Orbitrap high-resolution mass spectrometry

ABSTRACT
Bispecific IgG are heterotetramers comprising 2 pairs of heavy and light chains. Co-expression of the 4 component chains in a single host cell typically yields the desired bispecific IgG plus up to 9 additional incorrect chain pairings. Several protein engineering strategies have been reported to facilitate the heterodimerization of antibody heavy chains or cognate pairing of antibody heavy and light chains. These technologies have been used to direct the efficient assembly of bispecific IgG in single host cells and minimize unwanted chain pairings. When purifying bispecific IgGs, the identification and quantification of low levels of closely related IgG contaminants are substantial analytical challenges. Here we have developed a robust high-throughput method for quantitative analysis of bispecific IgG preparations using novel online liquid chromatography in conjunction with an extended mass range Orbitrap-based high-resolution mass spectrometer. A mathematical method was developed to estimate the yields of the 2 isobaric species, namely the desired bispecific IgG and the light chain-scrambled IgG. The analytical methods described herein are anticipated to be broadly applicable to the development of bispecific IgG as drugs and potentially to other complex next-generation biotherapeutics.

Introduction
Bispecific antibodies are of growing interest for drug development, and at least 40 such molecules are currently in clinical studies. [1][2][3] Combining 2 (or more) antigen specificities within a single antibody can endow them with new properties, such as the ability to retarget effector cells to kill tumor cells. Bispecific antibodies can also serve as an alternative, or potentially an improvement, for antibody combination therapies.
1,2 Extensive technology development with bispecific antibodies in recent years has led to the generation of at least 60 different alternative formats or scaffolds. 1,2,4 The bispecific IgG (BsIgG) format has gained popularity because it may provide IgG-like properties, such as long serum half-life and optional effector functions, as well as the ability to tailor these Fc-associated functions. A BsIgG is a heterotetramer consisting of 2 pairs of heavy and light chains, with each pair providing a different antigen (or epitope) specificity. Efficient production of BsIgG using a single host cell can be challenging due to promiscuous pairing of the component chains. 5 Multiple strategies have been devised to overcome (or avoid) antibody chain pairing problems, as reviewed. 2,6 For example, efficient heterodimerization of the 2 heavy chains in BsIgG has been achieved by using the knobs-into-holes (KiH) mutations 7,8 and, more recently, by several other elegant strategies. [9][10][11][12] BsIgG were first produced efficiently in a single host cell using 2 different heavy chains containing KiH mutations in conjunction with a common light chain. 13 This strategy circumvents light chain mispairing, but constrains the antibodies that can be used in preparing BsIgG and may require purpose-designed antibody discovery strategies. 14 More recently, separately expressed half-antibodies containing KiH-modified heavy chains and different light chains have been assembled efficiently in vitro. 15 More general strategies for assembling BsIgG in single host cells have been developed by engineering antibodies for orthogonal pairing of the 2 light chains to their cognate heavy chains. [16][17][18][19] For example, a typical design will involve residue modifications at the heavy/light chain interfaces on one or both arms, in addition to mutations that facilitate heavy chain heterodimerization.
[16][17][18] The success of such antibody engineering designs in facilitating BsIgG assembly can be evaluated following transient co-expression of the component heavy and light chains in mammalian cells. The various IgG species produced are typically purified by protein A or protein G chromatography, and then the BsIgG component of the IgG mixture is quantified by liquid chromatography (LC) in conjunction with mass spectrometry (MS). [16][17][18] Nevertheless, the analytical characterization of BsIgG preparations remains challenging, and new methods are still needed. Native MS and ion mobility (IM) MS are emerging as important tools for the characterization of antibody-based products. 20 For example, native MS coupled to size-exclusion chromatography 21 and native IM MS 22 have been used to analyze BsIgG obtained from the CrossMab technology and antibody-drug conjugates, respectively, under more physiologically representative conditions. Previously, quadrupole time-of-flight (Q-TOF) LC-MS analyses have been used successfully to measure the relative amounts of different IgG species. 23,24 For example, Woods et al. 23 coupled a C4 reverse-phase LC system with an electrospray ionization (ESI) Q-TOF mass spectrometer to quantify homodimers and associated half-antibody impurities in BsIgG samples. The limit of quantification of antibody impurities was estimated as 2%, based upon spiking of standards into purified heterodimer. However, the Q-TOF methodology was not able to resolve IgG species close in mass, impairing sample quantification in some cases. Heck and colleagues have demonstrated quantitative high-resolution analysis of complex mixtures of antibodies by native MS using direct infusion. [25][26][27][28] The peak width of a single antibody charge state was narrower for an Orbitrap instrument compared to a Q-TOF instrument, which improves the quantification accuracy.
Moreover, for the Orbitrap, the centroid of the peak was shifted to slightly lower values, closer to the expected mass, due to more efficient desolvation and reduced adduct formation under native conditions. 28 Previous work from our lab has demonstrated the benefits of Orbitrap resolution for the identification of unwanted IgG byproducts down to 1% through the use of direct infusion after buffer exchange to either partially denaturing solvents or neutral pH (unpublished data). Although sensitive and effective, this workflow lacked the high-throughput capabilities necessary for large-scale impurity screening, largely because buffer exchange and manual infusion without upfront chromatography can be laborious, thereby limiting the number of samples that can be conveniently analyzed. Furthermore, distinction between the correctly paired BsIgG and the isobaric mispair was not addressed. Here, we describe an improved platform process for the analysis of BsIgG preparations containing IgG contaminants, using reversed-phase high-performance liquid chromatography (HPLC) coupled with Orbitrap-based high-resolution MS. IgG constructs were engineered to minimize product heterogeneity by deleting the carboxy-terminal lysine of the heavy chain (DK447) and by mutation to prevent N-linked Fc glycosylation (N297G). LC conditions and mass spectrometric parameters were optimized to enable the routine analysis of hundreds of BsIgG samples with high reproducibility and sensitivity. A mathematical method was developed to estimate the proportion of BsIgG in an isobaric mixture containing BsIgG and IgG with both light chains mispaired. Lastly, DNA ratios of the component chains used for single-cell co-expression were evaluated for their effect on BsIgG production. Taken together, the platform developed here for BsIgG quantification shows exquisite sensitivity and robustness, and has essential utility in evaluating BsIgG designs and in the development of BsIgG therapeutics.
IgG samples are typically deglycosylated prior to intact mass analysis by LC-MS, since enzymatic removal of the glycan attached to residue N297 in the CH2 domain of the heavy chain eliminates a major source of mass heterogeneity. Here, this mass heterogeneity was instead avoided by installing the N297G mutation into the heavy chain of both antibodies, thereby preventing N-linked glycosylation. Another common source of mass heterogeneity in IgG results from proteolysis of the heavy chain C-terminal lysine (K447), from one or both chains, during recombinant IgG production. 31 This additional source of mass heterogeneity was circumvented by deleting this lysine residue (DK447) from the heavy chain sequences of both the anti-HER2 and anti-CD3 antibodies. The heavy and light chains for the anti-HER2 and anti-CD3 antibodies were transiently co-transfected at equivalent DNA weight ratios into HEK 293 cells. The secreted IgG was affinity-purified by protein A chromatography from the cell culture conditioned media and analyzed by size-exclusion chromatography. The N297G and DK447 heavy chain mutations, alone or in combination, affected neither the transient expression yield of anti-HER2/CD3 BsIgG nor the high proportion of monomeric IgG observed in the size-exclusion chromatography profile (Fig. S1). Therefore, both of these modifications were incorporated into all later constructs so as to minimize mass heterogeneity for reliable MS analyses.

Detection of IgG using high-resolution LC-MS

In order to quantify IgG species, an HPLC instrument employing a supermacroporous reverse-phase column, MAbPac RP, was coupled to an Exactive Plus extended mass range (EMR) Orbitrap mass spectrometer; after optimization, a rugged, high-quality, baseline-resolved signal was rapidly obtained (Fig. 1). The performance of the HPLC system was first evaluated with 30 identical injections of a commercially available IgG mass standard.
Reproducibility was significantly improved by replacing all solvent-carrying biocompatible polyetheretherketone lines with stainless steel, and by optimizing the composition of the gradient solvents and the column temperature (Fig. S2A, S2B and Table S1). For the same 30 injections, the relative standard deviation (RSD) for the major peak parameters (retention time, area, height, width at half height, and asymmetry) was greatly reduced (Table S2). The carryover between injections was <0.6% for loading amounts of 10 μg IgG (Fig. S2C). Ruggedness, reproducibility, and low carryover were prerequisites for the high-throughput screening of large sample sets. Instrument parameters of the Orbitrap Exactive Plus EMR, such as the interface conditions (sheath gas, aux gas, S-lens RF level), the desolvation energy (CID, CE), and the trapping gas pressure settings (trapping gas pressure and entrance lens voltage), were optimized (see Materials and Methods). A sharp, symmetric peak in the total ion chromatogram (TIC) with a well-defined ion envelope resulted (Fig. 1A, B). Expansion of 2 adjacent charge states (Fig. 1C) demonstrates baseline resolution of all glycoforms of the glycosylated IgG standard. The mass accuracy for different IgG species was also evaluated, and the average mass error observed was 8 ppm, comparable to previous reports. 28

Precise quantification of BsIgG expressed in a single host cell

The quantification capability of this high-resolution LC-MS system was evaluated using BsIgG. For BsIgG production, anti-HER2 and anti-CD3 heavy chains carrying the KiH, N297G, and DK447 mutations (H1 and H2) and their respective light chains (L1 and L2) were co-expressed in HEK 293 cells, resulting in 4 major IgG species with minimal contribution from heavy chain homodimerization due to the KiH mutations (Fig. 2A). The observed IgG species included the correctly paired anti-HER2/CD3 BsIgG (H1L1/H2L2) and the isobaric light chain-scrambled IgG (H1L2/H2L1).
The additional IgG species contained 2 copies of either the anti-HER2 light chain (H1L1/H2L1) or the anti-CD3 light chain (H1L2/H2L2). The mass difference between the BsIgG and either of the IgGs containing 2 copies of L1 or L2 reflects the mass difference between the 2 light chains. To assess the limits of detection and quantification of the LC-MS platform for BsIgG samples, in vitro assembled anti-HER2/CD3 BsIgG standard (H1L1/H2L2) was titrated with decreasing amounts of the 2 mispaired common light chain IgG species (H1L1/H2L1 and H1L2/H2L2) (Fig. 2B and Table S3). For all titrations, the mass spectra showed baseline resolution between the 3 peaks. The close correlation (R² = 0.9998) between the spiked and measured percentages demonstrated the capability of the optimized LC-MS system to precisely quantify BsIgG species from single-host expressions with contamination levels lower than 1% (Fig. 2C). For all of these data, the standard deviation was below 0.2% (Table S3). Even at the lowest mispair levels, reproducibility was not compromised, which demonstrates the robustness of this platform. As the platform enabled the precise quantification of different IgG species in samples with low-level impurities, it was subsequently used to screen panels of BsIgG. Based on the titration experiments, our limit of reproducible quantification is estimated to be 1%, while the limit of detection (defined as 3 times the standard deviation) is 0.3%. Our experience is that peaks below 1% are repeatedly and reliably detected and deconvoluted. The high-resolution LC-MS system was then employed to analyze the protein A-purified BsIgG samples from single-host expressions. For anti-HER2/CD3 (Fig. 2D and Fig. S3A, C), the peaks of lowest and highest mass represent the common light chain mispaired species, H1L1/H2L1 and H1L2/H2L2, respectively. The intermediate peak represents a mixture of the BsIgG and the isobaric light chain-scrambled IgG.
The composition of the IgG mixture was measured as 47.4% BsIgG combined with light chain-scrambled IgG, along with 36.9% H1L1/H2L1 and 15.8% H1L2/H2L2. As an additional example, the expression of an anti-VEGFA/VEGFC BsIgG was assessed (Fig. 2E and Fig. S3B, D). Similarly to anti-HER2/CD3, anti-VEGFA and anti-VEGFC heavy chains carrying the KiH, N297G, and DK447 modifications (H1 and H2) and their respective light chains (L1 and L2) were co-expressed in HEK 293 cells. The desired BsIgG and light chain-scrambled IgG together constituted 70.6%, while the fractions of the IgG with 2 copies of the VEGFA or VEGFC light chains were 6.0% and 23.4%, respectively. To accurately assess the efficiency of correct chain pairing, the individual percentages of the correctly paired BsIgG and the light chain-scrambled IgG need to be determined separately.

Limits of the LC-MS method for resolving different IgG species close in mass

Successful quantification of BsIgG impurities using instrument conditions amenable to high-throughput analysis depends on the mass difference between individual antibody arms. To determine the quantification capabilities at a standard operating resolving power (17,500 at m/z 200), we tested available BsIgG of varying mass differences. Baseline resolution was achieved where antibody impurities differed by 118 Da (Fig. S4A). In contrast, with samples differing in mass by 55 Da, only partial resolution was achieved (Fig. S4B). Specifically, with a 55 Da difference, 2 shoulder peaks were present and amenable to quantification, albeit with reduced accuracy. Because higher resolving powers require additional spectral acquisition time, they are not compatible with the rapid, efficient chromatographic separation described herein. The peak width of the TIC for a typical LC-MS run was about 0.15 min, which represents 9 data points across the peak (Fig. 1).
Increasing the instrument resolving power (to 35,000 at m/z 200) in our automated method resulted in only 5 usable data points across the peak and overall lower quality results. Additionally, the sensitivity and signal-to-noise ratio were compromised at higher resolving powers, resulting in the loss of minor components.

Estimation of the BsIgG and light chain-scrambled IgG content from the MS data

A probability-based mathematical method was developed to estimate the percentage of each of the BsIgG (H1L1/H2L2) and the light chain-scrambled IgG species (H1L2/H2L1) from the combined quantity measured by MS (see Materials and Methods). The anti-HER2/CD3 BsIgG and the corresponding light chain-scrambled species were estimated as 23.7% each (Fig. 2D). For the anti-VEGFA/VEGFC, the BsIgG and the light chain-scrambled IgG were estimated as 68.5% and 2.1%, respectively (Fig. 2E). To validate the mathematical model used for calculating the component species of the intermediate MS peak, experiments were carried out using the anti-HER2/CD3 and anti-VEGFA/VEGFC BsIgG samples. First, the anti-HER2/CD3 IgG sample was treated with lysyl endopeptidase, which cleaved the heavy chains on the C-terminal side of the K222 residue, located in the upper hinge region. Based on the chain pairing in the IgG sample, 4 Fab species were anticipated in the digested mixture, namely H1L1, H1L2, H2L1 and H2L2 (Fig. 3A). The H1L1 Fab is derived from the BsIgG and the H1L1/H2L1 mispaired IgG. Similarly, the H2L2 Fab is derived from the BsIgG and the H1L2/H2L2 IgG. The H1L2 Fab is contributed by both the H1L2/H2L2 and H1L2/H2L1 mispaired IgG species, and the H2L1 Fab by the H1L1/H2L1 and H1L2/H2L1 IgG species. The contribution of each Fab species can be calculated from the known IgG content (Fig. 3B, C). For comparison with the calculated values, the digested anti-HER2/CD3 BsIgG sample was analyzed using the LC-MS method to quantify the percentages of the Fab fragments (Fig. 3B).
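The text does not spell out the formula, but under the two assumptions stated in the Discussion (independent assembly of the two Fab arms, and a BsIgG fraction at least as large as the scrambled fraction), the split of the intermediate isobaric peak follows from the two resolved mispair peaks. Writing p and q for the probabilities that H1 pairs with L1 and that H2 pairs with L2, the species fractions are a = p(1-q), b = (1-p)q and m = pq + (1-p)(1-q), so x = pq solves x² - mx + ab = 0. A Python sketch of this reconstruction (an assumed reading of the method, not the authors' code):

```python
import math

def split_isobaric_peak(a, b, m):
    """Split the combined isobaric peak m (BsIgG + light chain-scrambled IgG)
    given the resolved mispair fractions a = H1L1/H2L1 and b = H1L2/H2L2.
    Independence of the two arms gives x = pq for the BsIgG fraction, with
    x**2 - m*x + a*b = 0; the larger root enforces BsIgG >= scrambled.
    A slightly negative discriminant (measurement noise) is clamped to the
    random-pairing case x = m/2."""
    disc = m * m - 4 * a * b
    x = m / 2 if disc < 0 else (m + math.sqrt(disc)) / 2
    return x, m - x  # (BsIgG, scrambled)

# anti-VEGFA/VEGFC: 6.0% and 23.4% mispair peaks, 70.6% combined peak.
bs, scr = split_isobaric_peak(0.060, 0.234, 0.706)
print(round(bs * 100, 1), round(scr * 100, 1))

# anti-HER2/CD3: discriminant < 0, so random pairing, 23.7% each.
bs2, scr2 = split_isobaric_peak(0.369, 0.158, 0.474)
print(round(bs2 * 100, 1), round(scr2 * 100, 1))  # 23.7 23.7
```

With these rounded inputs, the sketch reproduces the reported estimates to within rounding: about 68.5%/2.1% for anti-VEGFA/VEGFC, and 23.7% each for anti-HER2/CD3, where the negative discriminant indicates essentially random pairing.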
In addition, the same experimental method was applied to the anti-VEGFA/VEGFC IgG sample (Fig. 3C). In both cases, the MS-measured percentages closely approximated the calculated compositions of the 4 Fab fragments. Thus, the data support the use of the mathematical formula for estimating BsIgG yields from LC-MS measurements.

Effect of DNA chain ratios for co-transfection on the percentage of BsIgG

Multiple weight ratios of anti-HER2/CD3 DNA were tested to study whether the chain ratios affect the percentage of assembled BsIgG. With the total DNA amount for co-transfection fixed, the L1:L2 ratio was varied while H1:H2 was held constant at 1:1. With the L1:L2 ratio varied from 2.8:1 to 1:2.8, the anti-HER2/CD3 BsIgG content varied over a narrow range, from 19.6% to 24.7% (Fig. 4A, Table S4). The highest BsIgG percentage was observed with an L1:L2 ratio of 1:1.4. In contrast, the percentages of the 2 mispaired common light chain species (H1L1/H2L1 and H1L2/H2L2) varied over more extensive ranges and in a reciprocal manner. The same DNA chain ratios were also used for production of the anti-VEGFA/VEGFC BsIgG. In this case, the BsIgG yields varied dramatically (from 15.8% to 68.5%) over the different chain ratios (Fig. 4B, Table S5). The percentage of BsIgG increased with increasing L1:L2 ratios from 1:2.8 to 2.8:1.

Discussion

As novel therapeutics, bispecific antibodies are of growing interest because they can be used to support and explore new mechanisms of action for disease treatments. Of all bispecific antibodies, BsIgGs have advantages, and thus are of particular interest, due to their resemblance to conventional IgG therapeutics. However, obtaining pure BsIgG is challenging because of the complexities that arise from the co-expression of 2 pairs of heavy and light chains. Therefore, both the discovery and the manufacturing of BsIgG therapeutics benefit from the development of efficient production methods.
As the technologies have improved, near-quantitative yields of BsIgG from single host cell expressions have been achieved. 7,8,[16][17][18] To confirm the success of engineering solutions, a precise and robust companion quantification method for discriminating between various engineering designs is critical. For that purpose, we developed a high-resolution LC-MS-based quantification platform compatible with high-throughput analytical needs. Antibodies can be redesigned to improve their homogeneity and their potential for development, as reviewed by Beck et al. 32 In this study, humanized anti-HER2 and anti-CD3 antibodies in IgG1 format were modified to avoid 2 sources of heterogeneity. Specifically, the removal of the carbohydrate by the heavy chain N297G mutation eliminated an extra deglycosylation step before LC-MS analyses. Additionally, the deletion of the heavy chain C-terminal lysine residue (DK447) circumvented the heterogeneity that may result from proteolytic removal of this lysine. 23 As a result, simpler IgG mass spectra could be obtained immediately following protein A chromatography. Previously, sample analyses for single-cell-expressed BsIgG were mostly performed with ESI-Q-TOF MS systems. By utilizing the high-resolution capabilities available on the Orbitrap platform, it was possible to acquire and interpret intra-charge-state, baseline-resolved MS data. With the inclusion of upfront chromatography, enabled by a newly designed column demonstrating minimal carryover (<0.6%) and excellent reproducibility (1.3% RSD for peak area), we quickly, repeatedly, and precisely quantified BsIgG impurities under 1% and detected impurities down to 0.3%. The performance of this Orbitrap-based high-resolution LC-MS platform is superior to that of the Q-TOF-based LC-MS system due to improved desolvation and an increased signal-to-noise ratio. The antibodies used in our study include impurities or modifications that differ by 118 Da or more, and these are readily and precisely quantified.
The higher resolving power necessary to distinguish IgG species close in mass, and the concomitant extended spectral acquisition time required, were not compatible with the efficient chromatography methods described herein. In practice, BsIgG combinations differing by <100 Da in mass are measured by direct infusion into the mass spectrometer at increased instrument resolution. Barring this complication, our general workflow has allowed us to determine accurately the distribution of the different IgG species in thousands of BsIgG samples to date, at a rate of ≈100 samples per 24-hour period. Another key aspect of our BsIgG quantification platform is the mathematical method that we developed to estimate the percentages of the BsIgG and the light chain-scrambled IgG species. This approach was validated by demonstrating the close approximation between the calculated and experimentally measured values of the different Fab fragments obtained by proteolytic digestion of BsIgG mixtures. Two main assumptions were employed in developing these computational methods. First, we assumed that the binding of a light chain to one heavy chain is completely independent of the other heavy chain, because it is commonly understood that the 2 Fab arms of an antibody are usually formed independently. Second, we assumed that the BsIgG should have a percentage higher than or equal to that of the light chain-scrambled IgG. This second assumption is consistent with previous reports that some co-expressed antibody pairs show a significant preference for pairing of cognate heavy and light chains, 16 while other antibody pairs have little (if any) preference for cognate chain pairing. 24,33 There are no reports (to our knowledge) of co-expressed antibodies exhibiting a preference for non-cognate heavy and light chain pairing. In this study, an intrinsic cognate heavy and light chain pairing preference was observed for the anti-VEGFA and anti-VEGFC antibodies, resulting in a 68.5% yield of BsIgG.
In contrast, the heavy and light chains of anti-HER2 and anti-CD3 antibodies paired randomly, giving rise to 24.7% anti-HER2/CD3 BsIgG (Tables S4 and S5). Before mathematical correction, these estimates were 70.6% BsIgG for anti-VEGFA/VEGFC and 49.4% BsIgG for anti-HER2/CD3, reflecting a deceptively high estimate of properly paired BsIgG in the latter case. The quantification platform developed here provides a potentially broadly applicable tool to detect and quantify any cognate chain preference when pairs of antibodies are co-expressed. For the anti-VEGFA/VEGFC and anti-CD3/HER2 BsIgG, the constant domains (CL and CH1) are identical for each antibody pair. Therefore, differences in the variable domain (VL and VH) sequences presumably account for the observed preferential cognate chain pairing for anti-VEGFA/VEGFC and random chain pairing for anti-HER2/CD3. Almost all of the variable domain sequence differences between the VEGFA and VEGFC antibodies reside in the complementarity-determining regions, with only a few differences in the framework regions (data not shown). Mutational analysis may help identify the residues responsible for the preferential cognate chain pairing, including the relative contributions of framework region and complementarity-determining region residues. We have no evidence that antibody light chains are able to swap heavy chain partners post-secretion. Once the heavy and light chains are assembled, a disulfide bond is formed between them. This interchain disulfide bond likely serves as a kinetic trap that prevents chain exchange. Even after purposefully reducing the light chain/heavy chain disulfide bond, we were unable to find evidence for light chains swapping heavy chain partners.
15 In the case of the anti-HER2/anti-CD3 BsIgG, we constructed the forced chain mispairings (anti-HER2 heavy chain with anti-CD3 light chain, or anti-HER2 light chain with anti-CD3 heavy chain) and demonstrated that both cognate heavy and light chains are required for antigen binding for HER2 as well as CD3 (J. Zhou, unpublished ELISA data). We also investigated the effect of the heavy and light chain DNA ratios used for co-transfection on the BsIgG yield. In most of the previously reported studies involving engineering of BsIgG, 18-20 the light chain DNA ratio was kept at a non-optimized 1:1. However, in this study, the anti-VEGFA/VEGFC BsIgG showed a much lower yield (40-50%) with the equal light chain DNA ratio, compared to the optimized yield (≈69%). Therefore, it may be necessary to evaluate multiple light chain DNA ratios to optimize the percentage of a selected BsIgG. This becomes especially important when comparing different designs or evaluating their performance on various antibodies. The quantification platform developed here offers high reproducibility and robustness, allowing for the rapid analysis of hundreds of clones in an automated fashion. Applications extend beyond evaluating BsIgG yields of different designs, such as screening clones in the development of stable cell lines. Thus, the platform has the potential to be broadly useful in the development of BsIgG therapeutics. Additionally, our platform may be suitable for other applications in the development of next-generation biotherapeutics, including quantification of bispecific antibody-drug conjugates 34 or mixtures of antibodies. 20

Constructs
The sequences of the anti-HER2 (huMAb4D5-8), 26 anti-CD3 (huMAbUCHT1v9), 17 anti-VEGFA 33 and anti-VEGFC 29,35-37 antibodies were obtained according to earlier publications. The EU numbering scheme for antibody residues is used throughout this manuscript.
38 The heavy chains of anti-HER2 and anti-VEGFA were modified with the "knob" mutation (T366W), and the heavy chains of anti-CD3 and anti-VEGFC with the "hole" mutations (T366S:L368A:Y407V). 7 The heavy chains were further modified by site-directed mutagenesis to prevent Fc glycosylation (N297G) and to delete the carboxy-terminal lysine (ΔK447). All the antibody constructs were cloned as human IgG 1 into the pRK5 mammalian expression vectors.

Antibody expression and purification
The plasmids encoding heavy and light chains for making the BsIgG were mixed according to the weight ratios described in the results section. The DNA mixtures were then co-transfected into Expi293F TM cells (Thermo Fisher Scientific). For subsequent sizing analysis, antibody expression was performed at the 30 mL scale. The IgG species were purified from the supernatant using MabSelect Sure protein A agarose beads (GE Healthcare Life Sciences) according to the manufacturer's protocol. For subsequent MS analysis, both antibody expression and purification were performed at the 1 mL scale with high throughput methods that were previously reported. 39 Total antibody yields were calculated based upon an extinction coefficient of 1.4 at 280 nm using a Nanodrop instrument (Thermo Fisher Scientific).

Gel filtration analysis
Antibody samples (10 μL) were injected onto a 4.6 mm-diameter TSKgel SuperSW3000 size exclusion column (TOSOH Bioscience) on an Infinity 1260 HPLC instrument (Agilent). The samples were eluted with 200 mM K2PO4, 250 mM KCl, pH 7.0 at a flow rate of 0.35 mL/min.

High-resolution LC-MS
An UltiMate 3000 RSLC (Thermo Fisher Scientific) LC system was configured with an HPG-3400RS binary gradient pump with a 400 μL static mixer, a WPS-3000TRS thermostatted split loop autosampler, a TCC-3000RS thermostatted column compartment, and a DAD-3000RS diode array detector with a semimicro flow cell (2.5 μL, 7 mm).
Control of the system was via DCMSlink through Xcalibur software, also provided by Thermo Fisher Scientific. A unique reversed phase column was designed specifically for the high throughput characterization of antibodies and antibody fragments by HPLC and LC-MS methods. The MabPac RP column (2.1 mm × 50 mm) consisted of a phenyl hydrophobic supermacroporous 4 μm polymeric resin with 1500 Å pores, capable of operation over a wide pH range (pH 0-14) and at temperatures up to 110 °C, offering optimal method development flexibility. The HPLC system was optimized for solvent path, gradient solvent composition and column temperature (Table S1). The final optimized MabPac RP run conditions used a flow rate of 300 μL/min and a column temperature of 80 °C. A binary pump was used to deliver solvent A (water containing 0.1% formic acid and 0.02% trifluoroacetic acid) and solvent B (90% acetonitrile containing 9.88% H2O plus 0.1% formic acid and 0.02% trifluoroacetic acid) as a gradient of 20% to 65% solvent B over 4.5 min. The solvent was then step-changed to 90% solvent B and held for 6.4 min to clean the column. Finally, the solvent was step-changed to 20% solvent B and held for 4 min for re-equilibration of the column. For the Intact IgG Mass Check Standard, a 500 ng sample was injected via auto-sampler for each run. For BsIgG samples, 3 μg of sample was auto-injected from a 96-well plate. The HPLC was coupled to a Thermo Exactive Plus EMR Orbitrap instrument (Thermo Fisher Scientific).
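For bookkeeping, the three-segment gradient described above can be written out as a simple piecewise function. This is an illustrative sketch only: the segment times and compositions are transcribed from the text, and the helper function is ours, not part of any instrument software.

```python
# Gradient program transcribed from the method description:
# linear ramp 20 -> 65% B over 4.5 min, step to 90% B held 6.4 min (wash),
# step back to 20% B held 4 min (re-equilibration).
RAMP_END = 4.5               # min, end of the 20 -> 65% B linear ramp
WASH_END = RAMP_END + 6.4    # min, end of the 90% B column wash
RUN_END = WASH_END + 4.0     # min, end of the 20% B re-equilibration

def percent_b(t):
    """Mobile phase %B at time t (minutes) into the run."""
    if t < 0 or t > RUN_END:
        raise ValueError("time outside the gradient program")
    if t <= RAMP_END:                      # linear ramp segment
        return 20.0 + (65.0 - 20.0) * t / RAMP_END
    if t <= WASH_END:                      # wash segment (step change)
        return 90.0
    return 20.0                            # re-equilibration segment
```

Laying the program out this way makes the total cycle time (≈14.9 min per injection) explicit, which matters when planning the ≈100 samples per 24-hour throughput discussed earlier.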
The IgG samples were analyzed using the following parameters for data acquisition: 3.90 kV spray voltage; 325 °C capillary temperature; 100 S-lens RF level; 15 sheath gas flow rate and 4 AUX gas flow rate in the ESI source; 1500 to 6000 m/z scan range; desolvation, in-source CID 100 eV, CE 0; resolution of 17,500 at m/z 200; positive polarity; 10 microscans; 3 × 10^6 AGC target; fixed AGC mode; 0 averaging; 25 V source DC offset; 8 V injection flatapole DC; 7 V inter flatapole lens; 6 V bent flatapole DC; 0 V transfer multipole DC tune offset; 0 V C-trap entrance lens tune offset; and trapping gas pressure setting of 2. Spectra were visualized using Thermo Xcalibur Qual Browser, then mass spectrum deconvolution was performed with Thermo Protein Deconvolution 4.0 under the following parameters: 10 minimum adjacent charges, 95% confidence noise rejection, 1000-6000 m/z range, 25 ppm mass tolerance, and 5-100 charge state range. The relative quantification was based on the intensity reported by Protein Deconvolution 4.0 of each individual peak versus the total summed intensities.

Calculation of BsIgG and light chain-scrambled IgG percentages
Note that H1L1 and H1L2 share the same heavy chain H1, and H2L2 and H2L1 share the same heavy chain H2. Assuming that the pairing of light chains to the knob heavy chain and the hole heavy chain are completely independent events, the population proportion of BsIgG, x = %[H1L1/H2L2], and the population proportion of the light chain-scrambled IgG, y = %[H1L2/H2L1], satisfy x·y = b·c, where b = %[H1L1/H2L1] and c = %[H1L2/H2L2]. From the MS quantification (e.g., Fig. 1D-E), b and c are measured directly, together with the combined percentage a = x + y of the two mass-degenerate species H1L1/H2L2 and H1L2/H2L1. Assuming that the percentage of BsIgG, %[H1L1/H2L2], is larger than or equal to the percentage of light chain-scrambled IgG, %[H1L2/H2L1], x and y are the roots of t² − a·t + b·c = 0, so the percentage of BsIgG can be calculated as x = a/2 + √((a/2)² − b·c), and the percentage of light chain-scrambled IgG as y = a/2 − √((a/2)² − b·c). In some experiments, e.g., anti-HER2/CD3 at H1:H2:L1:L2 ratios of 1:1:1:1.4 and 1:1:1:2.8 (Table S3), the value of (a/2)² − b·c may be a negative number.
When this is the case, it is manually forced to be zero; then x = y = a/2.

Preparation of Fab fragments
One hundred μg of protein A-purified IgG were incubated at 37 °C with 1 mAU of MS-grade lysyl endopeptidase (Wako Laboratory Chemicals) in 100 μL of 100 mM Tris-HCl, pH 8.0. The reaction was stopped after 1 h by the addition of 5 μL of 10% acetic acid. Digested samples were then analyzed by high-resolution LC-MS.

Disclosure of potential conflicts of interest
YY, GH, JZ, MD, LM, DE, CS, WS and PJC are current or former employees of Genentech, Inc., which develops and markets drugs for profit. This work was funded by Genentech, Inc. YY and LM were employees of Genentech while this work was conducted. LG is an employee of Thermo Fisher Scientific, the company that manufactures and markets the mass spectrometry instrumentation used.
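The clamped-quadratic calculation used above for splitting the combined percentage of the two mass-degenerate species can be sketched in a few lines of Python. The function name and numeric examples are ours, not from the paper; the variable names a, b, c, x, y follow the text.

```python
import math

def bsigg_percentages(a, b, c):
    """Split the combined MS percentage `a` of the mass-degenerate pair
    H1L1/H2L2 (BsIgG, x) and H1L2/H2L1 (scrambled IgG, y), given the
    measured percentages b = %[H1L1/H2L1] and c = %[H1L2/H2L2].

    Independence of light chain pairing on the two heavy chains implies
    x*y = b*c, and x + y = a, so x and y solve t**2 - a*t + b*c = 0.
    A negative discriminant is clamped to zero (then x = y = a/2),
    as described in the text.
    """
    disc = (a / 2.0) ** 2 - b * c
    if disc < 0:
        disc = 0.0
    root = math.sqrt(disc)
    x = a / 2.0 + root   # BsIgG, assumed >= y
    y = a / 2.0 - root   # light chain-scrambled IgG
    return x, y

# Illustrative case of fully random pairing: all four species at 25%,
# so a = 50, b = c = 25 and the split is x = y = 25.
x, y = bsigg_percentages(50.0, 25.0, 25.0)
```

Note how the clamp encodes the paper's second assumption: when measurement noise drives (a/2)² − b·c below zero, the best consistent estimate is an even split between the two indistinguishable species.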
Minipig as a potential translatable model for monoclonal antibody pharmacokinetics after intravenous and subcutaneous administration

Subcutaneous (SC) delivery is a common route of administration for therapeutic monoclonal antibodies (mAbs) with pharmacokinetic (PK)/pharmacodynamic (PD) properties requiring long-term or frequent drug administration. An ideal in vivo preclinical model for predicting human PK following SC administration may be one in which the skin and overall physiological characteristics are similar to that of humans. In this study, the PK properties of a series of therapeutic mAbs following intravenous (IV) and SC administration in Göttingen minipigs were compared with data obtained previously from humans. The present studies demonstrated: (1) minipig is predictive of human linear clearance; (2) the SC bioavailabilities in minipigs are weakly correlated with those in human; (3) minipig mAb SC absorption rates are generally higher than those in human and (4) the SC bioavailability appears to correlate with systemic clearance in minipigs. Given the important role of the neonatal Fc-receptor (FcRn) in the PK of mAbs, the in vitro binding affinities of these IgGs against porcine, human and cynomolgus monkey FcRn were tested. The result showed comparable FcRn binding affinities across species. Further, mAbs with higher isoelectric point tended to have faster systemic clearance and lower SC bioavailability in both minipig and human. Taken together, these data lend increased support for the use of the minipig as an alternative predictive model for human IV and SC PK of mAbs.

Introduction
The primary advantages of monoclonal antibodies (mAbs) as therapeutic molecules are their target specificity and their prolonged serum persistence.
During the development of mAb therapeutics, the selection of a relevant preclinical animal model is essential for the prediction of the human pharmacokinetic (PK) profile, as well as for assessing overall safety and exposure-response relationships prior to clinical studies. [1][2][3]
Advances in antibody engineering technologies have enabled the development of a diverse class of humanized and human antibodies consisting of IgGs with such altered properties as different molecular weights, domain architectures, glycosylation patterns, electrical charge, subclasses and interactions with Fc receptors or target molecules. 31 As a result, the impact of these properties on the PK and distribution properties of IgGs is of keen interest. In particular, it has been well established that the binding of the IgG Fc domain to the neonatal Fc receptor (FcRn) plays a principal role in the long serum persistence of IgG by salvaging it from a default catabolic pathway in the vascular endothelium and bone marrow-derived cells. [32][33][34][35][36][37] Hence, altering the binding affinity of IgG to FcRn can have a significant impact on serum IgG half-life, particularly in the case where affinity is reduced. 38 In addition, previous studies also indicate that nonspecific electrostatic interactions caused by differences in cell membrane surface charge and antibody charge can affect the tissue distribution and pharmacokinetics of mAbs. 39 For instance, modification of the isoelectric point of an antibody by approximately one pI unit or more can result in significant differences in its PK properties. [39][40][41] Therefore, for successful non-clinical PK evaluation of mAbs, understanding the various characteristics of the mAb, such as pI, specific and non-specific binding, and FcRn affinity, can be helpful. In this study, the PK of a series of therapeutic mAbs following IV and SC administration was assessed in Göttingen minipigs. The tested mAbs are active against soluble or membrane-bound targets with indications in oncology, inflammatory diseases or metabolic diseases. The derived PK parameters were compared with the known PK properties of these mAbs in humans.
In addition, given that binding to FcRn plays an important role in the disposition and serum half-life of mAbs, 32,37 the binding affinity of these antibodies to human, pig and cynomolgus monkey FcRn was compared. Further, the impact of various characteristics of these mAbs, including FcRn binding, isoelectric point (pI) and in vitro blood cell and plasma protein binding, on their PK properties was evaluated. The goal of this study is to pave the way for further evaluation of minipig as a potential translatable model for human PK of mAbs.

Results
Pharmacokinetic study in minipigs after intravenous and subcutaneous administration. The pharmacokinetic properties of eight therapeutic human IgG antibodies were evaluated in Göttingen minipigs after IV and SC administration. The PK of one additional mAb (mAb8) was tested after IV injection only. Profiles of the mean serum/plasma mAb concentration vs. time after a single IV or SC administration for each mAb are presented in Figure 1. The estimated PK parameters from a compartmental analysis of both IV and SC data are summarized in Table 1. Following IV injection, the serum/plasma concentration of most molecules tested exhibited a biphasic PK profile typical for mAbs, with a relatively rapid distribution phase, a slow elimination phase and a log-linear terminal phase with no signs of nonlinear pharmacokinetics. An exception is adalimumab, whose PK appeared to be affected by anti-therapeutic antibodies (ATA). The mechanism of mAb absorption following SC administration has not been clearly established. 7 Relatively little is known about the mechanism of SC absorption of mAbs in different species. The major parameters that affect this process are thought to include the role of lymph and blood capillaries in systemic absorption, cross-species differences in hypodermis morphology and physiology, drug formulation, stability of the molecule, the site of injection, the depth of injection, as well as the molecular properties of the mAbs themselves.
[8][9][10][11] Published data suggest that animal models are not necessarily reliable predictors of antibody SC PK in humans as there is often no apparent relationship in SC bioavailability between humans and animals. 7 Indeed, for larger biotherapeutics, absolute SC bioavailability has been reported to be higher in cynomolgus monkeys than in humans, while for biotherapeutics with molecular weights <40 kDa no clear relationship was obvious between human and animal data, both in rodents and cynomolgus monkeys. 7 Given these discrepancies, it is believed that the lack of predictability could be attributed to differences in hypodermis structure and physiology between humans and rodents or non-human primates. 12 With the increasing ethical concerns regarding the use of primates in non-clinical testing, attention has been increasingly focused on the potential use of minipigs as non-rodent alternatives for pharmaceutical testing. [13][14][15][16][17] As such, minipigs are becoming more frequently used for toxicological and PK studies of small molecules. [18][19][20] Recently, minipigs have been used to test the immunogenicity of therapeutic proteins; 21,22 however, limited data exist for the pharmacokinetics of macromolecules in minipigs. [23][24][25] In particular, to the best of our knowledge, there are no studies that describe the PK of mAbs in minipigs. The combined results from previous studies have demonstrated similarities between pig/minipig and human skin and lymph architecture 26,27 that are likely key contributors to SC absorption and bioavailability of macromolecules. In addition, the thickness of the epidermis and the stratum corneum as well as the lipid composition of the stratum corneum is similar between human and pigs, which has led to the frequent use of the pig/minipig model for dermal administration. 
28 The pig also has a tight link between dermis and underlying muscle, which is similar to the human situation, and these features have been attributed to the arrangement of elastic fibers in the hypodermis/subcutis, the target for SC administration. 29 The structure of the hypodermis in pigs differs from that in furred animals like rodents and cynomolgus monkeys, which have less abundant elastic fiber, and thus looser skin. 30 Similar to humans, the dermis of pigs is connected to the deep fascia via a fibrous network in the hypodermis. 29 Finally, pigs/minipigs have an easily accessible SC space with a tissue thickness similar to humans, making this species suitable to mimic the human situation for SC administration of proteins in preclinical studies. Because of these similarities, it is hypothesized that the PK of macromolecules in minipigs following SC administration may better resemble the SC PK in humans than that in other commonly used laboratory animals. A robust preclinical model for predicting human PK of mAbs is also desirable to study the effects of various molecular properties of mAbs on their in vivo behaviors. The estimated systemic bioavailability (assessed as drug fraction absorbed by compartmental modeling) following SC injection was variable among different molecules, ranging from 36 to 98%.

Comparison of minipig and human pharmacokinetic properties
Allometric scaling of clearance. In order to examine whether minipigs can be used to predict the clearance of mAbs in human, the correlation between the clearance rates of mAbs observed in this study and those previously obtained in human studies was examined using interspecies scaling. The results are tabulated in Table 2.
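The interspecies scaling referred to above follows the standard allometric power law CL = a·BW^w; with one animal species and human, the exponent can be solved for directly. A minimal sketch follows (the function names, the 15 kg minipig weight and the 60 mL/day clearance in the example are illustrative assumptions, not values reported in the study):

```python
import math

def allometric_exponent(cl_animal, bw_animal, cl_human, bw_human):
    """Solve CL = a * BW**w for the exponent w from two species.

    cl_* are absolute clearances (e.g., mL/day) and bw_* are body
    weights (kg); per-kg clearances must first be multiplied by body
    weight before applying the power law.
    """
    return math.log(cl_human / cl_animal) / math.log(bw_human / bw_animal)

def project_human_cl(cl_animal, bw_animal, bw_human, w):
    """Project human absolute clearance from animal data with a fixed exponent w."""
    return cl_animal * (bw_human / bw_animal) ** w

# Illustrative only: a 15 kg minipig (assumed weight) with an absolute CL
# of 60 mL/day, projected to a 70 kg human using the study's mean
# exponent of 0.98.
cl_human_proj = project_human_cl(60.0, 15.0, 70.0, w=0.98)
```

The same two functions express both directions of the analysis: fitting w per mAb from paired minipig/human data, and projecting human CL forward with a fixed exponent such as the 0.98 reported here or the 0.85 reported for cynomolgus monkeys.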
The calculated allometric scaling exponent (w) ranged from 0.75 to 1.17, with an arithmetic mean value of 0.98 and a standard deviation of 0.16, using data from the eight mAbs in the present study. MAb2 was excluded from the calculations due to an abnormally fast clearance in minipigs, albeit normal clearance in humans. Further, analysis of the linear correlation between the CL values observed in minipigs and humans showed a positive correlation between the two (r2 = 0.69) (Fig. 2A). In addition, mAb7 showed a slight trend of nonlinearity in the terminal phase after IV injection. The elimination half-lives of the various mAbs were relatively long, ranging from 6.9 to 26 d. The central volume of distribution (Vc) ranged from 36 to 62 mL/kg, consistent with the expected range of plasma volumes in animals. 42 The systemic clearance (CL) was slow, ranging from 2.5 to 11 mL/day/kg for most molecules, except for mAb2, which had an unusually fast CL of 36 mL/day/kg. Following SC injection, the antibodies were slowly absorbed into the systemic circulation, with a median time to maximum concentration (Tmax) between 1 and 4 d and with estimated rates of absorption ranging from 0.32 to 4.6 d-1. With the exception of adalimumab, whose PK appears to be affected by anti-therapeutic antibodies (ATA), the elimination profiles for SC and IV administration were nearly parallel for all mAbs tested, suggesting similar systemic elimination kinetics following SC or IV injections of these mAbs. Of the six mAbs with both human and minipig SC bioavailability data, four had higher bioavailability in minipig. Notably, three mAbs with the highest bioavailability in minipig (mAb1, mAb5 and adalimumab) also had the highest bioavailability in human. The SC rate of absorption is higher in minipigs than in humans for all molecules (Tables 1 and 3). Correlation between systemic clearance and bioavailability. Interestingly, there is an apparent correlation (r2 = 0.82) between the systemic clearance and bioavailability (after excluding mAb2, which had an unusual PK in minipig) such that as clearance increases, the bioavailability tends to decrease (Fig. 3A).
However, from the limited data available, no obvious correlation was observed in human (Fig. 3B). Effect of FcRn binding, pIs and blood/plasma protein binding on pharmacokinetics. To compare the FcRn binding properties across species and to determine if differences in FcRn binding affinity could account for observed differences in PK parameters, we evaluated the binding affinity of each IgG to recombinantly expressed human, pig and cynomolgus monkey FcRn via surface plasmon resonance (BIAcore). Purified FcRn was injected over immobilized IgGs, and the equilibrium dissociation constant (KD) for each interaction was calculated using a simple 1:1 binding model (Table 4). Values for KD are reported in nM and standard errors appear in parentheses for each value. No significant cross-species variability in the IgG-FcRn binding affinity was observed for any of the IgGs tested. In addition, binding across different IgGs was fairly consistent for a given species of FcRn. No apparent correlation was found between FcRn binding affinity and the clearance or SC bioavailability of the various mAbs in minipig or human (data not shown). To assess the potential impact of mAb pI on PK properties, the pI values for all mAbs were obtained either experimentally by IEF or from the literature and are shown in Table 4. Interestingly, a clear trend was observed: mAbs with higher pI values (>~9.0) appeared to have faster clearance (after excluding mAb2 with unusually fast clearance in minipig) and lower bioavailability in both human and minipig, with the exception of mAb5 (Fig. 4). To examine the effect of blood and/or plasma protein binding on the PK behavior of the mAbs, in vitro binding studies were conducted in minipig whole blood. Comparison of SC bioavailability and absorption rate. The available human SC bioavailability/fraction absorbed (F) and rate of absorption (Ka) data for these mAbs are summarized in Table 3.
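In this study, the fraction absorbed was estimated by compartmental modeling. As a simpler non-compartmental illustration of the same quantity (not the method used here), absolute SC bioavailability can be computed from dose-normalized areas under the concentration-time curve; the function names below are ours:

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule.

    times: sampling times in ascending order; concs: matching concentrations.
    """
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for t0, t1, c0, c1 in zip(times, times[1:], concs, concs[1:]))

def sc_bioavailability(auc_sc, dose_sc, auc_iv, dose_iv):
    """Absolute SC bioavailability: F = (AUC_SC/Dose_SC) / (AUC_IV/Dose_IV)."""
    return (auc_sc / dose_sc) / (auc_iv / dose_iv)
```

A compartmental fit, as used in the study, additionally separates the absorption rate constant (Ka) from elimination, which the plain AUC ratio cannot do; the AUC form is still the standard cross-check for F.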
The correlation between human and minipig bioavailability is shown in Figure 2B. There is a weak correlation between minipig and human SC bioavailability (r2 = 0.32) (Fig. 2B). In general, molecules with higher bioavailability in minipig tend to have higher bioavailability in human, although a few molecules showed higher bioavailability in minipigs than in humans. Out of the six mAbs with both human and minipig SC bioavailability data, one mAb had similar bioavailability in human and minipig (mAb4), and one had lower bioavailability in minipig (mAb5). In vitro binding studies conducted in minipig whole blood demonstrated that none of the mAbs bound to blood cells (Fig. 5A) or plasma proteins from minipig, as determined by size-exclusion HPLC (Fig. 5B). [Table 1 footnotes: n.a. = not available; (a) for both IV and SC animals; (b) only for SC animals; (c) mean ± standard error of the population mean parameters estimated from compartmental PK analysis; (d) mean (standard error not available, since T1/2β is a derived parameter); (e) percentage of drug absorbed; (f) median observed Tmax; (g) nonlinear PK, only linear clearance is tabulated (nonlinear clearance parameters: Vmax: 165 µg/day/kg, Km: 8.27 µg/mL); (h) not reported due to nonlinear pharmacokinetics.] Target-mediated, nonlinear clearance depends on the cross-reactivity of the mAb to its target in the minipig species. Allometric scaling of nonlinear CL, however, is hampered by the potential interspecies differences in target expression, turnover and affinity and has not yet been successful. 4 Conversely, a weak correlation was observed between the SC bioavailability of mAbs in minipig and human (Fig. 2B), although it should be noted that the number of molecules available for this analysis was quite limited. Out of the six mAbs tested, one had similar bioavailability in minipig and human, one had higher bioavailability in human and four had higher bioavailability in minipig. Notably, three mAbs with the highest bioavailability in minipig (mAb1, mAb5 and adalimumab) also had the highest bioavailability in human.
Discussion
Characterization of serum clearance, SC bioavailability and systemic absorption of mAbs in relevant animal models is crucial for designing improved formulations or drug delivery systems, as well as for the interpretation of exposure-response relationships and the development of useful PK models with predictive value. 8 Preclinical experimentation remains an essential component of antibody PK testing, but there is no requirement to rely solely on traditional animal species. 43 As established models, rodents and monkeys are generally the default animal species of choice to characterize mAb PK. However, there are known differences in the skin architecture and physiology between humans and rodents or non-human primates, 12,30 and the predictability of these species for human SC PK of macromolecules has been poor. 7 The minipig is a potentially better predictive animal model given its similarity in skin physiology to human. [26][27][28][29] To date, there have been no published data on the PK of mAbs in minipig, and no systematic study to compare the applicability of minipigs and non-human primates as translatable preclinical models for this class of biotherapeutics. 44 Our present study was conducted to fill this gap. Cross-reactivity of the respective mAb to its target has to be considered when conducting PK studies in animals, since target-mediated disposition can play a relevant role in the PK of mAbs in vivo. The cross-reactivity of the tested mAbs in minipigs was not studied, but it was not expected in this non-primate species. Also, the appearance of the concentration-time curves gave no indication of nonlinear pharmacokinetics, as they showed a log-linear elimination phase after IV administration. Thus, any potentially existing target-mediated CL pathway was saturated at the doses used in our study. An exception to this was mAb7, for which nonlinear pharmacokinetics was evident and considered in the compartmental PK model.
Thus, except for mAb7, minipig data reflect linear CL. In order to ensure an appropriate comparison between minipig and human CL data, human CL data at doses that saturated the target-mediated CL were used in the analysis for mAbs with relevant target-mediated CL in humans. The results from this study showed that, in general, the CL of mAbs in minipig is predictive of that in human, with an estimated allometric scaling exponent of 0.98 (Fig. 2A and Table 2), in the absence of relevant contributions from target-mediated drug disposition to the overall clearance. The interspecies scaling of the CL of mAbs has been evaluated in previous studies. 4,5,45 These studies suggested that cynomolgus monkeys can be successfully used to predict human CL of therapeutic antibodies using an allometric scaling exponent of 0.85. The data suggest that the minipig may be used as an alternative animal model to project human CL for mAbs. It is of note, however, that these projections address only the linear clearance processes, but not target-mediated, nonlinear clearance processes. Also, in the correlation of SC data from minipigs and humans, the potential role of target-mediated disposition needs to be considered. However, the chosen compartmental PK assessment and the PK parameters used to describe SC absorption (fraction absorbed and absorption rate constant) are independent of systemic target-mediated disposition processes. Thus, these parameters are still useful when assessed in a non-responder species, unless there is relevant target expression in the hypodermis or draining lymph vessels. The latter may lead to a relevant target-mediated 'first-pass clearance' during the absorption phase, which would reduce the fraction absorbed. Recently published work by McDonald et al.
described the SC bioavailability of therapeutic proteins across different species and showed that four out of the five marketed monoclonal antibodies reviewed had higher bioavailability in cynomolgus monkeys than in humans, with adalimumab having markedly higher bioavailability (96%) in cynomolgus monkeys. 7 Only one of the five had similar bioavailability in both species. 7 Of the mAbs discussed in this report, our in-house data suggest that two mAbs (mAb2 and mAb3) had higher SC bioavailability in cynomolgus monkeys (85% and 74%, respectively, unpublished data) than in humans. For these three antibodies (adalimumab, mAb2 and mAb3), the SC bioavailabilities measured in minipigs are closer to those obtained in humans than those obtained in cynomolgus monkeys. Hence, the results presented here provide initial evidence to suggest that minipig may be a better predictive model for human SC bioavailability of mAbs than cynomolgus monkeys, although additional data are needed to confirm this. Interestingly, a clear relationship was found between clearance and SC bioavailability in minipigs when mAb2 is excluded due to its unusually fast clearance (Fig. 3A). This relationship suggests that the same molecular characteristics or pharmacokinetic processes may determine systemic clearance and SC absorption, the latter being reflected in the SC bioavailability. Generally, the rate of absorption following SC administration was about 2- to 5-fold higher for most monoclonal antibodies in minipigs than in humans (Tables 1 and 3). This is consistent with previous studies demonstrating that the SC rate of absorption of recombinant human erythropoietin across species scales inversely with body weight. 46 Although the exact cause of the faster absorption in minipigs is not known, differences in mAb transport through the extracellular matrix of the SC space prior to lymphatic uptake 12 or differences in lymphatic transport 8-10 may be involved.
Nevertheless, it is unlikely that transport of the antibodies through lymphatic vessels is the rate-limiting step for SC absorption because the residence time in both the macro- and micro-lymphatic systems is estimated to be on the order of ~1 h, [47][48][49][50][51] which is much shorter than the time scale of SC absorption of mAbs (on the order of days). The serum clearance, SC bioavailability and rate of absorption of therapeutic mAbs could be affected by many factors, including, but not limited to, nonspecific binding, development of immunogenicity, target-mediated disposition, affinity to FcRn, pI, the site of SC injection and injection depth. 7 In this study, an unusually fast clearance was observed for mAb2 in minipig despite normal clearance in human and cynomolgus monkey (data not shown). While the exact cause of the fast clearance of this mAb in the minipig is unknown, it is potentially due to nonspecific binding in minipig tissues causing fast elimination of the antibody, given its clean blood/plasma protein binding profiles (Fig. 5A and B). The fast clearance of antibodies due to off-target binding has been previously reported in both rodents and cynomolgus monkeys. 52,53 The unexpected fast clearance of mAb2 in our study is likely a non-generalizable phenomenon specific to this particular antibody, although further monitoring of mAb PK in minipig is warranted.
In addition, the PK profiles of adalimumab in minipigs indicate the presence of ATAs, consistent with findings reported in reference 21. The effect of ATA on PK of mAbs may be addressed by excluding the periods of accelerated clearance due to ATAs from the PK assessment prior to scaling, as the ATA formation against a human mAb in a laboratory animal bears no relevance to the human situation. 54 Further, target-mediated disposition can also play a major role in the clearance of mAbs in vivo. In such cases, a binding species would be more appropriate for studying the target-mediated clearance of mAbs, whereas non-binding species (such as minipig) can still be useful to investigate the linear portion of the clearance pathway. These confounding factors should be considered when using minipig as a predictive model for evaluating human clearance of mAbs. Since FcRn protects circulating IgG from systemic elimination by recycling it away from the default catabolic pathway in vascular endothelial cells and bone marrow-derived cells, 32,37 altering the binding affinity of IgG to FcRn can have a significant impact on the PK of monoclonal antibodies. 38,55 In the study reported here, we characterized the in vitro IgG-FcRn interaction using recombinantly expressed human, pig and cynomolgus monkey FcRn in order to systematically investigate the role of FcRn in regulating the serum PK of IgG across these three species. Given the sequence identity between human and pig FcRn (~76%) and the presence of canonical intracellular trafficking motifs in the pig FcRn cytoplasmic tail (di-leucine and WXXϕ motifs), 56 it is hypothesized that the binding, trafficking and IgG salvage behavior could be similar between the pig and human/cynomolgus monkey FcRn.
Indeed, our current results indicate that, as expected, the binding affinities of the human IgG molecules tested are similar across these three species (Table 4), thus providing additional rationale for the use of minipigs and cynomolgus monkeys as translatable species for mAb PK. Notably, to our knowledge, this is the first controlled study to test the affinities of clinically used therapeutic mAbs to FcRn in a comparative PK study across humans, cynomolgus monkeys and pigs. In addition to the similar binding properties of these IgGs to FcRn, no apparent correlation was observed between the FcRn binding affinity and PK of the mAbs in minipigs or human. This is consistent with recent findings that 3- to 4-fold differences in FcRn-binding affinity arising from mutations in the Fc region of a single human IgG did not result in significant PK differences in cynomolgus monkeys. 34 This lack of a correlation suggests that small differences in FcRn binding may not be the major driver for the differences in the PK of mAbs in this case, and that unexpected differences in PK among antibodies with identical constant regions may be due to non-FcRn-dependent mechanisms, such as non-specific or target-mediated clearance. 41
We next sought to determine if the PK differences in the mAbs may be due to differences in their pI values since changes in isoelectric point have been shown to affect the PK behaviors of intact antibodies. [39][40][41] The pI values of the mAbs used in this study varied moderately, ranging from 6.1 to 9.4 (Table 4). Interestingly, a clear trend was observed for most mAbs tested (with the exception of mAb5), with higher pI values (greater than ~9.0) tending to be associated with faster systemic clearance rates and lower SC bioavailabilities in both humans and minipig (Fig. 4). This trend is consistent with previous findings showing that increases in the pI of antibodies resulted in increased blood clearance and decreased half-life in vivo due to the nonspecific electrostatic interactions between the anionic cell membrane surface and the antibody. [39][40][41] While these previous observations were primarily made in rodents, our study confirmed these findings in minipigs and humans, utilizing a set of antibodies with different specificities and frameworks. In addition, we demonstrated that pI not only affects systemic clearance, but also potentially SC bioavailability. It is worth noting that in vitro binding studies showed no evidence of protein or cell binding in the minipig plasma or whole blood, respectively, for any of the antibodies tested (Fig. 5), suggesting that the postulated nonspecific interactions driven by mAb pI may take place primarily in tissue, or that the affinity of these interactions may be below the assay detection limit. Taken together, the current study suggests that the characteristics/processes governing systemic clearance and SC bioavailability of the mAbs tested herein are less likely to be dependent upon FcRn, plasma protein or blood cell binding and are possibly related to the electric charge of antibodies and its effect on electrostatic interactions with negatively charged cell surfaces. 39 On the other hand, the lack of a clear correlation between human clearance and SC bioavailability may be due to other factors contributing to the clearance or absorption processes, such as target-mediated disposition.
To our knowledge, this study represents the first comprehensive evaluation of the minipig as a potential translational model for human IV and SC PK of therapeutic monoclonal antibodies. The results support the minipig as an alternative, less expensive and sufficiently predictive model over other commonly used species (e.g., cynomolgus monkeys) for evaluation of linear clearance rates and SC bioavailability of mAbs. This is further supported by the similar binding properties of human and pig FcRn for the mAbs tested herein. Moreover, both the systemic clearance and SC bioavailability of most antibodies tested appear to correlate with their pI values, suggesting that nonspecific electrostatic interactions may play a role in both processes in minipigs and humans. Taken together, this study serves as a starting point for the evaluation of the minipig as a potential alternative or better translatable model to predict IV and SC PK of mAbs in humans.

Materials and Methods

Antibodies. This study used recombinant IgG mAbs that were produced at Genentech or Hoffmann-La Roche (with the exception of adalimumab, which was produced by Abbott Pharmaceuticals and obtained from commercial sources). All non-disclosed mAbs except one were humanized IgGs, while adalimumab is a human IgG. The non-disclosed mAbs were of either IgG 1 or stabilized IgG 4 subtype, with a molecular weight around 150 kDa. Antibodies were expressed in Chinese hamster ovary cells and were purified using Protein A affinity chromatography followed by size exclusion chromatography. All materials and reagents used for this study were formulated and ensured to be pyrogen-free, either by limulus amebocyte lysate test or material certification. Concentration was measured for all the mAb formulations, which are specified in Table 5.

Animals. Female and male Göttingen minipigs were purchased from Ellegaard Göttingen Minipigs A/S, Dalmose, Denmark or Marshall BioResources, North Rose, NY. Animals were examined and weighed on the day following receipt, and were allowed to acclimate to the laboratory environment for 15-20 d prior to the first day of dosing. Prior to study initiation, animals were also trained repeatedly to be accustomed to the handling procedures for dosing and blood sampling. While on study, animals were housed individually in swine cages or housed jointly in the respective dose group. Housing and care were as specified in the applicable US, Danish and UK regulations.

Intravenous and subcutaneous pharmacokinetic studies in minipigs. The PK studies of all nine IgGs were conducted in Göttingen minipigs across four contract research organizations (CROs): Pipeline Biotech, Trige, Denmark; Charles River Labs, Ohio, USA; Covance Laboratories, Harrogate, UK; and LAB Research, Lille Skensved, Denmark. Before each study began, the animals were quarantined and acclimated for at least 7 d. During this period, they were weighed, physically examined by a staff veterinarian and determined to be healthy at the beginning of each study. Protocols were reviewed and approved by the Institutional Animal Care and Use Committee at each CRO. Minipigs were assigned into groups of three to five animals each, with similar average body weight in the SC and IV groups for the respective test substance. Animals received a single IV bolus dose of test article either via jugular cannula or via a catheter in the ear vein. SC dosing was done in the scapular (20 G needle) or inguinal area (27 G needle) (Table 5). Blood samples were collected from the femoral or jugular vein for each animal at pre-dose and multiple time points up to 28 d post-dose. Following the final blood collection on Day 28, all surviving animals were either returned to the stock population of the respective laboratory or euthanized by sodium pentobarbital injection followed by exsanguination and discarded.

Sample collection and processing. For preparation of serum samples, PK blood samples were collected into serum separator tubes (with clot activator) or into plain tubes. Samples were allowed to clot at room temperature for at least 20 min, but no longer than 1 h. The clotted samples were maintained at room temperature until centrifuged, commencing within 1 h after the collection time, at a relative centrifugal force of 2,000x g for 10 min in a refrigerated centrifuge set to maintain 4°C or, alternatively, at 3,500x g for 10 min at room temperature. The serum was separated from each of the blood samples within 20 min after centrifugation and transferred into two approximately equal aliquots of 0.5 mL each. Samples were held on dry ice until stored in a freezer set to maintain -60°C to -80°C. In the studies with adalimumab, mAb5 and mAb8, plasma was prepared using EDTA or heparin as anticoagulants.

Analysis of serum/plasma samples. Serum or plasma samples were assayed for mAbs 1 through 7 concentrations using enzyme linked immunosorbent assays (ELISA), where each analyte was captured using a recombinant human protein specific for that analyte. MAb8 and adalimumab were analyzed with a generic human IgG ELISA using an anti-human Fc antibody for both capture and detection. Minimum quantifiable concentrations for each assay were: mAb1, 65 ng/mL; mAb2, 280 ng/mL; mAb3, 81 ng/mL; mAb4, 81 ng/mL; mAb5, 391 ng/mL; mAb6, 500 ng/mL; mAb7, 20 ng/mL; mAb8, 12.5 ng/mL; and adalimumab, 12.5 ng/mL.

Pharmacokinetic data analysis. The serum mAb concentration-time data for all mAbs following IV or SC administration were analyzed by compartmental PK analysis using nonlinear mixed effects modeling. A two-compartmental model with first-order absorption and elimination kinetics was used to simultaneously fit the concentration-time profiles from individual animals following a single IV or SC injection for most antibodies. For mAb7, an additional nonlinear elimination term was included. The structural model is composed of two compartments with linear clearance from the first compartment ("central compartment") and linear inter-compartment exchange. Additionally, first-order absorption from an absorption site compartment was assumed in the case of SC administration. The structural model is demonstrated in Eq. 1, where Aabs is the amount of the drug in the absorption site depot, and Cc and Cp are the drug concentrations in the "central" and "peripheral" compartments, respectively. The structural model parameters included clearance (CL), volume of distribution of the central compartment (Vc) and peripheral compartment (Vp), inter-compartmental clearance (Q), first-order rate of absorption (Ka) and bioavailability or fraction absorbed (F) for subcutaneous administration. In the case of mAb7, the linear clearance term was substituted by the sum of a linear and a saturable (nonlinear) term. Inter-individual differences of parameters were modeled by log-normal distribution. Proportional and additive error models were used for the residual errors of the observed concentration data. For mAb1-4 the modeling was performed using NONMEM (version VI; UCSF; San Francisco, CA), using the FOCE method. For mAb5-8 and adalimumab, the nonlinear mixed effect modeling software MONOLIX 3 was used.

Inter-species scaling of CL. The clearance data in minipigs (CLminipigs) were extrapolated to clearance in humans (CLhumans) using the allometric equation (Eq. 3) CLhumans = CLminipigs × (BWhumans/BWminipigs)^w, where w is the scaling exponent for clearance. Based on the observed mean CLminipigs and CLhumans and the average body weights of the minipigs in the respective studies and the typical body weight of humans (70 kg), w was calculated for each antibody using Eq. 3. Human CL data at doses that saturated the target-mediated CL were used in the analysis for mAbs with relevant target-mediated CL in humans.

pI determination by isoelectric focusing (IEF). The pIs of native mAbs except mAb5 and adalimumab were determined by imaged capillary isoelectric focusing (iCIEF) using an iCE280 analyzer (ProteinSimple, Toronto, Canada). Solutions of anolyte, catholyte and pI markers were purchased from ProteinSimple. The pI of mAb5 was determined by conventional gel IEF on an Immobiline™ DryStrip pH 6-11 gel (18 cm; GE Healthcare Bio-Sciences).

Cloning of human, pig and cynomolgus monkey FcRn. The coding regions of the cynomolgus monkey, human and pig FcRn α-chain ectodomain and the cognate full-length β2-microglobulin light chain (β2m) genes were generated by gene synthesis (Blue Heron, USA). The coding regions of FcRn and recombinant β2m were subcloned into a previously described pRK mammalian cell expression vector. 58 For expression and purification of FcRn constructs, human embryonic kidney 293 cells were transfected using FuGENE (Roche) according to the manufacturer's protocol. After 24 h of incubation with transfection complexes, cells were switched to serum-free PSO4 medium (Genentech; 1 g/L Pluronic F-68, 5.5 g/L combination nonselect medium (Life Technologies), 4.3 g/L glucose, 1.22 g/L sodium bicarbonate, 0.1 g/L gentamicin sulfate (pH 7.1); 350 milliosmolar) supplemented with 5 mg/L recombinant bovine insulin and trace elements and grown for 7 d. Cells were collected by centrifugation and soluble FcRn was purified from the culture supernatants by pH-dependent binding to human IgG-Sepharose (Amersham). Briefly, supernatants were acidified to pH 5.8 with 50 mM MES and flowed over a 4 mL hIgG-Sepharose column at ~1.5 mL/min. After washing with >10 column volumes of wash buffer (20 mM MES, 150 mM NaCl, pH 5.8), bound FcRn was eluted with 20 mM HEPES, 150 mM NaCl, pH 8.0. Eluted FcRn was concentrated and further purified by size exclusion chromatography on a Superdex 200 column (Amersham) with PBS pH 6.0 as the running buffer. Fractions containing monomeric FcRn were pooled, and the concentration was determined on a Nanodrop 8000 spectrometer (Thermo Scientific) using the species-specific mass extinction coefficient at 280 nm.

Affinity measurements of binding to human, pig and cynomolgus monkey FcRn. Binding kinetics and affinity studies were performed on purified FcRn by surface plasmon resonance (SPR) using a Biacore T-100™ instrument (GE Healthcare, Piscataway, NJ). All experiments were performed at 25°C. IgGs (5-10 μg/mL) were immobilized onto three of the individual flow cells (FC) of a Series S CM5 sensor chip (GE Healthcare), using a standard amine coupling procedure according to the manufacturer's protocol, with FC1 serving as the reference flow cell. The immobilization levels were approximately 1,000 response units (RU) per flow cell. Eight serial 3-fold dilutions of each FcRn (10 μM-1.5 nM) were prepared in running buffer (25 mM MES, 25 mM HEPES, pH 5.8, 150 mM NaCl, 0.05% Tween-20) and were injected for 60 sec at a flow rate of 50 μL/min followed by a dissociation phase of 30 sec. Surfaces were regenerated between cycles by a single injection of running buffer at pH 8.0 (30 sec at 50 μL/min). Raw sensorgram data were reduced and referenced using the Scrubber II software package (BioLogic Software, Campbell, Australia) and fit to a simple 1:1 binding model under equilibrium conditions.

In vitro blood binding studies. All antibodies were radioiodinated using the indirect iodogen addition method as previously described in reference 59. The radiolabeled proteins were purified using NAP5™ columns pre-equilibrated in PBS. The specific activities of the antibodies were in the range of 11.5 to 15.5 μCi/μg. Radioiodinated antibodies were spiked into Göttingen minipig whole blood (Bioreclamation LLC, Hicksville, NY) followed by gentle mixing. Three 0.5 mL aliquots of blood were removed and incubated for 1 h at 37°C. The aliquots were centrifuged at 12,800x g for 5 min at 4°C to separate the plasma samples from the cell pellets, after which the cell pellets were washed with 0.5 mL cold PBS. The plasma samples were subsequently analyzed by size-exclusion high performance liquid chromatography (HPLC). Size-exclusion HPLC separation was performed on a Phenomenex™ BioSep-SEC-S 3000, 300 x 7.8 mm, 5 μm column. The mobile phase was PBS and the flow rate was 0.5 mL/min (isocratic) for 30 min. The ChemStation analog-to-digital converter was set to 25,000 units/mV, peak width 2 sec, slit 4 nM (Agilent Technologies). Iodine-125 was detected with a raytest Ramona 90 in line with a standard Agilent 1100 HPLC module system. The plasma samples were diluted 1:1 with PBS prior to loading on the column.
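As a rough illustration of the structural PK model described in the methods (two compartments with linear clearance from the central compartment, linear inter-compartmental exchange and first-order SC absorption), the corresponding ODE system can be integrated numerically. This is a sketch under assumptions: the parameter values below are arbitrary placeholders, not the fitted estimates from the study, and simple Euler integration stands in for the mixed-effects fitting actually performed.

```python
# Two-compartment model with first-order SC absorption, written in amounts:
#   dA_abs/dt = -Ka * A_abs                      (absorption depot)
#   dA_c/dt   =  Ka * A_abs - (CL/Vc)*A_c - (Q/Vc)*A_c + (Q/Vp)*A_p
#   dA_p/dt   =  (Q/Vc)*A_c - (Q/Vp)*A_p
# Only the bioavailable fraction F of the SC dose enters the depot.
# All parameter values are illustrative, not study estimates.

def simulate_sc_dose(dose, F=0.8, Ka=0.3, CL=0.2, Vc=2.0, Vp=1.5, Q=0.5,
                     t_end=28.0, dt=0.01):
    """Euler integration; returns (times, central concentrations)."""
    a_abs, a_c, a_p = F * dose, 0.0, 0.0
    times, conc = [0.0], [0.0]
    t = 0.0
    while t < t_end:
        d_abs = -Ka * a_abs
        d_c = Ka * a_abs - (CL / Vc) * a_c - (Q / Vc) * a_c + (Q / Vp) * a_p
        d_p = (Q / Vc) * a_c - (Q / Vp) * a_p
        a_abs += d_abs * dt
        a_c += d_c * dt
        a_p += d_p * dt
        t += dt
        times.append(t)
        conc.append(a_c / Vc)  # central concentration Cc = A_c / Vc
    return times, conc

times, conc = simulate_sc_dose(dose=10.0)
```

The simulated central concentration rises during absorption and then declines, the profile shape the compartmental fits describe; for mAb7 the linear clearance term would be replaced by the sum of a linear and a saturable term.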
All samples were counted for radioactivity on a gamma counter (Wallac 1480 Wizard 3"; EG&G Wallac, Turku, Finland).
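The equilibrium SPR analysis mentioned above (a simple 1:1 binding model fit under equilibrium conditions) reduces to the steady-state relation R_eq = R_max · C / (K_D + C). A minimal sketch, with illustrative R_max and K_D values rather than the measured FcRn affinities:

```python
# Steady-state 1:1 binding model used for equilibrium SPR fits.
# R_max and K_D below are illustrative placeholders.

def r_eq(conc_nM, r_max, kd_nM):
    """Equilibrium SPR response at analyte concentration conc_nM."""
    return r_max * conc_nM / (kd_nM + conc_nM)

# Eight serial 3-fold dilutions from 10 uM (10,000 nM) give nine
# concentrations ending near 1.5 nM, matching the range quoted above.
concs_nM = [10000.0 / 3 ** i for i in range(9)]
responses = [r_eq(c, r_max=1000.0, kd_nM=500.0) for c in concs_nM]
```

By construction the response is half-maximal at C = K_D, which is what an equilibrium fit of the dilution-series responses exploits to estimate K_D.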
The pathogenesis related class 10 proteins in plant defense against biotic and abiotic stresses

Introduction

Plant growth and survival is always influenced by several factors including abiotic and biotic stresses. Plants respond to these factors by inducing their defense mechanism, which includes expression of several effectors, receptors, signaling and protective molecules. One of the most commonly induced proteins during the plant defense mechanism is pathogenesis related (PR) protein. Accumulation of PR proteins is an integral component of innate immune responses in plants during pathogen attack or under abiotic stress conditions. The PR proteins not only accumulate locally in the infected leaf, but are also associated with the development of hypersensitive response (HR) or systemic acquired resistance (SAR) against infection by fungi, bacteria and viruses. 1,2 The PR proteins are grouped into 17 families depending upon their primary structure, serological relationships and biological activities. 3 Different families of PR proteins exhibit different antimicrobial and secondary metabolic enzyme activities, for example chitinases (PR3, PR4, PR8 and PR11), 4,5 β-1,3-glucanase (PR2), 4 osmotin with thaumatin-like protein (PR5), RNase (PR-10), defensins (PR12), 6 thionin (PR13), lipid-transfer protein (PR14) and oxalate oxidase (PR15 and 16). [7][8][9][10][11] Most of the PR protein families are extracellular in nature, but some of the PRs are found in the cytoplasm also, abundantly in the vacuole.
3 The role of different types of PR proteins during abiotic and biotic stresses and their defense responses in plants is very well documented in the literature; however, their mechanism of action is sparsely described. The PR-10 family is the largest among all different classes of PR proteins, with more than 100 members reported across more than 70 plant species. 3 This review article will summarize the current status, structural and functional diversity of PR-10 proteins with special emphasis on their role in abiotic and biotic stress tolerance.

PR-10 proteins: an overview

The PR-10 class of PR proteins was first described in parsley and referred to as 'classic' PR-10 proteins. 12 PR-10 proteins are ubiquitous proteins that have been identified in a number of dicot and monocot plant species. They are small, slightly acidic and resistant to proteases. PR-10 proteins are classified as intracellular PR (IPR) proteins and are present in the cytoplasm because they lack a signal peptide. They are closely related to a group of major tree pollen allergens and food allergens based on sequence homology to classic PR-10 proteins (~50% identity). The common allergens found in birch pollen, 13 celery, 14 apple, 15 peanut 16 and tomato 17 are included in the PR-10 class. Most PR-10 genes share an open reading frame (ORF) of 456 to 489 bp (154-163 amino acids) which is interrupted by an intron of 76-359 bp at a highly conserved position. 3 Amino acid sequence alignments of PR-10 proteins clearly show the most divergent and most conserved segments (Figure 1). This ORF codes for a small protein with conserved sequence features such as a glycine-rich loop or GXGGXGXXK motif (aa 47-55), a signature motif of PR-10 proteins which is conserved even in distant homologs.
This motif has remarkable sequence similarity to the P-loop and to the Bet v 1 motif (IPR000916) characteristic of proteins from the Bet v 1 superfamily; three amino acids, E96, E148 and Y150 (as positioned in Bet v 1), are possibly involved in ribonucleolytic activity. 18 The P-loop is a phosphate-binding loop found in nucleotide-binding proteins. 18 However, PR-10 proteins do not have affinity for ATP and the glycine-rich loop is conformationally different from the P-loop. 19,20 Interestingly, the glycine-rich loop is the most rigid structural element in the PR-10 fold despite being glycine rich. A characteristic START-like domain (IPR023393), an alpha/beta sandwich structural domain, is found in a wide variety of PR protein families. Bet v 1 and PYR/PYL/RCAR domains typically bind phytohormones such as brassinosteroids, cytokinins and abscisic acid. Superposition of the PR-10 structures reveals very significant structural differences, mainly at the C-terminal helix α3, displaying different axial shifts as well as a variable degree of deformation at the center and at its N-terminal connection with loop L9. 21 The internal cavity formed with the participation of α3 displays a remarkable variability in terms of volume. PR-10 genes are multigene families having low intra-specific variation but higher inter-specific variation, which makes them interesting phylogenetic markers. 22 For example, at least five PR-10 genes in pea, 23 eighteen Mal d 1 genes in apple, 24 ten Bet v 1 genes in birch, 25 eight Fra a 1 genes in strawberry, 26 six PR-10 genes in Solanum surattense, 27 eight in yellow lupine, 28 five in rice, 29 and eight Pru p 1 and Pru d 1 genes in peach and almond, respectively, 30 have been identified. They also tend to form physical clusters on specific chromosomes, e.g. in apple, 24 peach 30 and poplar 31 PR-10 genes.
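The GXGGXGXXK signature described above maps directly onto a regular expression, with X standing for any amino acid. A minimal sketch of scanning a sequence for it; the example sequence is synthetic, not a real PR-10 protein:

```python
import re

# Glycine-rich loop signature GXGGXGXXK; '.' plays the role of X (any residue).
MOTIF = re.compile(r"G.GG.G..K")

def find_glycine_rich_loop(seq):
    """Return (start, end, matched substring) for the first hit, or None."""
    m = MOTIF.search(seq)
    return (m.start(), m.end(), m.group()) if m else None

# Made-up example sequence containing the loop-like stretch GNGGPGTIK.
example = "MAAKLSGNGGPGTIKLDQPLAA"
hit = find_glycine_rich_loop(example)
```

In a real PR-10 sequence the hit would be expected near residues 47-55, consistent with the position quoted in the text.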
Lebel et al., 22 found that thirteen out of the seventeen Vitis vinifera PR-10 sequences are present on the chromosome in direct orientation, suggesting that most copies were produced by unequal crossing-over events, as described in Arabidopsis and rice. 32 Gene duplication events in the genome evolution process make new copies of a gene which may undergo modifications resulting in functional diversification. 33 These kinds of events are a significant source of evolution in plants; however, most of the time gene copies produced by duplication are rapidly lost through pseudogenization. Therefore, only a part of the numerous homologous sequences coexisting in a genome are functional genes. Another important aspect of PR-10 evolution is evident from differential patterns of expression among the different plant organs, i.e. root, leaf, stem and peduncle, indicating that the transcripts may represent functionally divergent genes. 34,35 Furthermore, some PR-10 proteins are constitutively expressed in plants while some are induced only under biotic stress, abiotic stress or during plant development, emphasizing functional diversification. 19,28 The silencing of MtPR-10-1 from Medicago truncatula led to the induction of a new set of PR proteins after infection with Aphanomyces euteiches, 36 suggesting that there is a relationship between PR-10 and other PR proteins.

Structural and functional diversity: decoy strategies to fine-tune the defense

PR-10 proteins are involved in many aspects of plant development, growth and defense, but their molecular function is still unclear. Various roles for PR-10 proteins have been inferred, such as involvement in enzymatic processes, secondary metabolite biosynthesis, antimicrobial processes, storage, membrane binding, transport, and phytohormone and other hydrophobic ligand binding. However, most of the studies exploring PR-10 functions were conducted in vitro.
[37][38][39][40][41][42][43] A protein with ribonuclease activity, showing ~60-70% sequence identity with two intracellular PR proteins from parsley but no homology with other known ribonucleases, was isolated from callus cell culture of Panax ginseng. 37 The RNase activity of PR-10 proteins was also detected in Bet v 1 and BpPR-10c from birch. 38,39 RNase activity has likewise been reported for LaPR-10 from white lupin, 40 LlPR-10.1B from yellow lupine, 20 BpPR-10c from birch, 41 GaPR-10 from cotton, 42 SPE16 from Pachyrrhizus erosus, 43 CaPR-10 from hot pepper, 44 SsPR-10 from Solanum surattense, 27 AhPR-10 from peanut, 45 and PsPR-10.1 and PsPR-10.4 from pea. 46-48 PR-10 proteins exhibiting RNase activity inhibit the growth of pathogens through a direct cytotoxic impact on pathogen cells, possibly participating in the induction of plant cell apoptosis and the development of hypersensitive reactions. 49 Despite a number of studies associating the RNase and antimicrobial activities of PR-10 proteins with plant immune responses, tissue-specific expression of PR-10 genes during plant growth and development needs critical evaluation to determine the role of PR-10 proteins. While the selective RNA degradation activity may be critical to controlling the transcriptional burst in response to molecular events leading to stress perception or a downstream hypersensitive/apoptotic response essential to the containment of infection foci, it may also be directly responsible for arbitration of an invading pathogen. PR-10 proteins behave as ribonucleotide binding proteins (RBP) and take part in virus resistance via binding to viral RNAs. 50 Structural analysis of PR-10 indicated that it has quite diverse sequences as well as highly conserved sequences.
PR-10 family has highly conserved regions including a specific domain (KAXEXYL), and the glycinerich motif (GXGGXGXXK), which is known as a RNA binding site, but whether these sites have specific binding affinity to target RNA is not clear as PR-10 is also known to be involved in defense functions during a variety of abiotic and biotic stresses. 21,30,46 PR-10 proteins have been reported to have several functions but there is no general function common to all members of this class. It is likely that the post translational modifications such as phosphorylation of the protein provide specificity for target RNAs, which in turn delimit potentially dangerous unspecific RNase activity. 44 One of the member of PR-10 family, CaPR-10 isolated from hot pepper (Capsicum annuum), showed phosphorylation. 44 Phosphorylation lead to enhanced ribonucleolytic activity against viral RNAs upon Tobacco Mosaic Virus (TMV) infection showing its direct involvement in plant defense. 44 Some PR-10 proteins from Arachis hypogaea were shown to be phosphorylated but their role in RNase activity was not shown. 51 A report shows that phosphorylation of CaPR-10 is enhanced by leucine-rich repeat 1 (LRR1) protein. 10 However, Pungartnik et al., 52 demonstrated no effect of phosphorylation on the RNase activity or substrate specificity in the cocoa TcPR-10 protein. The PR-10 protein from Theobroma cacao, TcPR-10 showed both antifungal activity against Moniliophthora perniciosa, and in vivo ribonuclease activity. 52 Although non-specific effects of the PR-10 family were observed, the possibility of helper proteins for specific binding of target RNAs, such as viral or host RNAs of PR-10 proteins, cannot be overruled. 21 Recently, Choi et al., 10 investigated a cytosolic interaction of CaPR-10and LRR1, an innate immune receptor recruited in response to pathogen attack. Compromised cell death mediated-defense signaling as observed in transgenic pepper infected with avirulent Xanthomonascam pestris pv. 
vesicatoria after suppression of the cytosolic PR-10/LRR1 interaction. On the contrary, enhanced resistance to P. syringae pv. tomato and Hyaloperonospora arabidopsidis was noticed under heterologous overexpression of PR-10/LRR1 in transgenic Arabidopsis, corroborating a role for PR-10 proteins in conjunction with LRR1 during the hypersensitive response (HR). 10 However, the mechanism of CaPR-10-LRR1 interaction-mediated defense, and how CaPR-10 recognizes host RNAs, are still unclear. On a similar note, an interaction between another family of PR proteins (PR4b) and LRR1 was demonstrated in hypersensitive cell death and the defense response in pepper by Hwang et al. 53 To investigate the role of three conserved residues, Glu96, Glu148 and Tyr150 (ginseng ribonuclease numbering), in the RNase activity, site-directed mutagenesis of those residues was performed, including some positions within the glycine-rich loop. The RNase activities of SPE16 and GaPR-10 are affected to a greater extent when residues of the C-terminal helix are substituted, while in the case of AhPR-10 the major effects are seen with mutagenesis at the glycine-rich loop. An elevated level of PsPR-10.4 activity is observed when Glu148 is mutated to alanine, and a decreased level with an H69L mutation. 48 Site-directed mutagenesis of the peanut AhPR-10 protein impaired the RNase and antifungal activities without any discernible effect on protein internalization by the fungal mycelium of Fusarium oxysporum and Rhizoctonia solani in a hyphal extension inhibition assay. 45 However, Biesiadka et al. 20 reported that despite a high level (76.8%) of identity and sequence conservation at the RNase-relevant positions in the two yellow lupine proteins LlPR-10.1A and LlPR-10.1B, only LlPR-10.1B showed RNase activity. Therefore, it is presumed that RNase activity is found in some PR-10 proteins, but it is not a general property of this class of PR proteins.
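As an aside on the motif notation used above, the conserved glycine-rich motif GXGGXGXXK (where X denotes any amino acid) maps directly onto a regular expression. A minimal sketch for locating it in a protein sequence; the sequence fragment below is invented purely for illustration:

```python
import re

# The glycine-rich motif GXGGXGXXK, with X = any residue, written as a regex.
MOTIF = re.compile(r"G.GG.G..K")

def find_glycine_rich_motif(seq: str):
    """Return (start_index, matched_substring) for each motif occurrence (0-based)."""
    return [(m.start(), m.group()) for m in MOTIF.finditer(seq)]

# Hypothetical fragment containing one conforming stretch (GAGGAGTIK at index 9).
fragment = "MGVFTFEDEGAGGAGTIKK"
```

This is only a pattern-matching illustration; whether a matching stretch is a functional RNA-binding site is, as the text notes, a separate experimental question.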
Cytokinins, a class of phytohormones, have also been accepted as integral components of the plant defense repertoire and of abiotic stress responses. 54 A subclass of PR-10 proteins has been structurally confirmed as cytokinin-specific binding proteins (CSBPs) despite marginal (<20%) sequence identity. 55 Some classic PR-10 proteins were found to form complexes with brassinosteroid analogs, 56 flavonoids 57 and cytokinins. 58 Constitutive expression of a ribonuclease-active pea PR-10 protein (PR-10.1) gene in Brassica napus seedlings enhanced the endogenous cytokinin pool while promoting seedling germination and growth rates under saline conditions. 46 Krishnaswamy et al. 59 suggested that PR-10 proteins may modulate cytokinin levels through an uncharacterized mechanism, which may include the degradation of tRNAs containing cytokinin moieties. Interestingly, an evolutionarily ancient and versatile polyketide cyclase/dehydrase-like signature domain (polyketide_cyc, Pfam: PF03364) is found in PR-10 proteins, which may be involved in the binding of cytokinins, flavonoids and steroids across cellular aqueous environments. 21 Zubini et al. 60 investigated the possible role of the two Pru p 1 isoforms in the defense response of peach to the fungal pathogen Monilinia spp. The RNase activity differs between the two proteins, and only that of Pru p 1.01 is affected in the presence of the cytokinin zeatin, suggesting a physiological correlation between Pru p 1.01 ligand binding and enzymatic activity. The difference in binding activity pointed toward differences in the binding pockets, based on homology modeling. PR-10 proteins have structural and sequence homology with mammalian lipid transport and plant abscisic acid receptor proteins and are predicted to have cavities for ligand binding.
61 A large internal Y-shaped hydrophobic cavity, revealed by the three-dimensional structures of PR-10 proteins, could be responsible for the transport of apolar ligands such as fatty acids, flavonoids, cytokinins or brassinosteroids within intracellular spaces. 62 The diverse roles predicted for PR-10 proteins in the plant immune system should be considered in light of discernible modifications of the structure and shape of this cavity that allow different ligands to bind. 20,63 In a recent study, three new members of the PR-10 family, the Fra a proteins, were identified in strawberry in connection with the flavonoid biosynthesis pathway, which is essential for the development of color and flavor in fruits; it was suggested that Fra a proteins could act as transporters or "chemical chaperones", binding flavonoid intermediates so that they are available to processing enzymes. 61 Furthermore, structural comparisons of the apo forms of Fra a 1E and the Fra a 3-catechin complex indicate that Fra a proteins show significant flexibility in the loop regions surrounding the cavity (loops L3, L5 and L7), and that ligand binding induces important conformational changes, suggesting an important role of PR-10 proteins in the control of secondary metabolic pathways. The discovery of a PR-10 homolog with unique organ/tissue-specific expression in the tapetal cells during anther development suggests a potential role for these proteins in the sporopollenin pathway. 64 The enzyme (S)-norcoclaurine synthase (NCS), which is involved in benzylisoquinoline alkaloid biosynthesis and catalyzes a Pictet-Spengler condensation of dopamine and 4-hydroxyphenylacetaldehyde to (S)-norcoclaurine, shares 28%-38% sequence identity with classic PR-10 proteins. 65,66 Four NCS enzymes, namely TfNCS from Thalictrum flavum, PsNCS1 and PsNCS2 from Papaver somniferum, and CjPR10A from Coptis japonica, share substantial identity with PR-10 and Bet v 1 proteins.
66 Similarly, the phenolic oxidative coupling protein (Hyp-1) from Hypericum perforatum, which catalyzes the condensation of two emodin molecules to the bioactive naphthodianthrone hypericin, shows approximately 40% sequence identity with classic PR-10 proteins. [67][68][69]

Signaling nodes: PR-10 in response to signaling pathways

Phytohormones such as abscisic acid (ABA), ethylene (ET), jasmonic acid (JA) and salicylic acid (SA) are major signaling molecules in plants during the stress response, and their involvement in the induction of PR-10 proteins has been investigated in various studies. 70 In general, SA is an important signal for general defense responses, especially against attack by biotrophic pathogens in so-called systemic acquired resistance (SAR), whereas the JA/ET signaling pathway is involved in responses to wounding and to abiotic stresses such as drought and high salinity, as well as in defense signaling against necrotrophic pathogens. 71,72 ABA has a crucial role in plant growth and development as well as in responses to a wide range of abiotic stresses, including drought, salt and cold. A diagrammatic representation of the expression of defense-related proteins and transcription factors in response to signaling molecules is shown in Figure 2. Expression of a rice PR-10 protein, RSOsPR10, is regulated antagonistically by the JA/ET and SA signaling pathways in response to environmental stresses. 72 Accumulation of JIOPR10 73 and OsPR10 74 transcripts was observed upon application of JA and SA to rice leaves. The canonical PR-10 fold is also found in the ABA receptor family known as PYR/PYL/RCAR (pyrabactin resistance/PYR-like/regulatory component of ABA response). 21 Overexpression of a rice transcription factor, OsWRKY30, activates the expression of the LOX, AOS2, PR3 and PR10 genes, increases endogenous JA levels and confers resistance to the rice fungal pathogens Rhizoctonia solani and Magnaporthe grisea.
75 Following ethylene treatment, enhanced accumulation of PR-10 transcripts was observed for OsPR10a from rice 76 and Pg1 from ginseng. 77 Two alfalfa PR-10 genes, MsPR10.1A and MsPR10.1B, were responsive to ethylene and ABA. 78 Analysis of the root proteome of moderately susceptible Medicago truncatula in response to infection by the oomycete root pathogen Aphanomyces euteiches revealed changes in the abundance of one group of ABA-responsive proteins (ABR17) of the PR-10 class, indicating that ABA-mediated signaling is involved in PR protein induction for disease resistance. 79 Therefore, although the mechanism of interaction between signaling molecules and PR-10 proteins remains largely unknown, the results of a number of studies suggest that PR-10 expression is triggered by the application of signaling molecules and that this response is important in host resistance.

Abiotic and biotic stresses: PR-10 response

Plants are responsive to environmental factors and can adapt to a certain amount of abiotic and biotic stress by activating survival strategies through changes in biochemical and physiological pathways. Activation of the plant immune system is important for plant survival under these extreme stress regimes. The PR-10 genes are important components of the plant growth and developmental system and are differentially regulated by various environmental stimuli such as pathogen attack and/or abiotic stresses. Some PR-10 proteins, such as AhPR-10 of Arachis hypogaea 45 and TcPR-10 of Theobroma cacao, 52 have been shown to possess antifungal activity through RNase activity and internalization by fungal mycelium. Other PR-10 proteins that possess antifungal activity are SsPR-10 from Solanum surattense, 27 the maize PR-10 proteins, 9 CsPR-10 from Crocus sativus 80 and JcPR-10a from Jatropha curcas.
81 A study by Soh et al. 11 also demonstrated enhanced expression and longevity of PR-10 gene transcripts in a disease-resistant pepper cultivar in response to the fungal pathogen Colletotrichum acutatum. In a recent study by Fan et al., 82 a novel PR-10 protein, Gly m 4l, was found to increase resistance to Phytophthora sojae infection in soybean (Glycine max [L.] Merr.). Gly m 4l transcripts were increased by SA treatment, were relatively low under MeJA and ET treatments, and decreased with ABA and GA3 treatments; it was therefore speculated that Gly m 4l might play a key role in soybean resistance to P. sojae mainly through SA signaling. Some PR-10 proteins also show antibacterial and antiviral activity. Ocatin inhibits the growth of phytopathogenic bacteria such as Agrobacterium tumefaciens, Agrobacterium radiobacter, Serratia marcescens and Pseudomonas aureofaciens. 83 The maize PR-10 proteins ZmPR-10 and ZmPR-10.1 have antibacterial activity against P. syringae. 9 Pepper CaPR-10 shows antiviral activity, degrading the viral RNA of tobacco mosaic virus. 44 Antinematode activity has also been reported for PR-10 proteins. CpPRI from Crotalaria pallida roots shows nematostatic and nematicidal effects against the root-knot nematode Meloidogyne incognita by inhibiting the papain-like enzymes present in the digestive tube and cuticle of the pathogen. 84 In addition to papain inhibition, CpPRI was observed to internalize and diffuse over the entire body of juvenile M. incognita nematodes in a fluorescence-based assay. 84 In another study, transcripts of genes encoding PR-10 (SAM22) were increased 5- to 10-fold after 12 days of infection and remained high even 10 weeks after infection. 85 Similarly, PR-10 expression was higher in resistant pine trees than in susceptible pine trees at 7 and 14 days post-inoculation with the pine wood nematode (PWN) Bursaphelenchus xylophilus.
86 Synchronized expression of PR-10 with peroxidase in resistant trees indicates that this gene may be induced by reactive oxygen species (ROS) such as H2O2, or that the protein may act as a proteinase against enzymes such as cellulases, beta-1,3-glucanase and pectate lyases that are secreted by the PWN. 87 PR-10 proteins have been shown to be transcriptionally responsive across a large range of abiotic stress environments such as drought, salinity, low and high temperatures, heavy metals, wounding and UV exposure. 9,72,88 Several proteins with similarity to PR-10 family members, up-regulated in peanut callus cultures subjected to salt stress, were identified through two-dimensional gel electrophoresis. 51 Transgenic tobacco overexpressing a peanut salinity-induced PR-10 gene (AhSIPR10) exhibited enhanced tolerance to salt, heavy metal (ZnCl2) and mannitol-induced drought stress. 88 The expression of CcPR-10 transcripts was induced by wounding and jasmonic acid treatments as well as by armyworm (Spodoptera litura), suggesting that CcPR-10 may be involved in cross-tolerance to abiotic and biotic stresses. 89 The abundance of two PR-10 proteins from maize (ZmPR-10 and ZmPR-10.1) was increased by multiple abiotic stresses, including SA, CuCl2, H2O2, cold, darkness and wounding, and by biotic stresses such as Erwinia stewartii and Aspergillus flavus infection. 89 In vitro cryoprotective activity has been exhibited by PR-10, suggesting a role for some PR-10 proteins in frost-tolerance mechanisms. 90 Another PR-10 homolog, the vegetative storage protein (VSP) from white clover (Trifolium repens L.), also accumulates under autumn and winter conditions and thus may endow plants with chilling tolerance. 91 Moreover, PR-10 proteins are overexpressed in Oxytropis (Fabaceae) species adapted to the Arctic as opposed to temperate species.
92 In a study by Vaas et al., 93 overexpression of PR-10a in suspension cultures of Solanum tuberosum caused enhanced osmotic tolerance, which in turn led to an enhanced ability for cryopreservation. Abiotic stress-induced Zea mays PR-10 genes (ZmPR-10 and ZmPR-10.1) were also up-regulated following infection with the pathogenic bacterium Erwinia stewartii and the fungus Aspergillus flavus in young maize leaves and immature kernels, respectively. 9

PR-10 proteins: A resource for crop improvement

The development of resistant cultivars with high yields and excellent quality is the most efficient, cost-effective and environmentally friendly approach to preventing the losses caused by abiotic and biotic stress. Although some plants have a remarkable ability to cope with extreme environmental onslaughts, these stresses nevertheless represent a primary cause of crop loss worldwide. Understanding the molecular processes regulating these metabolic adaptations and untangling the network of interconnected signaling pathways are important for developing stress-resistant plants. Figure 3 displays different methods which can be applied to develop plants using PR-10-mediated resistance. One approach to transferring PR-10-mediated resistance into commercial cultivars is the use of classical plant breeding; however, the experiments required are time-consuming and laborious. Recent developments in genomics have the potential to facilitate engineering for stress tolerance in plants. 94 Advances in high-throughput sequencing and phenotyping platforms have the potential to transform conventional breeding into genomics-assisted breeding and will address the challenge of increasing food yield, quality and stability of production through advanced breeding techniques. Next-generation sequencing can help identify the numerous PR-10 gene family members in a plant genome and characterize their associations with resistant phenotypes.
However, exploiting the increasing knowledge of PR-10 proteins to enhance abiotic and biotic stress tolerance in the field calls for caution. The sequence similarity of PR-10 proteins with known allergens is a major drawback in this area. 3,39 Another largely unexplored issue is that manipulation of a PR-10 protein might increase resistance to one pathogen or pest but, as an unwanted side effect, might increase susceptibility to other pathogens or pests, since induction or silencing of PR-10 may affect the expression of other defense-related genes. 36 Transgenic technologies have enormous potential to improve important crops by introducing genes of interest, often via Agrobacterium-mediated transformation or direct DNA transfer by particle bombardment. Characterization of PR-10 proteins and development of transgenic plants overexpressing PR-10 proteins are important steps in this direction. Table 1 lists the PR-10 genes which have been used to develop transgenic plants in different crop species. [95][96][97][98][99][100][101][102][103][104][105][106][107][108][109][110] Multi-location field trials of transgenic plants expressing PR-10 will likely be the next step for further evaluation.

Conclusion

Our global food supply is threatened by a multitude of abiotic and biotic stresses, and advanced molecular research techniques are helping to fill the gaps in our understanding of plant resistance mechanisms. PR-10 proteins are induced in response to pathogens and abiotic stimuli. Despite widespread reports of PR-10 involvement in combating the stress conditions sensed by plants, their functional mechanism is still unclear. Nevertheless, many successful attempts have been made to demonstrate the role of PR-10 proteins in stress resistance through transgenic approaches in many species.
Given the importance of PR-10 proteins for abiotic and biotic stress tolerance, a better understanding of the metabolic pathways involving PR-10 genes will be an exciting and rewarding pursuit for plant scientists in the years to come.
Self-monitoring of spontaneous physical activity and sedentary behavior to prevent weight regain in older adults

Objective

This study determined whether adding a self-regulatory intervention (SRI) focused on self-monitoring of spontaneous physical activity and sedentary behavior to a standard weight loss intervention improved maintenance of lost weight.

Design and Methods

Older (65–79 yrs), obese (BMI=30–40 kg/m2) adults (n=48) were randomized to a five-month weight loss intervention involving a hypocaloric diet (DIET) and aerobic exercise (EX), with or without the SRI to promote spontaneous physical activity and decrease sedentary behavior (SRI+DIET+EX compared to DIET+EX). Following the weight loss phase, both groups transitioned to self-selected diet and exercise behavior during a 5-month follow-up. Throughout the 10 months, the SRI+DIET+EX group utilized real-time accelerometer feedback for self-monitoring.

Results

There was an overall group-by-time effect of the SRI (P < 0.01); DIET+EX lost less weight and regained more weight than SRI+DIET+EX. The average weight regain during follow-up was 1.3 kg less in the SRI+DIET+EX group. Individuals in this group maintained ~10% lower weight than baseline, compared to those in the DIET+EX group, who maintained ~5% lower weight than baseline.

Conclusions

Addition of a self-regulatory intervention, designed to increase spontaneous physical activity and decrease sedentary behavior, to a standard weight loss intervention enhances successful maintenance of lost weight.

Introduction

The chronic disease and decreased mobility associated with obesity underscore the need to successfully treat the increasing prevalence of this condition in older adults. (1) Obesity treatment guidelines state that caloric restriction combined with moderate-intensity aerobic exercise should be the primary therapy for achieving weight loss in any age group.
(2) However, while this approach produces short-term weight loss, weight regain is common, (3)(4)(5)(6) highlighting the need to identify treatments that are effective for sustaining weight loss. Difficulty in maintaining lost weight is in part due to the 'energy gap' caused by adaptive thermogenesis and the drive to defend existing energy stores. (7)(8)(9)(10) One key adaptation is a reduction in both resting (11)(12)(13)(14) and non-resting energy expenditure, including spontaneous physical activity (SPA), defined as energy expenditure resulting from movement-related activities including postural shifts and daily activities, but excluding structured exercise. (12,(15)(16)(17)(18)(19)(20) SPA is a learned/conditioned component of energy expenditure with large individual differences shaped by personal attributes, social influence, occupational demands, and environmental factors. (21) A reduction in SPA increases sedentary behavior, a trend that is compounded with aging. (22,23) Reduced SPA and greater sedentary behavior contribute to several adverse health outcomes, including obesity and aging-related functional decline, independent of exercise behavior. (24)(25)(26) Our prior work shows that reductions in SPA with weight loss are predictive of weight regain. (27) Yet, to date, there is very little research that tests strategies that can be adopted and sustained to prevent declines in SPA that occur with weight loss in older adults. Self-regulation, particularly self-monitoring, is a powerful behavioral strategy to facilitate behavior change during weight loss. (28) However, the use of this strategy to raise awareness of energy expended through everyday activities to maintain or increase SPA, and to promote more effective self-regulation of SPA, has not been prospectively tested for weight loss maintenance. 
Thus, the purpose of this study was to determine whether adding a self-regulatory intervention (SRI), focused on self-monitoring of SPA, to a standard weight loss intervention results in less body weight regain following weight loss than a comparable intervention lacking this component.

Study Design

This study was a two-arm, 10-month pilot study in 48 older, obese adults. Participants were randomized to an intervention involving a hypocaloric diet and aerobic exercise (DIET+EX, n=24) or to the same weight loss intervention with the addition of a self-regulatory intervention (SRI) for promoting SPA (SRI+DIET+EX, n=24). Both groups underwent a controlled diet and four days/week of aerobic exercise for five months as described below. Following post-weight loss research assessments, both groups transitioned to a self-selected program of diet and exercise behavior during a five-month follow-up. Throughout the total 10-month period, the SRI+DIET+EX group was provided with an intervention component designed to promote or maintain a SPA level equal to or greater than each individual's baseline level (described below). Research data were collected at baseline, after the five-month weight loss phase (five-month time point), and after the five-month follow-up phase (10-month time point).

Participants

Forty-eight obese, older women and men were screened and randomized to one of the two groups (n=24/group). After a phone screen, those eligible underwent a medical history, physical exam, and graded exercise stress test.
Inclusion/exclusion criteria included: 1) age 65-79 yrs, 2) sedentary (<2 x/wk of structured exercise), 3) BMI=30-40 kg/m2, 4) weight-stable (±5%) within the past year, 5) non-smoking for the past year, 6) not currently taking medications that affect body weight, 7) normal cognitive function, 8) no evidence of clinical depression, and no evidence of heart disease, cancer, liver or renal disease, chronic pulmonary disease, physical impairment that would prevent walking, uncontrolled hypertension, or any contraindications for either exercise or weight loss. The study was approved by the Wake Forest School of Medicine Institutional Review Board and all participants provided written, informed consent to participate.

Interventions

Both groups (DIET+EX and SRI+DIET+EX) took part in a five-month weight loss intervention via a hypocaloric diet and supervised exercise. Daily hypocaloric intake levels (−600 kcal/d deficit) were assigned for each person; the individual calorie level was derived by subtracting the participant's required energy deficit from their estimated daily energy needs for weight maintenance. Weight-maintenance energy needs were calculated from the direct measurement of resting energy expenditure, applying an activity factor based on daily activities (1.2 for sedentary). Participants were provided two meals (lunch and supper) per day prepared by our Clinical Research Unit (CRU) kitchen. All meals were prepared individually after participants chose from a menu designed by the RD. No woman was provided with less than 1100 kcal/d and no man with less than 1300 kcal/d. The diet contained less than 30% of calories from fat and at least 0.8 grams of protein per kg of ideal body weight per day. In addition, participants were provided with a daily calcium (1200 mg/d) and vitamin D (800 IU/d) supplement. They were provided menus to guide their food purchasing and preparation of breakfast meals and snacks that were consistent with the prescribed calorie level.
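The calorie-prescription rule described above can be sketched as a small function. This is a minimal illustration of the stated arithmetic (maintenance needs = measured REE × a 1.2 sedentary activity factor, minus the 600 kcal/d deficit, floored at 1100 kcal/d for women and 1300 kcal/d for men); the function name and the exact rounding behavior are assumptions, not the study's actual software:

```python
def prescribed_intake_kcal(ree_kcal_per_day: float, sex: str,
                           activity_factor: float = 1.2,
                           deficit_kcal: float = 600.0) -> float:
    """Daily calorie prescription per the rule in the text (hypothetical helper).

    Maintenance needs are estimated as measured resting energy expenditure (REE)
    times a sedentary activity factor (1.2); the 600 kcal/d deficit is then
    subtracted, with sex-specific floors of 1100 (women) / 1300 (men) kcal/d.
    """
    maintenance = ree_kcal_per_day * activity_factor
    floor = 1100.0 if sex == "female" else 1300.0
    return max(maintenance - deficit_kcal, floor)

# e.g. a woman with measured REE of 1400 kcal/d:
# maintenance = 1680 kcal/d, 1680 - 600 = 1080 -> floored to 1100 kcal/d
```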
They were asked to consume only the food given to them, or approved items from the breakfast menu, and to keep a daily food/drink log, which was reviewed weekly to verify compliance. A compliance estimate was calculated as the average percentage (over the course of the 20-week intervention) of the prescribed caloric intake (kcal/d) that each participant reported on his/her food log. Body weight was measured weekly. The exercise component involved treadmill walking four days/week in our exercise facility under the direction of an exercise physiologist. Blood pressure and heart rate (HR) were measured before each session; participants warmed up by walking for 5 min at a slow pace and then walked at an intensity of 65-70% of heart rate reserve (HRR). The duration of exercise progressed from 15-20 min at 50% HRR in the first week to 30 min at 65-70% HRR by the end of the sixth week and thereafter. At least two HR readings were recorded each session to monitor compliance with the prescribed exercise intensity. Participants assigned to the SRI+DIET+EX group underwent an additional intervention aimed at promoting SPA through self-monitoring. Each participant in this group was provided with an accelerometer and individual cognitive-behavioral counseling sessions (see below) to prevent the decline in SPA expected from the hypocaloric diet and structured exercise program. SPA was operationalized as minutes of light physical activity based on the accelerometer output. The interventionist assisted with goal setting of SPA minutes and tailored the approach to increasing SPA to each individual participant. Thus, considerable attention was placed on the process of how to achieve increased SPA given the daily demands and environmental constraints faced by each individual. In consultation with the interventionist, participants were given personal SPA goals that were initially calculated to be at least 10% greater than baseline levels.
Individuals whose baseline SPA was less than 10 minutes were counseled to increase levels to at least 10 minutes. SPA goals were increased throughout the intervention, with the overall goal being to increase the total volume of light physical activity by 20% over baseline. This energy could be expended in a variety of ways, and an individual's SPA minutes resulted from a combination of several daily activities depending on individual habits and preferences. During the first week, participants in the SRI intervention were given a Lifecorder Plus® triaxial accelerometer (Suzuken Co., Ltd.; http://www.new-lifestyles.com/) with instructions for wear and documentation of usage. They were asked to wear the accelerometers daily for the length of the five-month intervention and the five-month follow-up. The units are small devices worn on the hip. With this model, participants cannot be blinded to the output and are thus able to instantaneously view their SPA minutes throughout the day and compare them to their daily SPA goal. Participants recorded SPA minutes (from the accelerometer) in a daily log to become more aware of the metabolic costs of different daily activities and, if necessary, to devise ways to increase SPA. Daily documentation included the participant's personal SPA goal, actual accumulated daily minutes of SPA, adherence information (e.g., wearing the monitor, meeting the goal, etc.), and the daily barriers identified when participants did not meet their goals. The accelerometers were removed during the on-site, structured exercise sessions. Participants in the SRI+DIET+EX group met weekly for the first six weeks with an interventionist to review progress.
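The goal-setting rule described above (initial goal at least 10% above baseline light-activity minutes, never below 10 minutes, progressing toward a 20% increase over baseline) can be sketched as follows; the function name is hypothetical and the progression parameter is simply the fractional increase currently in force:

```python
def spa_goal_minutes(baseline_min: float, progression: float = 0.10) -> float:
    """Daily SPA goal per the rule in the text (hypothetical helper).

    Initially the goal is at least `progression` (10%) above baseline
    light-activity minutes, with a 10-minute floor; over the intervention
    the study progressed `progression` toward 0.20 (a 20% increase).
    """
    return max(baseline_min * (1.0 + progression), 10.0)

# e.g. a baseline of 5 min/day yields the 10-min floor;
# a baseline of 30 min/day yields an initial goal of 33 min/day.
```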
During these 10-15 minute sessions, accelerometers were downloaded, the SPA self-report logs were collected, and participants received feedback based on a self-management model developed by Rejeski and colleagues. (29) This model assumes that the motive to change behavior is driven by the desire for specific outcomes along with both facilitative and inhibitory processes. The SRI intervention facilitated SPA behavior by clarifying the intended outcomes of increasing SPA and by providing specific, moderately challenging goals that were evaluated frequently across the day using the accelerometer. During brief counseling sessions that were collaborative by design, feedback was provided and factors that inhibited successful self-management were addressed using Perri and colleagues' approach to social problem solving. (30) If, during the first six weeks of treatment, a participant's level of SPA dropped below the established goals, additional counseling sessions were arranged. After the first six weeks, participants met bi-weekly with the SPA interventionist for the remainder of the five-month weight loss intervention, and then monthly during the five-month follow-up period.

Follow-up phase

During the five-month follow-up phase, all participants followed a self-selected diet and exercise routine (e.g., no dietary counseling, diet, or formal exercise provided), and participants in both groups were asked to complete follow-up visits at five-week intervals. At these visits, participants were weighed, and those in the SPA SRI group met briefly with the SPA interventionist to turn in accelerometers for download and battery changes, turn in SPA trackers, and briefly review progress.

Assessments

Height and weight were measured at baseline, after the five-month weight loss phase, and after the five-month follow-up phase (10-month time point) on the same scale, which is accurate to ±100 g and calibrated weekly.
Percent body fat, lean mass, and adipose tissue mass were measured by dual-energy X-ray absorptiometry (Hologic Delphi QDR), and maximal aerobic fitness was measured using expired gas analysis during a treadmill test to exhaustion. (31) Resting energy expenditure (REE) was measured at each time point in the morning after a 10-hour fast by indirect calorimetry using the ventilated hood technique. (27) Daily physical activity was assessed using a Kenz® Lifecorder EX® tri-axial accelerometer (Suzuken Co., Ltd.; http://www.new-lifestyles.com/), which provides valid and reliable measures of activity duration and intensity. (32,33) Participants in both groups wore the accelerometer for at least 10 hours per day for a period of seven days at each of the assessment time points, and they were directed to keep an adherence diary giving details as to when the monitor was worn and taken off during the day. Participants wore the accelerometers during their structured exercise time at the five-month time point and were instructed to maintain their regular level of physical activity outside of structured exercise. They were asked to wear the monitor at all times, except while bathing and sleeping. Participants were blinded to the results of the data during these assessment periods; thus, they did not receive any performance feedback from the accelerometers or the staff.

Statistical analyses

Baseline descriptive statistics were calculated for each group, and values are reported as mean ± standard deviation (SD) or as frequency in percentage. Absolute changes in all outcomes were calculated as the baseline value subtracted from the values at either the 5-month or 10-month time point. Observed means and group differences were analyzed using Student's t-tests, with means and standard errors reported. A linear mixed-effects model was used to assess within-group and between-group differences in the change of each outcome, after adjusting for age, gender and the baseline measure.
Least squares means and standard errors were also estimated from the same model. All analyses were performed using SAS v.9.3 (SAS Institute, Cary, NC). A p-value ≤ 0.05 was considered statistically significant.

Participant Characteristics at Baseline
A total of 46 (of the randomized 48) participants (DIET+EX: n=23; SRI+DIET+EX: n=23) completed the 5-month weight loss phase, and 41 participants completed the 5-month follow-up phase and returned for 10-month testing (DIET+EX: n=21; SRI+DIET+EX: n=20). Thus, there was 85% study retention over 10 months. Because the primary purpose of this pilot study was to examine whether the self-regulatory intervention enhanced weight loss maintenance, all analyses included only the 41 participants who completed the entire 10-month study. Table 1 shows baseline demographics and physical characteristics for these participants; there were no differences between study groups at baseline for any of the assessed variables. The participants who were lost to follow-up dropped out of the study due to life changes unrelated to the study interventions, including unanticipated illness, change in work schedules, new time constraints, relocation, or family circumstances.

Intervention Compliance
Exercise session attendance to the prescribed four-day-per-week intervention during the 5-month weight loss phase averaged 91±8% and 90±16% for the DIET+EX and SRI+DIET+EX groups, respectively. The absolute energy expenditure during the exercise sessions averaged 234±60 kcal and 205±56 kcal for the DIET+EX and SRI+DIET+EX groups, respectively. Self-reported compliance to the dietary intervention (calculated as a percentage of daily caloric intake over or under what was prescribed) was also good, with the DIET+EX group reporting an average compliance of 101.1±2.4% and the SRI+DIET+EX group reporting an average compliance of 99.8±4.3% to the prescribed diet.
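The adjusted change-score analysis described under Statistical analyses was performed in SAS; purely as an illustration, an analogous linear mixed-effects model can be sketched in Python with statsmodels. All data, variable names, and values below are hypothetical, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per time point.
rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "group": np.repeat(rng.choice(["DIET+EX", "SRI+DIET+EX"], n), 2),
    "time": np.tile(["5mo", "10mo"], n),
    "age": np.repeat(rng.integers(65, 80, n), 2),
    "gender": np.repeat(rng.choice(["F", "M"], n), 2),
    "baseline_wt": np.repeat(rng.normal(95, 12, n), 2),
})
# Change in weight from baseline (kg); simulated values.
df["delta_wt"] = rng.normal(-6, 3, 2 * n)

# Mixed-effects model of the change scores, adjusted for age, gender,
# and baseline weight, with a random intercept per participant --
# analogous in spirit to the SAS mixed model described in the text.
model = smf.mixedlm(
    "delta_wt ~ group * time + age + gender + baseline_wt",
    data=df, groups=df["id"],
).fit()
print(model.summary())
```

The group × time interaction term in this formula is what carries the between-group difference in change over the two follow-up points; adjusted (least squares) means can then be obtained from the fitted model.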
For participants in the SRI+DIET+EX group, the average SPA goal during the 5-month weight loss phase was 27±13 minutes (range=10-59 minutes) and did not include structured exercise time. Of the 20 participants who completed the SRI+DIET+EX intervention, 17 (85%) provided accelerometer process data for the entire 10 months; the remaining three stopped providing accelerometry logs at 19, 21, and 31 weeks. Participants reported wearing the accelerometer at least 10 hrs/day for an average of 87±14% of the days, and the daily SPA goal was met for 81±14% of the days. The average number of SPA minutes recorded (39±14 minutes) was higher than the average SPA goal (P < 0.0001). The most common reported barriers to full accelerometer compliance (10 hrs/day, every day for 10 months) were: device malfunction/need for battery change (13%), illness or health reason (9%), forgot to wear (7%), and too busy or time conflict (7%). Table 2 shows unadjusted and adjusted (for baseline weight, age, and gender) body weight changes by group during each phase of the study. During the initial 5-month weight loss intervention, the absolute amount of lost weight tended to be greater, and the percentage of weight loss was significantly greater, in the SRI+DIET+EX compared to the DIET+EX group. Moreover, the average weight regain during follow-up was 1.3 kg less in the SRI+DIET+EX group. At the 10-month time point, body weight in both groups was still significantly (P < 0.001) lower than at baseline, but individuals in the SRI+DIET+EX group had maintained an approximately 10% lower weight than baseline, compared to those in the DIET+EX group, whose weight was approximately 5% lower than baseline (Table 2; P < 0.01 between groups). Figure 1 shows body weight at each time point, adjusted for time, treatment, time×treatment interaction, and baseline weight. There is an overall group-by-time effect of the SRI intervention (P=0.005).
Intervention Effects on Body Weight
During the weight loss intervention, 45 of the 46 completers lost some weight (range= −15.4 to −0.5 kg; one person gained 2.1 kg). The majority of participants (34 of 46, 74%) lost at least 5% of their initial body weight; of the 12 participants who did not lose ≥5%, 9 (75%) were in the DIET+EX group. The range of weight changes during the 5-month follow-up phase was −9.3 to +4.2 kg. A total of 31 participants (76%) experienced some weight gain, and 18 of those (44%) experienced a weight gain of 2 kg or more. Of the 18 participants who gained ≥2 kg, 12 (67%) were in the DIET+EX group.

Intervention Effects on Daily Physical Activity and Energy Expenditure
Changes in resting energy expenditure and blinded accelerometer measures of daily physical activity are also shown in Table 2. To account for changes in body weight, physical activity energy expenditure (PAEE) and resting energy expenditure (REE) are expressed per kg of weight. Most notably, physical activity increased in both groups during the weight loss phase, and adjusted changes in minutes of light activity tended (P=0.08) to be greater in the SRI+DIET+EX group compared to the DIET+EX group. In addition, 10-month changes in REE were also greater in SRI+DIET+EX; this group experienced an overall increase in REE per kg body weight, whereas REE per kg body weight decreased overall in the DIET+EX group.

Discussion
The primary finding of this study was that adding a self-monitoring intervention, designed to increase light physical activity in order to prevent potential declines in SPA, to a standard weight loss intervention resulted in a lower body weight over a period of ten months compared to a group without the self-regulatory intervention. Specifically, the SRI+DIET+EX group lost more weight during active treatment, and regained less weight (essentially no weight regain) during the short five-month follow-up, than the DIET+EX group.
In addition, at the end of the weight loss intervention phase, the group that received the self-regulatory intervention tended to have higher levels of light physical activity and greater REE. These group differences in REE were sustained at ten months, suggesting that this self-monitoring intervention may have beneficial effects on both resting and non-resting energy expenditure. Presently, there are observational data demonstrating that a high level of physical activity distinguishes those individuals who are successful at weight loss maintenance from those who are not.(34-36) However, long-term adherence to a high volume of exercise may be particularly difficult for older and obese persons, who also may be more likely to compensate for the caloric expenditure of structured exercise by decreasing SPA, resulting in reduced total energy expenditure. Fortunately, evidence suggests that less intense activity is also beneficial for weight loss maintenance.(37) A few clinical trials show that less structured exercise of a lower intensity may be better for weight loss maintenance than higher-intensity exercise.(38,39) Thus, we posit that incorporating more movement into daily activity (i.e., increasing SPA and reducing sedentary behavior) during and following weight loss may be more beneficial than promoting structured exercise in older, obese adults. There is a growing body of literature suggesting it is feasible to intervene on sedentary behavior, which is an independent predictor of health outcomes when considered in conjunction with structured exercise.(24,26,40) The strengths of this study include the randomized controlled design, the carefully and successfully delivered interventions, and the excellent compliance to the daily self-monitoring and accelerometry use. Yet, there were limitations as well.
First, the study was designed as a pilot to test the feasibility and potential efficacy of adding the SRI intervention to a conventional weight loss protocol and, therefore, the sample size is small. Next, although there were group differences in body weight at both follow-up time points, there was only a trend for physical activity to be greater in the SRI+DIET+EX group. Therefore, we cannot deduce with certainty that the body weight differences primarily result from increased SPA. The SRI also involved other behavioral strategies known to influence weight loss success, including goal setting and problem solving, that may have also contributed to the larger weight loss of the SRI+DIET+EX group. Substantial research (28,29) suggests that goal setting and other self-regulatory skills, particularly self-monitoring, are critical for successful behavior change. This study provides important feasibility and early efficacy data regarding the benefits of using a novel self-monitoring strategy within a broader conceptual model of behavior change to modify the likely reduction in SPA that occurs during periods of negative energy balance among older adults. Our approach focuses on a behavioral strategy to eliminate the compensatory reduction in non-exercise activity seen in older adults who are losing weight with a hypocaloric diet and structured exercise training. While these data are compelling, they are not definitive, and a full-scale randomized clinical trial is needed to examine the effects of a SPA self-regulation intervention compared to structured exercise for the maintenance of weight loss in this population.
If our hypothesis is correct, and confirmed in a larger and longer randomized trial involving a state-of-the-art weight loss intervention, the findings would challenge the current standard of care (i.e., exclusive prescription of structured moderate-intensity exercise) for obesity therapy in older adults, leading to new obesity treatment guidelines for both initial weight loss and weight loss maintenance.

Table 2. Changes in body weight, energy expenditure, and daily physical activity during weight loss (baseline to 5 mos), during follow-up (5 mos to 10 mos), and after follow-up (baseline to 10 mos).
MYCOBACTERIUM FORTUITUM BACTERAEMIA IN AN IMMUNOCOMPROMISED PATIENT

A case of Mycobacterium fortuitum bacteraemia in an immunocompromised patient confirmed by four positive serial blood cultures is reported here. The patient was a known case of acute lymphoblastic leukemia (ALL) on intensive chemotherapy. The source of bacteraemia was most probably a peripherally inserted vascular catheter. After initiation of treatment with amikacin, to which the strain was sensitive, and clarithromycin, and removal of the central line, the patient's fever defervesced and repeat blood cultures were negative. This is the first time we have encountered an immunocompromised patient with M. fortuitum septicaemia in our hospital. The possibility of an infection with rapidly growing mycobacteria is important to consider when conventional organisms are not isolated in culture, especially in the context of patients with malignancy.

Atypical mycobacteria are known human pathogens and can cause disease in both healthy and immunocompromised individuals. According to Runyon's classification, there are four groups, Group IV being termed rapid growers, with Mycobacterium fortuitum and M. chelonae being the commonly isolated pathogens in this group. They most often cause cutaneous disease but rarely cause disseminated infections.1 In the department of microbiology at CMC Hospital, Vellore, we have isolated M. fortuitum from a variety of specimens, mainly from pus and tissue biopsy (unpublished data). We have earlier reported an isolation from blood and CSF culture2 in a patient who developed endocarditis and meningitis following a balloon mitral valvotomy. We report presently an immunocompromised patient who had M. fortuitum bacteraemia as confirmed by four positive serial blood cultures.
Case Report
A 27-year-old lady, a known case of acute lymphoblastic leukaemia (ALL) on intensive chemotherapy using the ALL-BFM86 relapse protocol, presented with fever of one week's duration following recovery from chemotherapy-related neutropenia. There were no localizing symptoms except for a mild non-productive cough. Clinical examination was unremarkable except for a fever of 101°F. There was no sinus tenderness and no evidence of skin or soft tissue infections. Laboratory investigations showed haemoglobin, 9.5 G/L; total white blood cell (WBC) count, 3.1 × 10⁹/L; differential WBC count: neutrophils 62%, lymphocytes 32%, band forms 3%, monocytes 1%, eosinophils 2%; platelet count, 249 × 10⁹/L. Liver function tests and renal function tests were normal; abdominal ultrasound and X-ray of the paranasal sinuses were also normal. Initial blood cultures (three in number) sent on the first day of fever were sterile, and the chest X-ray was unremarkable. Since she had a peripherally inserted central catheter (PICC), a diagnosis of probable line infection was considered; the catheter was removed, the tip was sent for routine culture, and the patient was started on single-agent cefotaxime, with defervescence of fever in 72-96 hours. Smear and routine culture from the tip of the central venous catheter remained sterile. Mycobacterium fortuitum was isolated from four consecutive blood cultures sent three weeks after the initial blood cultures. The patient was given a combination of clarithromycin (500 mg twice daily) for four weeks, with ciprofloxacin (750 mg twice daily) and amikacin (15 mg/kg/day) for two weeks. A repeat blood culture sent at the end of two weeks of therapy was sterile. Blood culture was carried out in the BacT/ALERT (bioMérieux Pvt. Ltd.) automated system. The medium used was the FAN Aerobic medium provided by the manufacturer. When the machine gave a positive signal, a smear was made and subculture on blood agar and MacConkey agar was done.
Detailed microbiological characterization was carried out using standard procedures.3 The catheter tip was cultured on blood agar, MacConkey agar, and thioglycollate broth.3 Smears of colonies from blood agar revealed gram-positive bacilli, which were negative for metachromatic granules with Albert's and Ponder's stains. They were catalase positive, oxidase negative, and did not produce H2S on triple sugar iron medium. Acid-fast staining showed short, regularly stained acid-fast bacilli (Fig.). Results of tests carried out for speciation of rapidly growing mycobacteria are given in the table. The organism was susceptible to ciprofloxacin, gentamicin, amikacin, ofloxacin, and tetracycline, and resistant to chloramphenicol, erythromycin, vancomycin, cotrimoxazole, rifampicin, piperacillin, and triple sulpha.

Discussion
Disseminated infection with the rapidly growing Mycobacterium fortuitum is rare, although cutaneous infections are known to occur.1 At the CMC Hospital, Vellore, we have isolated M. fortuitum from patients with skin and subcutaneous infections.4 We have also earlier reported a case of M. fortuitum endocarditis and meningitis after balloon mitral valvotomy.2 This is the first time we have encountered an immunocompromised patient with M. fortuitum septicaemia. Although four consecutive blood cultures grew the organism, the central venous catheter tip did not culture the same organism. This is difficult to explain, since a review of the medical literature shows that bacteraemia is most often secondary to a catheter infection. However, we carried out only qualitative and not semiquantitative culture, as suggested.5 Also, culture plates were discarded after 48 hours and not held for more than one week; the thioglycollate broth was kept for one week and subcultured. The PICC catheter used in the patient was a single-lumen, non-tunneled polyurethane (Braun Cavafix) type, and there was no evidence of catheter site or track inflammation.
An underlying immunosuppressed condition and the presence of a long-term central venous catheter are predisposing risk factors for M. fortuitum septicaemia, as was seen in our patient. The patient presented with fever with no localizing symptoms. A catheter infection was suspected, but routine cultures were negative. Four consecutive blood cultures positive for M. fortuitum, when three blood cultures taken three weeks earlier were negative, suggest an aetiological significance. The patient can be considered to have probable catheter-related bacteraemia due to M. fortuitum.5 Treatment directed at this cleared the bacteraemia, and a repeat blood culture, taken two weeks after initiation of treatment with amikacin and clarithromycin and removal of the central venous catheter, was negative. Awareness of possible infection with rapidly growing mycobacteria is important, especially when there are underlying conditions. Vascular catheters should be considered as a source of bacteraemia due to M. fortuitum in patients with cancer when cutaneous lesions compatible with dissemination are absent.6 The laboratory needs to be alerted so that appropriate procedures can be included. Appropriate treatment prompted by antibiotic susceptibility test results and catheter removal (if present) helps to successfully eradicate infection.
Transcriptome-Wide Association Supplements Genome-Wide Association in Zea mays

Modern improvement of complex traits in agricultural species relies on successful associations of heritable molecular variation with observable phenotypes. Historically, this pursuit has primarily been based on easily measurable genetic markers. The recent advent of new technologies allows assaying and quantifying biological intermediates (hereafter endophenotypes) which are now readily measurable at a large scale across diverse individuals. The usefulness of endophenotypes for delineating the regulatory landscape of the genome and genetic dissection of complex trait variation remains underexplored in plants. The work presented here illustrated the utility of a large-scale (299-genotype and seven-tissue) gene expression resource to dissect traits across multiple levels of biological organization. Using single-tissue- and multi-tissue-based transcriptome-wide association studies (TWAS), we revealed that about half of the functional variation acts through altered transcript abundance for maize kernel traits, including 30 grain carotenoid abundance traits, 20 grain tocochromanol abundance traits, and 22 field-measured agronomic traits. Comparing the efficacy of TWAS with genome-wide association studies (GWAS) and an ensemble approach that combines both GWAS and TWAS, we demonstrated that results of TWAS in combination with GWAS increase the power to detect known genes and aid in prioritizing likely causal genes. Using a variance partitioning approach in the largely independent maize Nested Association Mapping (NAM) population, we also showed that the most strongly associated genes identified by combining GWAS and TWAS explain more heritable variance for a majority of traits than the heritability captured by the random genes and the genes identified by GWAS or TWAS alone.
This not only improves the ability to link genes to phenotypes, but also highlights the phenotypic consequences of regulatory variation in plants.
KEYWORDS: endophenotypes; Fisher's combined test; genome-wide association studies; natural variation; transcriptome-wide association studies; variance partitioning

Discovery of variation that underlies quantitative traits remains central to the genetic improvement of agricultural species. Functional variation can alter coding sequence or act to regulate an intermediate phenotype. Regulating the abundance of phenotypic intermediates, such as mRNA expression or protein level, provides a more spatially and temporally subtle target for selection than coding sequence changes, which are more likely to be pleiotropic and therefore maladaptive (Mayr 1970). Thus, regulatory variation is the frequent target of both natural and artificial selection that shapes genomes across life, including domesticated plants (Carroll 2008; Hufford et al. 2012; Mayr 1970). It is likely that about half of functional variation is regulatory (Albert and Kruglyak 2015; Gusev et al. 2014; Rodgers-Melnick et al. 2016; Welter et al. 2014). It should also be noted that regulation can take place at any biological level of organization, from the epigenetic state (Law and Jacobsen 2010), to gene expression (Albert and Kruglyak 2015; Fu et al. 2013; GTEx Consortium 2015), to ribosome occupancy (Juntawong et al. 2014), to metabolites (Riedelsheimer et al. 2012), to protein abundance (Battle et al. 2015; Chick et al. 2016), furnishing multiple levels at which intermediate and terminal phenotypes can be associated. In standard genetic mapping approaches, like association or linkage mapping, associations between genetic markers and terminal phenotypes of interest are tested for significance (black arrow, Figure 1).
However, multiple levels of biological organization exist between the DNA sequence and the terminal observed phenotypic outcomes, enabling trait dissection to be conducted between intermediate levels of biological organization (hereafter endophenotypes, designated by an orange and red arrow in Figure 1). Associating endophenotypes with terminal phenotypes predates the use of molecular genetic markers for mapping. The use of linked observable traits and isozyme migration patterns are examples of tying markers from biological intermediates to terminal phenotypes of interest. Similarly, just as relationships between individuals can be calculated from molecular genetic markers (Flint-Garcia et al. 2005), endophenotypic similarity from isozyme markers can also be used to quantify relatedness (Dubreuil and Charcosset 1998). These same principles have recently been extended to phenotypic prediction guided by metabolites (Riedelsheimer et al. 2012) or by expression dysregulation (Kremling et al. 2018). However, the use of molecular intermediates, which are now readily measurable at large scale across diverse individuals, remains underexplored in plants for the inverse task of causal inference. Associating endophenotypes with terminal phenotypes has multiple distinct advantages. First, while genetic mapping is dominated by the covariance structure of neighboring SNPs and complex haplotypes, endophenotypes provide orthogonal information that often permits inference regarding biological mechanism, which may not be possible from genetic variants alone. Second, genetic mapping often points to intergenic (Wallace et al. 2014) regulatory variants that are not within the coding sequence of the gene that alters the phenotype (Albert and Kruglyak 2015). Therefore, an association signal cannot directly be tied to a corresponding gene and may even be in the body of a second unrelated gene (Tishkoff et al. 
2007) or in the case of synthetic association, between multiple true causal variants affecting different genes (Dickson et al. 2010). Association tests with intermediate expression phenotypes do not suffer from these limitations. Third, the abundance of endophenotypes is largely independent of linkage disequilibrium (LD), unlike in the case of genetic markers. In other words, even multiple genes that are perfectly linked, and thus not independently observable in separate individuals, can be prioritized for association with a trait because their expression patterns are independent. This is of greatest utility in species where linkage disequilibrium is extensive or where making high-resolution mapping populations is not feasible. Intermediate phenotypes, such as expression, can also integrate the signal from changes in multiple components of a network, which may not be individually detectable either because their effects are small or changes to the peripheral network components occur at low frequencies. Similarly, intermediate phenotypes can integrate a phenotypic signal from underlying genetic variants for which low frequencies preclude direct detection. The most deleterious of variants are expected to segregate at the lowest frequencies (Gibson 2012;Henn et al. 2015) and, thus, escape detection by mapping without prohibitively large sample sizes. However, rare deleterious variants can be expected to drive common maladaptive patterns in intermediate phenotypes that are thus more easily detected through endophenotype association tests like transcriptome-wide association studies (TWAS) (Hirsch et al. 2014;Pasaniuc and Price 2017). Methods for integrating expression association tests with GWAS have also been used extensively in the human context as shown by Gusev et al. (2016) and Mancuso et al. (2017). 
However, those methods rely on summary statistics, LD scores, and expression imputation and are computationally more intensive than the more accessible Fisher's combined test whose utility and improved power over TWAS or GWAS alone we have shown here for the first time and recommend for other researchers in model contexts. Here, we illustrate the power of using gene expression endophenotypes measured in a large 299-individual, seven-tissue gene expression resource (Kremling et al. 2018) collected from the Goodman maize diversity panel (Flint-Garcia et al. 2005). Expression levels are correlated with terminal phenotypes in TWAS (Hirsch et al. 2014;Pasaniuc and Price 2017) and then combined with genotype-based associations from GWAS. The method is demonstrated here in a maize inbred diversity panel (Flint-Garcia et al. 2005), which has been widely used to dissect the architecture of dozens of traits of varying complexity (Harjes et al. 2008;Lipka et al. 2013;Owens et al. 2014;Wisser et al. 2011). Related work in maize that relies on associating expression differences directly with phenotype using a Bayesian method, called expression read depth GWAS (eRD-GWAS), has been published recently (Lin et al. 2017). This work used 369 maize samples from which shoot apex RNA was collected. Beyond the difference in frequentist vs. Bayesian approaches, our study also exploits expression measurements from seven tissues in a multiple-regression-based TWAS and integrates the signal from TWAS and GWAS into a more powerful combined test which can be readily visualized as a Manhattan plot. We also compare the power of each model based on the ability to detect known genes, and the capacity to explain variance in a separate population, which differs from the approach of the previous study (Lin et al. 2017). To make this comparison we use the maize NAM population (Yu et al. 2008), which has the advantage of being largely independent of the diversity panel (Flint-Garcia et al. 
2005) in which detection was performed. We assess the efficacy of TWAS by quantifying the capacity to identify previously identified genes, and by the fraction of phenotypic variance explained (Gusev et al. 2014; Rodgers-Melnick et al. 2016) by the most strongly associated genes, and compared the TWAS results with GWAS and an ensemble approach combining both TWAS and GWAS. We illustrate that the results of TWAS are a valuable supplement to GWAS mapping that aids in prioritizing likely causal genes when both methods are used in a combined test.

Genotypic data
Genotypes for the Goodman diversity panel (Flint-Garcia et al. 2005) used in the genome-wide association studies were from the unimputed maize HMP 3.2.1 called against the B73 reference genome (Bukowski et al. 2018). Variants segregating above 5% minor allele frequency (MAF) in the union of all lines were considered for mapping. Variance component estimation was performed in the maize NAM population (Yu et al. 2008). Kernel carotenoid BLUPs from 30 traits were from Owens et al. (2014), and the 20 kernel tocochromanol trait BLUPs were from Lipka et al. (2013) after additional outliers were removed. The 22 field-based agronomic trait BLUPs were those calculated by Hung et al. (2012). Phenotypes used in variance partitioning with the maize NAM population were from Diepenbrock et al. (2017) for the tocochromanol traits. Agronomic trait BLUPs were previously calculated by Hung et al. (2012).

Expression data
Expression quantifications were those created from seven diverse tissues in maize by aligning 3′ mRNAseq reads against the AGPv3.29 maize genome as described by Kremling et al. (2018).

Genome-wide association study
Genome-wide association tests were conducted in the maize Goodman diversity panel (Flint-Garcia et al. 2005) using a mixed linear model as implemented in FastLMM (Lippert et al. 2011) accounting for kinship, and a naive general linear model fit using MatrixEQTL (Shabalin 2012) as implemented in TASSEL (Bradbury et al. 2007).
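Of the two GWAS models described above, the naive general linear model is simple enough to sketch directly. Below is a minimal per-SNP regression scan on simulated genotypes; all data and dimensions are hypothetical, and the kinship-aware mixed model used in the study requires dedicated software such as FastLMM rather than this sketch:

```python
import numpy as np
from scipy import stats

def naive_glm_scan(genotypes, phenotype):
    """Regress the phenotype on each marker's allele dosage in turn and
    record the regression p-value (no kinship correction, so population
    structure is not accounted for, unlike the mixed linear model)."""
    pvals = np.empty(genotypes.shape[1])
    for j in range(genotypes.shape[1]):
        pvals[j] = stats.linregress(genotypes[:, j], phenotype).pvalue
    return pvals

# Entirely hypothetical data: 100 inbred lines x 500 biallelic SNPs,
# coded 0/2 (inbreds are homozygous); SNP 10 carries a true effect.
rng = np.random.default_rng(2)
G = rng.choice([0.0, 2.0], size=(100, 500))
y = 0.8 * G[:, 10] + rng.normal(size=100)
p = naive_glm_scan(G, y)
print(p.shape)  # (500,)
```

In practice the resulting p-values would be ranked, assigned to the nearest gene, and carried into the combined test described later.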
Transcriptome-wide association study
Transcriptome-wide association tests were conducted in the maize Goodman diversity panel (Flint-Garcia et al. 2005) for genes that were expressed in at least half of the individuals represented in a specific tissue. A linear model was fit individually for each phenotype × expressed-gene combination, in which the explanatory variable is the expression value of a gene across individuals. TWAS was attempted both without covariates and with five genetic principal coordinates (calculated from the maize HMP3.2.1 genotypes used in Kremling et al. (2018)) and 25 probabilistic estimation of expression residuals (PEER) hidden factors (calculated separately for each tissue), as calculated in Kremling et al. (2018). Multi-tissue TWAS was also performed. First, a model was fit once per trait using the five principal coordinates described above. This model was then compared by analysis of variance (ANOVA) to a model for each gene containing terms for each tissue and the principal coordinates. The p-value resulting from this ANOVA was used to determine whether the multi-tissue model is significantly better than the covariate-only model. This p-value was also used as the p-value in the second of the Fisher's combined tests below.

Fisher's combined tests of TWAS and GWAS
The GWAS p-value (mixed linear model with kinship as a random effect) of each SNP in the top 10% of most associated SNPs was assigned to the nearest gene and then combined with the TWAS p-value (linear model with multi-dimensional scaling (MDS) principal coordinates + PEERs) for that same gene using Fisher's combined test as implemented in the sumlog method in the metap package (Dewey 2017) in R. TWAS p-values for genes which were not tested in TWAS (i.e., their expression was not observed in at least half of individuals) were set to P = 1 prior to combining with GWAS p-values.
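The Fisher's combined test used here (the sumlog method in metap) is straightforward to reproduce; a sketch with scipy, using purely illustrative p-values:

```python
from scipy import stats

def fisher_combine(p_gwas, p_twas):
    """Combine a gene's GWAS and TWAS p-values with Fisher's method
    (the same statistic as metap's sumlog). A p-value of 1, as assigned
    to genes never tested in TWAS, adds 0 to the chi-square statistic
    (-2*ln(1) = 0) but still counts toward the four degrees of freedom."""
    stat, p = stats.combine_pvalues([p_gwas, p_twas], method="fisher")
    return p

# Two moderately associated, independent tests reinforce each other:
print(round(fisher_combine(0.01, 0.02), 4))  # 0.0019
```

Because the statistic is -2(ln p1 + ln p2) compared against a chi-square with 4 degrees of freedom, a gene with moderate support from both tests can end up more significant than under either test alone, which is the rationale for the ensemble approach.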
Fisher's combined tests were performed in the same way when including the multitissue TWAS results instead of the kernel-only results. Variance partitioning Using the k-Nearest Neighbors (KNN) imputed Nested Association Mapping population HMP3.2.1 genotypes described above, kinship matrices were calculated based on the top ten genes identified by each of the TWAS, GWAS, and combined models described in the Goodman diversity panel (Flint-Garcia et al. 2005). To independently assess the accuracy of detected genes, the phenotypic variance explained by each kinship matrix was calculated in the Nested Association Mapping population, within each family and across all the NAM families. For TWAS, the top 10 genes were taken and all SNPs within a 0.5 Mb radius of the start and end of the gene (maize annotation AGPv3.29) were used to calculate a single kinship matrix per trait using the Variance Component Annotation Pipeline in TASSEL (Bradbury et al. 2007). The REML solver in LDAK (Speed et al. 2017) was used to calculate the variance explained by the single kinship matrix. For GWAS, the SNPs were ordered based on significance and assigned to their nearest gene. The top ten unique genes from this list were taken to calculate kinship matrices using the same 0.5 Mb radius around the gene. To avoid picking multiple genes and redundant variants from the same peak based on the GWAS results, the top most associated gene was used within a peak and all other genes within the 0.5 Mb radius were excluded from selection as top genes. Overlap with known kernel metabolite genes Fourteen known tocochromanol biosynthetic genes identified in NAM (Diepenbrock et al. 2017) and 58 a priori candidate genes relevant to the biosynthesis and retention of carotenoids (Owens et al. 2014) were used as positive controls to test the capacity of our GWAS, TWAS, and combined methods to re-detect known genes. 
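The window-based kinship construction used in the variance partitioning above can be sketched as follows. The study itself used the TASSEL Variance Component Annotation Pipeline and LDAK's REML solver, so the VanRaden-style centering, the single-chromosome layout, and the function name `window_kinship` here are simplifying assumptions for illustration only.

```python
import numpy as np

def window_kinship(genotypes, positions, gene_spans, radius=500_000):
    """Build a genomic relationship matrix from SNPs within `radius` bp
    of any of the top genes (hypothetical simplification; VanRaden-style
    centering by twice the allele frequency).

    genotypes : (n_individuals, n_snps) array of 0/1/2 allele counts
    positions : (n_snps,) array of bp positions (single chromosome here)
    gene_spans: list of (start, end) bp intervals for the top genes
    """
    keep = np.zeros(len(positions), dtype=bool)
    for start, end in gene_spans:
        keep |= (positions >= start - radius) & (positions <= end + radius)
    G = genotypes[:, keep].astype(float)
    p = G.mean(axis=0) / 2.0          # per-SNP allele frequencies
    Z = G - 2.0 * p                   # center each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom
```

The resulting matrix would then be fit as a random effect to estimate the phenotypic variance explained by the selected regions.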
In order to avoid comparison of p-value thresholds across methods, positive detections were counted if a gene was detected among the top 1% of genes associated with a trait. Data availability All data are held in a public repository. The SNP data for the Goodman diversity panel (Flint-Garcia et al. 2005) used in the genome-wide association studies were from the unimputed maize HMP 3.2.1 called against the B73 reference genome (Bukowski et al. 2018). The maize NAM population (Yu et al. 2008) was used for the variance component estimation. Expression quantifications were those created from seven diverse tissues in maize by aligning mRNA-seq reads against the AGPv3.29 maize genome as described by Kremling et al. (2018). Kernel carotenoid BLUPs from 30 traits were from Owens et al. (2014) and the 20 kernel tocochromanol trait BLUPs were from Lipka et al. (2013) after additional outliers were removed. The 22 field-based agronomic trait BLUPs were those calculated by Hung et al. (2012). Phenotypes used in variance partitioning with the maize NAM population were from Diepenbrock et al. (2017) for the tocochromanol traits. Agronomic trait BLUPs were previously calculated by Hung et al. (2012). Supplemental material available at Figshare: https://figshare.com/s/ef57544b4d09d5c55131.

Figure 1 Levels of biological organization between the ultimate cause of genetics and the terminal phenotypic outcomes can be exploited individually to improve power and inference of biological mechanism. Genotype can be linked to endophenotype as in eQTL or protein QTL (pQTL), or endophenotype can be linked to terminal phenotype by methods like TWAS.

RESULTS To test the utility of expression data in dissecting quantitative traits in maize, we performed single-tissue-based and multi-tissue-based TWAS (Pasaniuc and Price 2017) and compared these results with GWAS results, and an ensemble approach combining GWAS and TWAS results using the Fisher's combined test.
In TWAS, expression levels across seven tissues from a maize diversity panel (Flint-Garcia et al. 2005) were used individually and together in a multiple regression as independent variables and correlated with previously measured phenotypes for maize kernel traits, including 30 grain carotenoid abundance traits (Owens et al. 2014), 20 tocochromanol abundance traits (Lipka et al. 2013), and 22 field-measured agronomic traits (Hung et al. 2012). Integrating TWAS with GWAS improves power for identifying and prioritizing known genes To assess the relative power of each method to detect known genes, we counted the number of known genes identified in the top 1% ranked genes (based on p-values) found by each method for each trait. This identification of known genes among the top 1% of hits for each method measures how often known genes appear in the tail of the distribution of detected genes and avoids direct comparisons of p-values between differently powered and structured tests that rely on continuous (TWAS) or discrete (GWAS) independent variables. As shown in Tables 1, 2, S1, and S2, the combined test outperforms either the genotype-based or expression-based tests alone for both classes of traits, with 30 total detections of known genes among the top 1% of associations across the tocochromanol traits and 75 detections of putative carotenoid-related genes (Owens et al. 2014) when using the carotenoid traits. Using the tocochromanol and carotenoid lists from Diepenbrock et al. (2017) and Owens et al. (2014), genes are detected more often in each of the tocochromanol and carotenoid trait classes when using the combined method. However, the Fisher's combined test of GWAS results with the multi-tissue TWAS results did not perform better. The detection rate was consistently higher for kernel-based TWAS over the multi-tissue TWAS, most likely because the tocochromanol and carotenoid traits are predominantly controlled by gene expression in the kernel.
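The detection-counting rule can be written compactly; the function and argument names below are hypothetical, and ranking is by ascending p-value so the top fraction holds the strongest associations.

```python
def top_percent_hits(pvalues_by_gene, known_genes, frac=0.01):
    """Count how many known genes fall in the top `frac` fraction of a
    method's gene ranking (smallest p-value first), avoiding any direct
    comparison of p-value thresholds across methods."""
    ranked = sorted(pvalues_by_gene, key=pvalues_by_gene.get)
    top = set(ranked[: max(1, int(len(ranked) * frac))])
    return len(top & set(known_genes))
```

Applied per trait and per method, this yields the detection tallies compared across GWAS, TWAS, and the combined test.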
We also compared the methods at the level of single traits. To determine how the combined method prioritizes genes that are not detected in the individual TWAS and GWAS methods and aggregates genes that are detected by only one method, we plotted the results across models for each individual trait. In Figure 2 we plotted the signals mapped for the zeaxanthin trait. Note that points representing SNPs from the MLM GWAS model in (c) and (a) are identically placed, but in (c) they are colored by TWAS significance. The top five genes detected by each method are labeled (a is not individually labeled because the points and top five genes are identical to those in plot c) and previously detected genes found by Owens et al. (2014) are highlighted in red. As shown by the TWAS results plotted in Figure 2B, the known expression-regulated gene crtRB1 is the gene whose expression is most strongly correlated (r = 0.309, P = 2.84e-5) with zeaxanthin abundance in our TWAS model that includes genetic and expression-derived covariates (see methods). crtRB1 is not among the top MLM GWAS-detected genes in our study, but the detection of crtRB1 by kernel TWAS is consistent with previous results (Owens et al. 2014; Yan et al. 2010), highlighting this gene's role as a principal determinant of grain carotenoids which acts through variable expression. As is clear in Figure 2A,C, another zeaxanthin-implicated gene, zeaxanthin epoxidase, zep1, is detected by GWAS in our study (Owens et al. 2014). zep1 expression is correlated (r = 0.232, P = 0.0014) with zeaxanthin abundance, but it is not among the fifty most significantly associated genes in our TWAS results, and would not be prioritized by TWAS alone. However, within the peak covering zep1 in Figure 2A,C, the markers most strongly associated with zeaxanthin from the MLM GWAS results prioritize a different gene first, GRMZM2G127123, which lacks a known function.
The linkage-independent kernel TWAS results also show nearly equal support for both genes, providing evidence that GRMZM2G127123 (r = 0.218, P = 0.0025) and zep1 (r = 0.232, P = 0.0014) both affect zeaxanthin abundance. Both Fisher's combined models using the single-tissue and multi-tissue TWAS results also support the importance of both genes. To test the capacity of the TWAS, GWAS, and combined methods to re-identify genes known to underlie QTL for another trait class, we examined the detected genes for the total tocotrienol trait measured by Lipka et al. (2013). In Figure 3 the most strongly associated variant identified by GWAS is on chromosome 9 nearest a gene of unknown function, GRMZM2G431524. However, as is illustrated in the MLM GWAS Manhattan plot in which points are colored by TWAS significance (c), the other points in the chromosome 9 peak are near other genes known to underlie QTL whose expression is variably associated with total tocotrienol abundance. The second and third most strongly associated genes based on proximity to the most significant markers identified by GWAS are GRMZM2G345544 (function unknown) and hggt1, which has been previously tied to total tocotrienol content (Lipka et al. 2013) and is essential for tocotrienol biosynthesis. However, because hggt1 expression is most strongly correlated with total tocotrienol measurements from among these first three genes in the chromosome 9 peak, the combined test using single-tissue and multi-tissue expression data prioritizes the known gene hggt1, suggesting it is the functional gene in this region, consistent with previous evidence. This illustrates how the supplementary information from expression associations prioritizes likely causal genes that are not among the top hits of either individual expression or genotype-based methods.
Variance component estimation from TWAS- and GWAS-detected genes To further assess the capacity of each method to correctly identify genes affecting each trait, an independent variance partitioning approach (Gusev et al. 2014; Rodgers-Melnick et al. 2016; Speed et al. 2017) was also performed. Using variants in a 1 Mb window around the ten top ranked genes identified in the Goodman diversity panel (Flint-Garcia et al. 2005) by GWAS alone, TWAS alone, and the combined method, separate kinship matrices were calculated. These relationship matrices were fit as random effects in separate models of phenotypic variance explained for traits measured in the NAM population, which is largely independent of the Goodman diversity panel in which the various mapping strategies were performed. The additive genetic variance explained by the variants underlying each kinship matrix was calculated, providing an estimate of the heritability explained by the genes identified by each method. Using variance partitioning across all NAM families, we found some advantage for including expression data in detecting likely functional regions of the genome (Figure 4). Among the tocochromanol kernel traits (Figure 4A), TWAS or the Fisher's combined method is superior to GWAS alone for eight out of ten traits. Heritable variance explained on a per-trait basis by either TWAS alone or the Fisher's combined method showed about 25% improvement on average over the MLM GWAS, with notable advantage for alpha-tocotrienol (40%), gamma-tocotrienol (41%) and total tocopherol (43%). For the more complex field-based agronomic traits, the multi-tissue TWAS or Fisher's combined method also showed an advantage over GWAS alone in 16 out of 22 agronomic traits (Figure 4B). On average, the multi-tissue TWAS had 24% improvement over GWAS alone, while the FisherGWASmultiTWAS had notable advantage for kernel number (24%), leaf width (15%), and node number below ear (19%).
Based on mean heritable variance across traits per trait class, the combined Fisher's test explained the most heritability among the models; it showed 4-8% improvement for the tocochromanol kernel traits (Figure 4A inset). However, little improvement was observed for agronomic traits, likely due to trait complexity (Figure 4B inset). Because previously known genes are more often re-identified in the top 1% of hits by combining GWAS and TWAS (Table 1), the variance explained by markers near detected genes also reflects this advantage for traits with known oligogenic architecture. We further tested the heritability explained by the top ten ranked genes identified by each method using family-based variance partitioning (Figure 5). Heritable variance was decomposed for each NAM family, giving 24 independent tests of variance partitioning for each trait and tallying a total of 3,840 independent tests (24 families × 5 models × 32 traits). To determine the winning model for each trait, we took the sum of heritable variance across the 24 NAM families (hereafter, summed heritability). Based on the same set of genes identified from each model, our results illustrate the differing levels of heritability among families for both tocochromanol (Fig. S1; Figure 5A) and agronomic traits (Figs. S2 and S3; Figure 5B). For α-tocotrienol, which is an oligogenic trait, the FisherGWASTWAS method explained the most heritability in 18 out of 24 NAM families (Figure S1a), giving a fourfold advantage in summed heritability over either GWAS or TWAS alone (Figure S1b; Figure 5A). The FisherGWASTWAS method captured the most summed heritability in 10 tocochromanol traits (Figure 5A inset), consistent with what we found in variance partitioning using all NAM families for tocochromanol traits (Figure 4A). On a per-trait basis, we note that the kernel-based TWAS or the FisherGWASTWAS was the winning method for eight out of 10 tocochromanol traits.
We see a similar pattern in 19 out of 22 field-based complex traits, in which either the multi-tissue TWAS or FisherGWASmultiTissueTWAS explained the most heritability (Figure 5B). We see greater advantage of the FisherGWASmultiTissueTWAS over the GWAS MLM for tassel primary branch (54%), cob length (103%), kernel number (112%), ear mass (98%) and total kernel weight (106%) (Figure 5B). For the more complex traits such as plant height, the multi-tissue TWAS was the winning model, which explained about twofold higher heritability than GWAS alone (Figure 5B, Fig. S3). We found that in 16 NAM families, the multi-tissue TWAS explained the most heritability among the models for plant height. Based on total summed heritability across the 22 agronomic traits (Figure 5B inset), the FisherGWASmultiTissueTWAS and multi-tissue TWAS showed a 15% and 17% improvement in heritability explained over the GWAS MLM alone, respectively. DISCUSSION By far the majority of efforts to dissect the architecture of terminal phenotypes have relied on associations with genetic variants; this capacity to link genotype to phenotype has recently been accelerated by the plummeting cost of sequencing. The more recent advent of technologies which permit the quantification of endophenotypes like mRNA, metabolite, or protein abundance now enables mapping and trait dissection to be done between intermediate levels of biological organization. Assaying and associating these endophenotypes with traits of interest provides insight on biological mechanisms, serves as an independent source of evidence of associations, and facilitates prioritizing potentially causal variation while linking genes directly to traits in a way that potentially integrates the effects of multiple independent genetic variants. Here, we illustrated the utility of using a large RNA-seq resource in maize (Kremling et al.
2018) for transcriptome-wide association studies and integrating these results with associations based on genetic variation. We find evidence supporting the inclusion of transcriptome-wide variation in addition to genetic variation in models seeking to associate traits to underlying and likely causal genes in diverse maize lines, especially when the goal is to infer the function of genes underlying oligogenic traits. Across the tocochromanol trait classes, the inclusion of TWAS results enables more frequent detection of known causal genes and helps to prioritize novel candidate genes in the profiled panel. Crucially, transcriptional variation alone does not improve over genotype-based associations, but it is in combination with genotypic information that the power of gene detection is increased. As we demonstrate here, TWAS in combination with GWAS enhances the capacity to prioritize candidate genes over the use of GWAS alone. Given that more than half of detections are supported by TWAS (Table 1), our results also reveal much of the functional variation for these traits to be regulatory. While not all previously identified genes are detected by TWAS, this is likely a combination of insufficient power compared to the previous association studies in the NAM population with ∼16× as many observations (Diepenbrock et al. 2017), the sampling of a single time point per tissue, and the fact that not all functional variation is regulatory. Despite these limitations, TWAS adds value to GWAS mapping alone and increases the power to re-detect known genes. Our finding that TWAS alone is a valid method for finding true gene-trait associations is consistent with the recent findings of Lin et al. (2017), despite the difference between the eRD-GWAS and TWAS models.
However, our results differ in that we demonstrate that a combined test integrating TWAS and GWAS yields a more powerful test than either method individually when it comes to re-identifying known genes underlying oligogenic traits (Diepenbrock et al. 2017). We also note that our efforts to validate our TWAS and GWAS detections differ from Lin et al. (2017). In contrast to comparing the overlap of the detections by GWAS and TWAS in the same study, we compared our detections to previously known genes found in a largely independent set of germplasm, namely the NAM population (Yu et al. 2008), which was used to find tocochromanol associations (Diepenbrock et al. 2017). Also, in contrast to the previously published study, we did not perform our cross-validation analysis in the same set of germplasm in which discovery was conducted by GWAS and TWAS to assess accuracy. Using variance partitioning in the largely independent NAM population, we found similar levels of variance explained by the genes detected by each method in the Goodman diversity panel (Flint-Garcia et al. 2005), illustrating that even when the identified genes are tested in an outside population, the detections of the transcriptome-only and combined methods are found to be valid and explain similar amounts of variance to the genotype-based methods (Figures 4, 5). This is roughly consistent with the cross-validation results comparing SNP_BayesB and eRD-GWAS presented in Table S4 by Lin and colleagues (Lin et al. 2017). However, the previously published results show an advantage for eRD-GWAS for only one of fourteen traits, while on the basis of variance partitioning for kernel traits we find an advantage for the kernel-based TWAS or the Fisher's combined model for nine of the ten kernel-based traits for which measurements in NAM exist. In further contrast to the previously published work (Lin et al.
2017), none of the SNPs used in our GWAS or variance partitioning methods were derived from RNA-seq data, allowing for less bias toward expressed genes and giving the genotype-based tests more independence from the expression-based tests. In the previous work, more than 0.9 M of the 1.2 M genetic variants were derived from the alignment of RNA-seq reads (Leiboff et al. 2015; Lin et al. 2017), potentially confounding the ability to make associations by GWAS with the presence of an expressed gene, and thus limiting the power of the genotype-based GWAS to make associations which are independent of expression. It is striking that even in diverse maize lines, where linkage decays quickly (Wallace et al. 2014) and thus the power to resolve mapping peaks to individual genes is high, TWAS provides a valuable supplement to genetic mapping alone. This benefit of TWAS would be compounded in species or populations in which resolution is limited. Additionally, by imputing expression values based on local/cis haplotype, as has been successfully shown in humans (Pasaniuc and Price 2017), the utility of TWAS could potentially be extended further in maize. Imputing expression to a larger panel would permit the exploitation of previously measured phenotypes across a much larger set of individuals which have not been expression profiled. By imputing only the local/cis genetic component of expression, and implicitly averaging over trans and environmental effects, the capacity to attribute field phenotypes to the genetic component of expression would likely be further improved. The lack of improvement in re-detecting known tocochromanol genes by the multi-tissue TWAS models alone or as part of the Fisher's combined tests is notable, but unsurprising for these genetically simple and very tissue-specific traits.
This lack of improvement indicates that kernel-based expression alone is most predictive of the kernel-based metabolites and accuracy is not improved by the incorporation of all other tissues. Rather than comparing the inclusion of all tissues vs. kernels only, in the future a variable (tissue) selection TWAS approach should be used, which can remove uninformative terms from the model rather than including them with a very small coefficient. It is also plausible that for more genetically complex traits which are also affected by expression across tissues, the multi-tissue TWAS results are more likely to be informative. A further cause of the limited improvement for the kernel TWAS or Fisher's combined test seen in the variance partitioning results is likely that GWAS identifies genomic regions which, when expanded to a 1 Mb window, could cover the functional variants. Furthermore, while the correct functional gene may not be prioritized by GWAS, if the trait is affected by genetic regulation rather than coding sequence change, the sites near the GWAS hit may in fact be more functional than those near the mechanistically significant gene itself even if they are misattributed to the incorrect proximal gene.

Figure 5 Family-based variance partitioning on individual NAM families. Heritability for each trait was estimated for each of 24 NAM families using kinship matrices made from the genetic regions adjacent to the top 10 ranked genes mapped by MLM GWAS, kernel-based TWAS, multi-tissue TWAS, the Fisher's combined test of the MLM GWAS + kernel-based TWAS, and the Fisher's combined test of the MLM GWAS + multi-tissue TWAS. There were a total of 24 independent tests for each trait-model combination. Heritability estimates were then added together (hereafter, summed heritability) for A) tocochromanol traits and B) agronomic traits. Horizontal barplots compare models based on total summed heritability across traits per trait class.
Using a large independent diverse panel with very low LD to assess the heritability explained by the SNPs identified by each method may also provide a better estimate as the functional variants are not as easily tagged over long distances. While the utility of expression endophenotypes in dissecting traits has been demonstrated here, it should be noted that associations made between endophenotypes and terminal phenotypes are inherently more susceptible to environmental effects than genotype-based associations. This susceptibility to environmental effects likely allows us to associate only the environmentally independent heritable fraction of expression with phenotype in our study, especially because expression data were collected from separate plants than those for which terminal phenotypes were measured. Given that in endophenotype-based association studies, like TWAS, environmental variation separately impacts and increases error in both the independent and dependent variables, methods like TWAS alone may plausibly be expected to perform more poorly than genetics-based associations. However, this shortcoming is partially compensated for by the more direct link between endophenotype and terminal phenotype and the potential discovery of mechanism. The collection of expression data from the same plants and conditions in which the phenotypes are collected would likely benefit the dissection of genotype by environment interactions by highlighting the impact of variation in expression for a specific gene within an environment, but cannot be examined here as terminal phenotypes and expression values were calculated from separate environments and years.
Unravelling the Cosmic Web: An analysis of the SDSS DR14 with the Local Dimension We analyze a volume limited galaxy sample from the SDSS to study the environments of galaxies on different length scales in the local Universe. We measure the local dimension of the SDSS galaxies on different length scales and find that the sheets or sheetlike structures are the most prevalent pattern in the cosmic web throughout the entire length scales. The abundance of sheets peaks at $30 \, h^{-1}\, {\rm Mpc}$ and they can extend upto a length scale of $90 \, h^{-1}\, {\rm Mpc}$. Analyzing mock catalogues, we find that the sheets are non-existent beyond $30 \, h^{-1}\, {\rm Mpc}$ in the Poisson distributions. We find that the straight filaments in the SDSS galaxy distribution can extend only upto a length scale of $30 \, h^{-1}\, {\rm Mpc}$. Our results indicate that the environment of a galaxy exhibits a gradual transition towards higher local dimension with increasing length scales, finally approaching a nearly homogeneous network on large scales. We compare our findings with a semi analytic galaxy catalogue from the Millennium Run simulation, which is in fairly good agreement with the observations. We also test the effects of the number density of the sample and the cut-off in the goodness of fit, which shows that the results are nearly independent of these factors. Finally we apply the method to a set of simulations of the segment Cox process and find that it can characterize such distributions. INTRODUCTION Understanding the formation and evolution of the cosmic web remains one of the most fascinating and challenging problems in cosmology. The first observational hint of the existence of the cosmic web came through several early redshift surveys (Chincarini & Rood 1975; Gregory & Thompson 1978; Einasto et al. 1980), which was later confirmed (de Lapparent et al. 1986) by surveys like the CfA (Davis et al. 1982) and the LCRS (Shectman et al. 1996).
The modern redshift surveys like the 2dFGRS (Colless et al. 2001) and the SDSS (York et al. 2000) have now revealed the cosmic web in its full glory. The cosmic web is a network of galaxies spanning the entire Universe. The network comprises several distinct morphological components such as clusters, filaments and sheets which are interconnected in a complex manner and are encompassed by voids of numerous sizes. The galaxies form and evolve in different environments inside the cosmic web and the different morphological components provide unique environments for galaxy formation and evolution. The first theoretical insight into the formation of the cosmic web was provided by the seminal work of Zel'dovich (1970) which showed how the successive collapse of an overdense region along its longest, medium and shortest axis would produce spatial patterns like sheets, filaments and clusters respectively. Characterizing these spatial patterns in the cosmic web is an important step towards understanding galaxy formation and evolution in the Universe. A large number of statistical tools have been designed for this purpose. The percolation analysis (Shandarin & Zeldovich 1983; Einasto et al. 2018), the genus (Gott, Melott & Dickinson 1986; Appleby et al. 2018), the Minkowski functionals (Mecke et al. 1994; Wiegand et al. 2014; Fang et al. 2017), the Shapefinders (Sahni et al. 1998; Bharadwaj et al. 2004; Bag et al. 2018), the minimal spanning tree (Barrow et al. 1985; Lares et al. 2017), the statistics of maxima and saddle points (Colombi, Pogosyan & Souradeep 2000; Ansari Fard et al. 2018), the multiscale morphology filter based on the Hessian of the density field (Aragón-Calvo et al. 2007, 2010), the skeleton formalism (Novikov et al. 2006; Sousbie et al. 2008), the local dimension (Sarkar et al. 2012) and the Origami approximation (Neyrinck 2012, 2016) are to name a few.
⋆ suman2reach@gmail.com † biswap@visva-bharati.ac.in
Each of these different statistical measures captures some aspects of the cosmic web. But a comprehensive measure of the cosmic web is still awaited. Presently, developing effective tools for the quantification of the cosmic web is an active area of research. The different structural elements of the cosmic web are characterized by their density and geometry. The galaxy clusters located at the nodes where the filaments intersect are known to be the densest regions in the cosmic web, followed by the filaments and the sheets. The filaments observed in the galaxy distribution from the SDSS have been shown to be statistically significant upto length scales of 80 h −1 Mpc (Bharadwaj et al. 2004). Filaments are elongated structures with a length of tens of Mpc (Colberg 2007) and thickness of ∼ 2−3 h −1 Mpc (González & Padilla 2010). They can be of different sizes and types (straight, warped, irregular etc.) based on their visual morphology (Pimbblet et al. 2004). The filaments are believed to host ∼ 50% of the baryons in the Universe (Cen & Ostriker 2006) and are expected to play an important role in the formation and evolution of galaxies. The filaments, which are among the most prominent visual features in the galaxy distribution, have so far drawn a lot of attention in the literature. Contrary to this, the detection of sheets or walls in the galaxy distribution has attracted very little attention. There are also giant structures like the Sloan Great Wall (Gott et al. 2005) extending over length scales of more than 400 Mpc. The Saraswati supercluster (Bagchi et al. 2017), which spans at least 200 Mpc, is a massive supercluster recently found in the SDSS. On the other hand, the empty regions or voids constitute about ∼ 95% of the volume of the Universe (Kauffmann & Fairall 1991; El-Ad & Piran 1997; Hoyle & Vogeley 2002; Platen et al. 2007).
The voids seen in the galaxy distribution have different sizes, such as the Bootes void with a radius of 62 Mpc (Kirshner et al. 1987) and the Eridanus supervoid which extends upto ∼ 300 Mpc (Szapudi et al. 2015). The existence of these giant structures illustrates the variety and richness of the environments for galaxy formation and evolution in the cosmic web. Galaxy environments are primarily characterized by the local density, which is known to play a central role in galaxy formation and evolution. It has also been argued that, besides the density, the morphology of the environment may play a crucial role in the formation and evolution of galaxies (Pandey & Bharadwaj 2008; Scudder et al. 2012; Darvish et al. 2014; Luparello et al. 2015; Filho et al. 2015; Pandey & Sarkar 2017; Lee 2018). It would be interesting to measure the relative abundance of these structures on different length scales and understand their roles in galaxy formation and evolution. The Sloan Digital Sky Survey (SDSS), which is currently the largest redshift survey, has mapped the distribution of millions of galaxies in the nearby Universe, providing an unprecedented view of the cosmic web. This provides a unique opportunity to unravel the cosmic web in greater detail than ever possible. The local dimension is a simple measure proposed to characterize the environment in which a galaxy is embedded inside the cosmic web. It has been applied earlier to the SDSS DR7 data by Sarkar et al. (2012) to study the length scale dependence and density dependence of the various morphological components of the cosmic web. The local dimension can also be employed to address several other important issues related to the cosmic web. In this work, we analyze the data from the SDSS DR14 with the local dimension to study how the fraction of galaxies residing in different morphological environments changes with the associated length scales.
This allows us to explore the relative abundance of different types of structures at different length scales and identify the length scales which are dominated by any particular type of structure. We also prepare a list of galaxies for which the local dimension can be computed throughout the entire length scale range available for this analysis. This enables us to track the gradual transition of the environment of a galaxy with increasing length scale. We compare our findings with a semi analytic model of galaxy formation, using a semi analytic galaxy catalogue (Henriques et al. 2015) based on the Millennium Run Simulation (MRS) (Springel et al. 2005). Further, some of the filaments and sheets observed in galaxy distributions are the outcome of random chance alignments. We therefore also compare our findings against random mock catalogues drawn from Poisson distributions to quantify the fraction of galaxies identified as part of filaments and sheets which are the product of random chance alignment. We also test the possible roles of systematics, such as the number density and the cut-off in the goodness of fit, in influencing the results of the present analysis. Finally, we test the efficiency of the method by applying it to a set of simulations of the segment Cox process. We convert redshifts to distances using a ΛCDM cosmological model with Ωm0 = 0.31, ΩΛ0 = 0.69 and h = 1 throughout the analysis. A brief outline of our paper is as follows. We describe the method in Section 2 and the data in Section 3. We present our results and conclusions in Sections 4 and 5 respectively. METHOD OF ANALYSIS We consider a sphere of radius R centred around each galaxy in the volume limited sample. The centres for which the spheres remain completely inside the survey boundary are identified, and we count the number of galaxies N (< R) inside each of these spheres.
We repeat these measurements for a number of different radii R within a specified length scale range R1 ≤ R ≤ R2. The value of R1 is kept fixed and R2 is gradually increased up to the largest radius accessible within the survey region. The cosmic web is an interconnected network of sheets, filaments and clusters, and each galaxy is part of one of these structural elements. We expect the number of galaxies within a sphere of radius R centered around a galaxy to scale as N (< R) = AR^D, where A is a constant and the exponent D is the local dimension. The local dimension D quantifies the nature of the structural element in which the galaxy is embedded. We expect D = 1 and D = 2 for galaxies residing in filaments and sheets respectively. D = 3 around a galaxy can indicate either a galaxy cluster or a volume filling structure such as the cosmic web on large scales. It may be noted that intermediate values of the local dimension D are also possible when the counting sphere incorporates multiple structural elements of different types. We fit the galaxy counts N (< R) around each centre to Equation 1 and measure the D value associated with each galaxy. The local neighbourhood of a galaxy is expected to look different at different length scales. Consequently, the measured D values are expected to change with increasing length scale and finally approach D ∼ 3 when the galaxy is surrounded by a homogeneous network. This would occur only beyond the scale of homogeneity. We consider only those centres for which we have at least 10 neighbouring galaxies within radius R2. The value of D for each galaxy within each length scale range R1 ≤ R ≤ R2 is estimated using a least-square fit, and a χ 2 value is also calculated for each of these fits. We apply a cut in the chi-square per degree of freedom, χ 2 ν ≤ 0.5, to identify only the good quality fits for our analysis (Figure 1). We classify the galaxies into five classes based on the measured values of their local dimension D.
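A minimal sketch of this fitting step, treating the power law N (< R) = AR^D as a straight line in log-log space. The class boundaries in the classifier below are illustrative placeholders, not the actual criteria of Table 1, and the χ2 is one simple choice built from the log-space residuals:

```python
import numpy as np

def local_dimension(radii, counts):
    """Least-squares fit of log N(<R) = log A + D log R; returns the
    local dimension D and a chi-square per degree of freedom built
    from the residuals of the log-log fit."""
    x = np.log(np.asarray(radii, dtype=float))
    y = np.log(np.asarray(counts, dtype=float))
    D, logA = np.polyfit(x, y, 1)                # slope in log-log space is D
    resid = y - (D * x + logA)
    chi2_nu = np.sum(resid**2) / (len(x) - 2)    # 2 fitted parameters
    return D, chi2_nu

def classify(D):
    """Illustrative mapping of D onto the five classes (C1, I1, C2, I2, C3);
    the boundaries actually used in the paper are those of its Table 1."""
    edges = [(1.25, "C1"), (1.75, "I1"), (2.25, "C2"), (2.75, "I2")]
    for upper, label in edges:
        if D < upper:
            return label
    return "C3"

# A galaxy embedded in an ideal sheet: N(<R) grows as R^2.
R = np.arange(5.0, 10.5, 0.5)
D, chi2_nu = local_dimension(R, 0.4 * R**2)
```

For a galaxy on an ideal sheet the fit recovers D = 2 with a negligible χ2, and the galaxy would be labelled C2.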
Table 1 provides the criteria for this classification. C1 and C2 galaxies are part of a filament or a sheet respectively. The C3 galaxies are part of volume filling structures. The I1 and I2 galaxies, with intermediate D values, may lie near the junction of two different types of structural elements. For each length scale range R1 ≤ R ≤ R2, we find the number and fraction of classified galaxies in each class. SDSS DR14 We use data from the 14th data release of the Sloan Digital Sky Survey (SDSS) (Abolfathi et al. 2017), which is the second data release of the fourth phase (SDSS IV) of the survey. DR14 has accumulated spectral and imaging data taken from August 2014 to July 2016 by the SDSS 2.5 m telescope, and it has the most current and reprocessed data that incorporates the entire coverage of the prior data releases. We use a Structured Query Language (SQL) query to retrieve the data from SDSS CasJobs (http://skyserver.sdss.org/casjobs/). We select a contiguous region in the Northern galactic hemisphere using the cuts 0 • ≤ δ ≤ 60 • and 135 • ≤ α ≤ 225 • , where α and δ are the right ascension and declination respectively. We select all the galaxies within redshift z < 0.3 and r-band Petrosian magnitude mr < 17.77 in this region. We set the ZWARNING flag to zero to select only the galaxies with good spectrum and reliable redshift. These cuts yield a total of 377606 galaxies. We then prepare a volume limited sample from this data by applying a cut Mr < −20.5 in the K-corrected and extinction corrected r-band absolute magnitude. The K-corrections are obtained from a polynomial fit provided by Park et al. (2005). The resulting volume limited sample contains 90406 galaxies within redshift z < 0.1385 and radially extends up to 406 h −1 Mpc. The galaxy sample has a number density of ∼ 2.977 × 10 −3 h 3 Mpc −3 and a mean intergalactic separation of ∼ 6.95 h −1 Mpc. Millennium Run Simulation The Millennium Run Simulation (MRS) (Springel et al.
2005) is one of the largest high resolution cosmological N-body simulations available to date. The Millennium simulation followed the evolution of 2160^3 dark matter particles in a comoving box of size 500 h −1 Mpc from redshift z = 127 to z = 0. Semi analytic models (SAM) (White & Frenk 1991; Kauffmann, White & Guiderdoni 1993; Cole et al. 1994; Baugh et al. 1998; Somerville & Primack 1999; Benson et al. 2002) provide a powerful and effective tool to study galaxy formation and evolution. These models parametrise the physics involved in terms of simple prescriptions, follow the dark matter merger trees over time, and finally provide statistical predictions of galaxy properties at any desired epoch. Here we use the data from a semi analytic galaxy catalogue (Henriques et al. 2015) derived from the Millennium Run simulation (Springel et al. 2005). Henriques et al. (2015) updated the Munich model of galaxy formation using the values of the cosmological parameters from the Planck first year data. We use a SQL query to extract the data from the Millennium database 2 . We map the Millennium galaxies to redshift space using their peculiar velocities and then construct the mock samples by applying the same absolute magnitude cut as applied to the SDSS data. We ensure that each mock sample has the identical geometry and number density as the actual SDSS sample. We construct 10 such mock SDSS samples from the SAM catalogue from the Millennium Run simulation by placing the observer at different locations. These mock samples are not derived from independent regions, as we have only one realization of the SAM catalogue. Poisson sample We construct 10 mock SDSS samples from Poisson distributions. These mock random catalogues have exactly the same geometry and number density as the actual SDSS sample used in this analysis. Figure 1.
The left panel shows the best fit lines along with the measured values of N as a function of R for three different galaxies, with local dimension 1, 2 and 3 respectively. The fits are carried out within the length scale range 5 h −1 Mpc ≤ R ≤ 10 h −1 Mpc and each of these fits satisfies the criterion χ 2 ν ≤ 0.5 employed in this work. The right panel shows the same for another three galaxies with D = 1, 2, 3, for which the number counts are fitted within the length scale range 5 h −1 Mpc ≤ R ≤ 20 h −1 Mpc. Segment Cox process We simulate a set of segment Cox processes (Martinez et al. 1998; Pons-Bordería et al. 1999) inside a cube of side 250 h −1 Mpc to test the efficiency of the method employed in the present work. The segment Cox process is a controlled point process where segments of length l are scattered with random positions and orientations over a given volume. We first generate a random position and then choose a random orientation for a segment. The segment is then populated with points at random locations on it. The process is repeated for the desired number of segments. The segment length, the number of segments per unit volume and the mean number of points per unit length of the segments are the control parameters of the segment Cox process. We generate 10 realizations of the segment Cox process for each of the segment lengths 10 h −1 Mpc, 30 h −1 Mpc, and 50 h −1 Mpc. The control parameters of the simulated datasets are described in Table 2. Scale dependence of the local dimension We study the scale dependence of the local dimension by identifying the classifiable galaxies at different length scales and estimating the local dimension for each of them. We keep R1 fixed at 5 h −1 Mpc and gradually increase R2 from 10 h −1 Mpc to 100 h −1 Mpc in uniform steps of 10 h −1 Mpc. The number of classifiable galaxies decreases with increasing length scale.
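The segment Cox construction described above can be sketched as follows; the specific parameter values are illustrative choices, not the ones listed in Table 2:

```python
import numpy as np

def segment_cox(box=250.0, seg_len=30.0, n_seg=500, pts_per_len=1.0, seed=7):
    """Scatter n_seg segments of length seg_len with random positions and
    orientations in a periodic cube of side `box`, and populate each with
    a Poisson number of points placed uniformly along it."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_seg):
        start = rng.uniform(0.0, box, size=3)          # random position
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)                         # random orientation
        n_pts = rng.poisson(pts_per_len * seg_len)     # mean points per segment
        t = rng.uniform(0.0, seg_len, size=n_pts)      # distance along segment
        points.append((start + t[:, None] * v) % box)  # wrap into the box
    return np.vstack(points)

pts = segment_cox()
```

The three control parameters named in the text map directly onto seg_len, n_seg (per fixed volume), and pts_per_len.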
We find that initially 66136 galaxies out of the total 90406 galaxies (∼ 73%) are classifiable at R2 = 10 h −1 Mpc, which decreases to 2717 (∼ 3%) at R2 = 100 h −1 Mpc. We measure the number of galaxies classified in each class (Table 1) and their fractions at each value of R2. The number of SDSS galaxies in each class and their fractions as a function of R2 are shown in the top left and top right panels of Figure 3. At any given length scale R2, the fractions are simply the ratio of the number of galaxies in each class to the total number of classifiable galaxies at that length scale. The filaments and the sheets are the most striking visible features in the cosmic web. In our analysis, the C1 and C2 types of galaxies are believed to be part of a filament and a sheet respectively. The top right panel of Figure 3 shows the change in the fractions of the different types of galaxies with increasing length scale. The figure shows that at R2 = 10 h −1 Mpc, ∼ 50% (∼ 30% in sheets and ∼ 20% in filaments) of all the classifiable galaxies reside in sheets and filaments. The remaining 50% of the galaxies are distributed among the C3 type and the intermediate I1 and I2 type environments. The C3 type represents the galaxies inside groups/clusters or volume filling structures such as a homogeneous network of galaxies. The I1 type galaxies are expected to lie in the vicinity of intersections between filaments and sheets. Further, this class may also include galaxies which are part of a curved or warped filament. The I2 type galaxies are expected to be part of environments where multiple sheets intersect. It is interesting to note that the fraction of C1 type galaxies decreases rapidly from 20% at 10 h −1 Mpc to merely 0.1% at 20 h −1 Mpc. We find very few C1 type galaxies beyond this length scale. It may be noted that only the galaxies residing in straight filaments would be identified as C1 type.
This indicates that the straight filaments in the galaxy distribution rarely extend beyond these length scales. On the other hand, the fraction of C2 type galaxies initially increases with length scale and peaks at 30 h −1 Mpc. We find that 40% of the classifiable galaxies are C2 type at 30 h −1 Mpc. The fraction of C2 type galaxies then decreases steadily with increasing length scale. We note that only 0.5% of the classifiable galaxies are C2 type at 90 h −1 Mpc. The presence of a peak at 30 h −1 Mpc for the C2 type galaxies indicates that most of the sheets extend up to a length scale of 30 h −1 Mpc. Sheets of larger sizes also exist in the cosmic web, but they become less and less abundant with increasing length scale. The fraction of I1 type galaxies behaves similarly to that of the C1 type galaxies but extends to larger length scales. The fraction of I1 type galaxies changes from ∼ 35% at 10 h −1 Mpc to 0.1% at 60 h −1 Mpc, indicating that the size of such environments extends much beyond the size of the straight filaments. The fraction of I2 type galaxies in the SDSS increases from ∼ 15% at 10 h −1 Mpc to ∼ 60% at 100 h −1 Mpc. Similarly, the fraction of C3 type galaxies grows from 10% at 10 h −1 Mpc to 40% at 100 h −1 Mpc. This indicates that more and more galaxies are associated with such environments as the length scales are increased, and nearly all the classifiable galaxies are part of either I2 or C3 type environments on a length scale of 100 h −1 Mpc. This trend clearly indicates that a nearly homogeneous network of galaxies emerges on larger length scales. The two middle panels of Figure 3 show the numbers and fractions of different types of galaxies as a function of length scale for the galaxies from the semi analytic galaxy catalogue from the Millennium simulation. Interestingly, the galaxies in this semi analytic model recover the observed fractions of the different types of galaxies in the SDSS remarkably well. The filaments and sheets extend up to nearly the same length scales in both the SDSS and the semi analytic model.
Figure 3. The top left panel shows the number of galaxies of each type (Table 1) at different values of R 2 . The top right panel shows how the fraction of different types of galaxies varies with increasing value of R 2 . The two middle panels and the two bottom panels show the same but for the mock galaxy samples from a semi analytic galaxy catalogue from the Millennium simulation and the Poisson distributions respectively. The value of R 1 is fixed at 5 h −1 Mpc in each case. The error-bars shown for the Poisson distributions and the Millennium simulation are obtained from 10 independent realizations. The size of the error-bars is very small for the Poisson distributions. The error-bars are also very small for the Millennium simulation as all the 10 mock samples are derived from the same catalogue.

Some small differences in the results can also be noted. For instance, at 10 h −1 Mpc a relatively higher fraction of SDSS galaxies reside in sheets as compared to the semi analytic model, and this trend continues till 50 h −1 Mpc. Further, the fraction of C2 type galaxies peaks at 20 h −1 Mpc in the semi analytic model whereas the same peak appears at 30 h −1 Mpc for the SDSS galaxies. This implies that the sheets are relatively less abundant in the semi analytic model as compared to the SDSS. Interestingly, the sheets extend up to nearly 80 − 90 h −1 Mpc in both distributions. The I2 type galaxies which dominate the larger length scales are also believed to inhabit regions which are partly sheetlike. These results emphasize the prevalence of sheets in the cosmic web. It should also be noted that some of the filaments and sheets identified in the galaxy distribution may be a result of random chance alignment. We examine this by analyzing a set of mock SDSS catalogues drawn from Poisson random distributions. The results for the Poisson distributions are shown in the bottom two panels of Figure 3.
It is interesting to note that a very small number of galaxies (∼ 5%) are found inside filaments at 10 h −1 Mpc in the Poisson distributions. This number is roughly one-fourth of that observed in the SDSS and the semi analytic model. This indicates that although a small number of filaments arise due to random chance alignments, the majority of the filaments detected in the SDSS and the semi analytic model are genuine in nature. On the other hand, a significant number of galaxies (∼ 30% are of C2 type) in the Poisson distribution are found to be part of sheetlike structures at 10 h −1 Mpc. The fractions of both the C2 type and I1 type galaxies diminish rapidly with increasing length scale and become nearly extinct beyond 30 h −1 Mpc in a Poisson distribution. Contrary to this, we observe that the sheetlike structures extend up to 90 h −1 Mpc in the SDSS and the semi analytic model. This suggests that the sheets identified on smaller length scales may be a result of random chance alignment, but the sheetlike structures spanning out to larger length scales in the SDSS and the semi analytic model are significant and genuine. The fractions of I2 and C3 galaxies rise with increasing length scale in both the SDSS and the semi analytic model. We note that in the Poisson distribution, the fraction of I2 galaxies initially increases with length scale up to 30 h −1 Mpc but then decreases gradually with increasing length scale. This clearly indicates that both sheetlike (C2 type) and partly sheetlike (I2 type) structures are less likely to emerge on larger length scales in a Poisson distribution due to random chance alignment. This emphasizes the significance of the large sheetlike structures observed in both the SDSS and the semi analytic model.
Finally, the fraction of C3 type galaxies steadily increases from 20% at 10 h −1 Mpc to ∼ 90% at 100 h −1 Mpc in the Poisson distribution, indicating its homogeneous nature as compared to the galaxy distributions on most length scales. Transition of the local dimension We find that the galaxies tend to inhabit regions with higher local dimension when probed on larger length scales. However, most of the galaxies which are classified according to their local dimension on different length scales are not available at all scales. The gradual transition of the environment of a galaxy with increasing length scale can only be probed if its local dimension can be calculated at each and every length scale. We identify a subset of the classifiable galaxies for which the local dimension can be computed throughout the entire range of length scales probed. We find that there are altogether 2282 galaxies in our SDSS sample for which this can be achieved. We prepare such a sample of galaxies for both the mocks from the semi analytic galaxy catalogue and the Poisson distribution. We study the variation in the fraction of different types of galaxies as a function of length scale in each of these samples. The results are shown in Figure 4. The top left and right panels of Figure 4 show the results for the SDSS and the semi analytic model respectively. The results show that the galaxies reside in all sorts of environments when we probe only their immediate neighbourhood. As we include larger and larger scales in the computation of the local dimension, it appears that there are no filaments beyond 30 h −1 Mpc. Interestingly, the sheetlike structures are found to exist on length scales up to 70 h −1 Mpc in both the SDSS and the semi analytic model. The local dimension of a galaxy on length scales beyond 70 h −1 Mpc is either I2 type or C3 type, indicating a transition towards a homogeneous network. We show the results for the Poisson distribution in the bottom middle panel of Figure 4. The results for the Poisson distribution indicate that the filaments do not extend beyond 10 h −1 Mpc and the sheets do not extend beyond 20 h −1 Mpc. Also, there are a very small number of galaxies residing in filaments in the Poisson distribution. These filaments and sheets are the result of random chance alignment, which should be kept in mind while analyzing any galaxy distribution to identify the various patterns present in it. These results show that sheets cannot arise from chance alignment on large scales, and the prevalence of sheetlike structures in the SDSS galaxy distribution is an important characteristic of the observed cosmic web. Systematic effects We also study the systematic effects which may affect the outcome of the present analysis. While estimating the local dimension, the good quality fits are identified by employing a cut-off in the chi-square per degree of freedom, χ 2 ν ≤ 0.5. We would like to test if the results of the present analysis are sensitive to this criterion. We repeated our analysis for another two cut-off values, χ 2 ν ≤ 1 and χ 2 ν ≤ 2. The results of this test on the SDSS data are shown in the top two panels of Figure 5.

Figure 5. The top left and right panels of the figure show the fraction of different types of galaxies as a function of length scale when the good quality fits are selected using the cut-offs χ 2 ν ≤ 1 and χ 2 ν ≤ 2 respectively. The middle left and right panels respectively show the results with the cut-off χ 2 ν ≤ 0.5 but after randomly discarding 25% and 50% of the galaxies from the original volume limited sample. The two bottom panels show the results for the SDSS volume limited sample when the redshifts are converted to distances using the ΛCDM model and cosmography. The 1-σ error-bars, shown only in the two middle panels, are drawn from 10 different subsamples.
Comparing these with the top right panel of Figure 3, we find that the fraction of galaxies in different environments as a function of length scale is insensitive to the choice of the cut-off in χ 2 ν . We have checked that galaxies belonging to a particular class remain in the same class when we change the cut-off in χ 2 ν ; it is only the number in each class which gets reduced when more stringent cuts are applied. Further, we also test if the specific number density of our volume limited sample plays any role in the results of the present analysis. We separately repeated our analysis after randomly discarding 25% and 50% of the galaxies from the SDSS volume limited sample, adopting χ 2 ν ≤ 0.5. The results of this test are shown in the middle two panels of Figure 5. We observe some small differences from the original result when 25% and 50% of the galaxies are discarded. These tests show that the results of the present analysis are robust and nearly independent of the cut-off in χ 2 ν and the number density of the galaxy sample. The galaxy distribution analyzed here is restricted to z < 0.1385, which probes the local Universe. In this case one may convert redshifts to distances simply using cosmography, without the use of any particular cosmological model. We compare the results from the SDSS using the ΛCDM model and model-independent cosmography in the two bottom panels of Figure 5. We observe that the main findings of the analysis are nearly model independent. Tests with the segment Cox process We also test the efficiency of the method by simulating a set of segment Cox processes and analyzing them with the local dimension.
While analyzing the datasets from the segment Cox process, we find that the fraction of points belonging to the C1 or filament type gradually decreases with increasing length scale and extends up to a length scale somewhat larger than the characteristic segment length in each case. For example, the top right panel of Figure 6 shows that the fraction of filament type points is largest (> 50%) at 10 h −1 Mpc and decays to nearly zero at 30 h −1 Mpc. The datasets with segment length 30 h −1 Mpc show that more than 65% of the points are filament type at a length scale of 10 h −1 Mpc, which gradually decays to zero at a length scale of ∼ 50 h −1 Mpc. Similarly, we find that for the dataset with segment length 50 h −1 Mpc, ∼ 80% of the points reside in filaments at 10 h −1 Mpc, which diminishes to zero at 80 h −1 Mpc. The results suggest the existence of a larger number of segments with lengths smaller than the characteristic segment length and a smaller number of segments with lengths larger than it. This may arise due to the intersections and chance alignments of multiple segments, which can produce segments both shorter and longer than the characteristic segment length. The intersection of multiple segments is more likely to occur than the chance alignment of multiple segments. This explains why we find a larger fraction of filaments shorter than the characteristic segment length than longer ones. We also note that the chance alignments of many linear segments with various orientations on large scales can give rise to structures with a sheetlike appearance. In all the right panels of Figure 6, we find that a large fraction of points are classified as C2 or sheet type on increasingly larger scales. These sheetlike structures are the result of pure chance alignments of many linear segments oriented along different directions.
Interestingly, we find that the fraction of points belonging to volume filling structures is negligible in each case. So the test suggests that the local dimension is unable to trace the exact size of the linear segments in a segment Cox process, but gives the size of the longest straight filaments in the distribution, which can arise once the intersections and alignments of multiple segments are taken into account. It may be noted that this increases with the characteristic segment length. Some spurious sheetlike features are identified on large scales due to the intersections and chance alignments of the linear segments in the segment Cox process. However, the fraction of C2 type or sheetlike points remains very small on small scales, which may be used to distinguish the segment Cox process from other types of distribution. The cosmic web is a much more complex system than a simple superposition of linear segments of uniform length, and hence all the findings of this test may not be applicable to real galaxy distributions. However, the test ascertains that the local dimension method can characterize a distribution which is dominated by linear filamentary structures. CONCLUSIONS We compute the local dimension of galaxies in a volume limited galaxy sample from the SDSS in the local Universe and study their proportions on different length scales. We find that the galaxies reside in all types of environments when the environment is characterized on smaller length scales. Our results indicate that the filaments in the galaxy distribution extend up to 30 h −1 Mpc whereas the sheets extend up to as large as 90 h −1 Mpc. On large scales, the majority of the galaxies in the SDSS are found to reside in either sheetlike or partly sheetlike environments. We find a very similar trend in the semi analytic galaxy catalogue from the Millennium Run simulation. No filaments or sheets are observed beyond a length scale of 30 h −1 Mpc in the Poisson distribution.
The absence of sheetlike structures on large scales in the Poisson distribution shows that they cannot result from random chance alignment on those length scales. The present analysis finds a prevalence of sheetlike structures in the cosmic web on larger length scales. The filaments are only observed on smaller length scales and are completely absent on larger length scales. Our analysis indicates that the sheets and the sheetlike structures are the most dominant features on large scales in the galaxy distribution from the SDSS as well as in the semi analytic model. In the Zeldovich scenario, the pancakes are the first non-linear structures formed by gravitational collapse. Doroshkevich (1970) showed that simultaneous collapse along multiple axes is quite unlikely, and that the filaments and nodes would form later depending on the eigenvalues of the deformation tensor at different Lagrangian co-ordinates. The pancakes are expected to be the most dominant feature emerging from the first stage of non-linear clustering. So the higher abundance of sheets or sheetlike structures observed on relatively larger scales may be a consequence of the Zeldovich approximation. Earlier studies find that the filaments are statistically significant up to length scales of 70 h −1 Mpc, whereas our results indicate that the straight filaments can only extend up to 30 h −1 Mpc. The two dimensional sections analyzed by those studies may also include some filaments which arise due to the projection of sheetlike structures. Further, their study also takes curved or wiggly filaments into consideration. The observed galaxy distribution shows a tendency towards a transition to a homogeneous network on larger length scales. This is consistent with the findings that the Universe is homogeneous on a length scale of ∼ 100 h −1 Mpc (Yadav et al. 2005; Hogg et al. 2005; Scrimgeour et al. 2012; Nadathur 2013; Pandey & Sarkar 2015; Pandey & Sarkar 2016; Avila et al. 2018).
We study the systematic effects of the number density of the sample and the cut-off in the goodness of fit, and find that our results are robust against variations in these parameters. Analyzing simulated datasets of the segment Cox process, we find that the local dimension method can characterize such distributions.
Estimation of the Toxicity of Different Substituted Aromatic Compounds to the Aquatic Ciliate Tetrahymena pyriformis by QSAR Approach Nowadays, quantitative structure–activity relationship (QSAR) methods are widely used to predict the toxicity of compounds to organisms due to their simplicity, ease of implementation, and low hazards. In this study, to estimate the toxicities of substituted aromatic compounds to Tetrahymena pyriformis, QSAR models were established by multiple linear regression (MLR) and a radial basis function neural network (RBFNN). Unlike other QSAR studies, the whole dataset was divided into three groups according to the difference in functional groups (−NO2, −X), and each group was modeled separately. The statistical characteristics of the models are as follows: MLR: n = 36, R2 = 0.829, RMS (root mean square) = 0.192; RBFNN: n = 36, R2 = 0.843, RMS = 0.167 for Group 1; MLR: n = 60, R2 = 0.803, RMS = 0.222; RBFNN: n = 60, R2 = 0.821, RMS = 0.193 for Group 2; MLR: n = 31, R2 = 0.852, RMS = 0.192; RBFNN: n = 31, R2 = 0.885, RMS = 0.163 for Group 3, respectively. The results were within the acceptable range, and the models were found to be statistically robust with high external predictivity. Moreover, the models also gave some insight into the structural characteristics that most affect the toxicity. Introduction With the rapid development of science and technology, tens of thousands of new chemicals are synthesized and widely used in all walks of life every day. However, as is well known, if chemicals are used or handled incorrectly, they may enter the aquatic environment or bio-accumulate in the food chain, where they may ultimately have adverse impacts on people.
One of the current interests in medicinal chemistry, environmental sciences, and especially toxicology is to rank chemical substances with respect to their potential hazardous effects on humans, wildlife, and aquatic flora and fauna [1]. Among the vast range of organic matter, the substituted aromatic compounds [2][3][4][5][6][7][8] occupy an important position, since they are produced in large quantities and released into the environment as a result of their wide use in agriculture and industry, and are widely distributed in air, natural water, waste water, soil, sediment, and living organisms [9,10]. In addition, recent studies have shown that the substituted aromatic compounds are also a kind of biotoxic environmental pollutant, and can even have carcinogenic and mutagenic effects on organisms [10,11]. Therefore, studies on the properties of substituted aromatics are of great significance. Up to now, both experimental [12][13][14][15] and theoretical methods [16,17] have been used to evaluate various kinds of substituted aromatic compounds for their different toxicities. It is also well known that theoretical predictions of properties or activities by quantitative structure-activity relationship (QSAR) studies have been widely adopted and applied since the 1990s, because of their advantages such as speed, simplicity, sensitivity, and low cost [11]. The QSAR method has been widely applied in different fields, including physical chemistry, pharmaceutical chemistry, environmental chemistry, toxicology, and other research fields [18]. It has been proven that the use of QSAR modeling for toxicological predictions helps to determine the potential adverse effects of chemical entities in risk assessment. For a long time, a lot of meaningful research focusing on the toxicity of substituted aromatic compounds by the QSAR approach has been carried out. In 1982, Schultz et al.
performed a QSAR study relating the cellular response of Tetrahymena pyriformis to molecular connectivity indexes for a series of 24 mono- and dinitrogen heterocyclic compounds. They established a better model than previous ones and pointed out that toxicity increases with the number of atoms and the degree of methylation per compound, and decreases with increasing nitrogen substitution [1]. In 1998, Cronin et al. established several QSAR models for a dataset of 42 alkyl- and halogen-substituted nitro- and dinitrobenzenes tested against Tetrahymena pyriformis [19]. Using one- and two-descriptor models, they found that the nitrobenzenes appear to elicit their toxic response through multiple (and mixed) mechanisms. In 2001, in order to compare different QSAR model-building methods, Cronin and Schultz developed QSAR models for the toxicity of 268 aromatic compounds in the Tetrahymena pyriformis growth inhibition assay [16]. They compared not only the influence of different descriptors on the models, but also Bayesian regularized neural networks (BRANN) and partial least-squares (PLS) analysis as modeling methods. In the following year, the same authors performed a similar study on a dataset of phenolic toxicity data for Tetrahymena pyriformis [17]. These works provide guidelines on how to build better models for the toxicities of this group of compounds. Netzeva et al. developed relatively simple QSAR models (one or two descriptors) for the acute toxicity of 77 aromatic aldehydes to the ciliate Tetrahymena pyriformis using mechanistically interpretable descriptors [20]. They showed that the octanol/water partition coefficient (log K OW ) is the most important descriptor, and that the models can be improved by adding an electronic descriptor. Roy et al.
performed a QSAR study of the toxic potency toward Tetrahymena pyriformis of 174 aromatic compounds (phenols, nitrobenzenes, and benzonitriles) using the electrophilicity index [21]. The compounds were divided into electron-donor and electron-acceptor groups, and the authors stated that electrophilicity indices, along with the total Hartree-Fock energy, can be used to build a satisfactory model. Later, the performances of linear and nonlinear models were compared by Devillers et al. using a structurally heterogeneous set of 200 phenol derivatives tested on Tetrahymena pyriformis. They pointed out the superiority of nonlinear methods over linear ones for finding complex structure-toxicity relationships in large sets of structurally diverse chemicals [22]. Tetko et al. studied the applicability domain and the influence of overfitting in QSAR model building using a toxicity dataset against Tetrahymena pyriformis [23]. A hierarchical QSAR technology was applied to 95 diverse nitroaromatic compounds tested against the ciliate Tetrahymena pyriformis [24]. Zarei et al. developed a model for predicting the toxicity to T. pyriformis of 268 substituted benzene compounds, including phenols, monosubstituted nitrobenzenes, multiply substituted nitrobenzenes, and benzonitriles, using the bee algorithm (BA) for descriptor selection and an adaptive neuro-fuzzy inference system (ANFIS) for model building [25]. A molecular structural characterization (MSC) method named the molecular vertexes correlative index (MVCI) was successfully used to describe the structures of 30 substituted aromatic compounds, and the results suggested good stability and predictability of the resulting QSAR models [26].
Comparative molecular field analysis (CoMFA), comparative molecular similarity index analysis (CoMSIA), and density functional theory (DFT) methods have been used to establish QSAR models for analyzing and predicting the toxicities of 31 substituted thiophenols [27]. Later, Salahinejad et al. also used the CoMFA, CoMSIA, and VolSurf techniques to develop valid and predictive models for estimating the toxicity of substituted benzenes toward T. pyriformis. They confirmed that, in addition to hydrophobic effects, electrostatic and H-bonding interactions play important roles in the toxicity of substituted benzenes, and that the information obtained from CoMFA and CoMSIA 3-D contour maps can help explain the toxicity mechanism of substituted benzenes [28]. In our previous work, linear (MLR) and nonlinear (RBFNN) statistical methods were used to build a reliable, credible, and fast QSAR model for predicting the mixture toxicity of non-polar narcotic chemicals, including 9 PFCAs, 12 alcohols, and 8 chlorobenzenes and bromobenzenes; the predicted values were in good agreement with the experimental ones [18]. Similarly, recursive neural networks (RNN) and multiple linear regression (MLR) methods have been employed to build models for predicting the toxicity values of 69 benzene derivatives, and both methods provided good results compared with other studies in the literature [29]. To build a reliable and predictive QSAR model, a genetic algorithm combined with partial least squares (GA-PLS) was employed to select the optimal subset of descriptors that contribute significantly to the toxicity of 45 nitrobenzene derivatives to Tetrahymena pyriformis [30]. The goal of the present study was to develop reliable and predictive QSAR models using both the MLR and RBFNN methods to identify and predict the acute toxicity (the 50% growth inhibitory concentration, IGC 50 ) of substituted aromatic compounds to the aquatic ciliate Tetrahymena pyriformis.
For this purpose, the whole dataset was divided into three groups with respect to the important functional groups of the substituted aromatic compounds, such as −NO 2 and −X: Group 1, compounds with a −NO 2 group (46 compounds); Group 2, compounds with −X (75 compounds); and Group 3, compounds with both −NO 2 and −X (39 compounds). In this way, separate, more accurate models were built to evaluate the toxicities of these aromatic compounds. Datasets For the aromatic compounds, Wei et al. reported that the order of the contribution of particular substituents to the toxicity of an aromatic compound is: −NO 2 > −Cl > −CH 3 > −NH 2 > −OH [31]. Based on the dataset given by Schultz et al. [32], we selected the typical compounds containing the most influential functional groups (−NO 2 and −X) and divided them into three subgroups. Group 1 includes 46 compounds whose chemical structures contain the functional group −NO 2 but no −X; among them, 36 compounds carry one −NO 2 group and 10 compounds carry two. Group 2 contains 75 compounds with −X but no −NO 2 ; among them, 54, 16, and 5 compounds carry one, two, or three −X groups, respectively. Group 3 contains 39 compounds in which both −NO 2 and −X are present, and the total number of −NO 2 and −X substituents is not more than 3. Compounds in each group were randomly divided into two subsets. The training set, used to build the model, contains 36, 60, and 31 compounds for Groups 1, 2, and 3, respectively. The remaining compounds, used as test sets to verify the robustness and feasibility of the models, number 10, 15, and 8 for the corresponding groups. The CAS number, name, and toxicity (−log IGC 50 ) of all the compounds are listed in Table 1.
Molecular Descriptors' Generation and Selection To calculate the molecular descriptors, the structure of each compound was drawn using ISIS Draw 2.3 (MDL Information Systems, Inc., San Ramon, CA, USA) [33]. The MM+ molecular mechanics force field in the HyperChem 6.0 program (Hypercube, Inc., Waterloo, ON, Canada) was then used for preliminary molecular geometry optimization [34]. The structures were further optimized with the semi-empirical PM3 method using the Polak-Ribiere algorithm until the root mean square gradient reached 0.01 kcal/mol [35]. Finally, a more precise optimization was performed with the MOPAC 6.0 software package (Indiana University, Bloomington, IN, USA) [36]. The final optimized structures were then loaded into the CODESSA 2.63 program (University of Florida, Gainesville, FL, USA) to calculate five classes of descriptors: constitutional, topological, geometrical, electrostatic, and quantum-chemical [37]. The logP descriptor, which cannot be calculated by CODESSA 2.63, was obtained with HyperChem and added to the descriptor pool [34]. In this way, 494, 597, and 611 descriptors were obtained for the compounds in Groups 1, 2, and 3, respectively. Before establishing the QSAR models, it is necessary to remove insignificant, constant, and highly intercorrelated descriptors (the intercorrelation between descriptors should be lower than 0.8). In this work, the heuristic method (HM) implemented in CODESSA 2.63 was used to perform a thorough search for the best multilinear correlations with the computed descriptors [37]. Multiple Linear Regression (MLR) Multiple linear regression (MLR) is a classical method for solving linear problems in QSAR modeling when there are two or more independent variables.
The purpose of MLR is to find a mathematical function that best describes the desired activity Y (here, the −log IGC 50 values) as a linear combination of the X-variables (the molecular descriptors) with regression coefficients b n : Y = b 0 + b 1 X 1 + b 2 X 2 + ... + b n X n . A good fit alone, as judged by R 2 (coefficient of determination), q 2 LOO (leave-one-out cross-validated correlation coefficient), RMS (root mean square error), F (Fisher's statistic), etc., does not guarantee that the model is useful for prediction purposes [38]. Some statistical characteristics of the test set also need to be considered: R 2 and R 2 0 (the coefficient of determination of predicted vs observed activities when the intercept b 0 is set to zero), as well as the corresponding slopes k and k' of the regression lines through the origin. Following Ref. [39], a model is considered to have adequate predictive ability when q 2 > 0.5, R 2 > 0.6, (R 2 − R 2 0 )/R 2 < 0.1, and 0.85 ≤ k ≤ 1.15 (or the analogous conditions with k'). Radial Basis Function Neural Networks (RBFNN) In general, RBFNN may give better results than MLR because it can take into account nonlinear behavior between the molecular descriptors and the desired activity values (−log IGC 50 ). A detailed introduction to RBFNN has been given in previous studies [40,41], so only the key points are summarized here. The RBFNN is a typical feed-forward neural network composed of three layers: the input layer, the hidden layer, and the output layer. The first layer is linear and distributes the input values; the hidden layer is nonlinear and uses radial basis functions; the third layer linearly combines the hidden-layer outputs. Each neuron in one layer is fully connected to the next layer, but there are no connections between neurons within a layer. Each hidden-layer unit represents a single radial basis function, characterized by a center and a width.
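As an illustration, the MLR fit and the external-validation quantities discussed above (R 2 , R 2 0 , k, k') can be computed directly. This is a hedged numpy sketch, not the CODESSA procedure used in the paper, and the through-origin definitions follow one common variant of the criteria in Ref. [39]:

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit of Y = b0 + b1*X1 + ... + bn*Xn."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [b0, b1, ..., bn]

def external_validation(y_obs, y_pred):
    """R2, R0^2 and through-origin slopes k, k' for a test set
    (one common variant of the Golbraikh-Tropsha style checks)."""
    ss_tot = ((y_obs - y_obs.mean()) ** 2).sum()
    r2 = 1 - ((y_obs - y_pred) ** 2).sum() / ss_tot
    k = (y_obs * y_pred).sum() / (y_pred ** 2).sum()   # slope obs-vs-pred, origin
    kp = (y_obs * y_pred).sum() / (y_obs ** 2).sum()   # slope pred-vs-obs, origin
    r0_2 = 1 - ((y_obs - k * y_pred) ** 2).sum() / ss_tot
    return r2, r0_2, k, kp
```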
In this layer, each neuron uses a radial basis function as a nonlinear transfer function to process the input from the previous layer. The most common RBF is the Gaussian function, characterized by a center c j and a width r j [42]. It measures the Euclidean distance between the input vector x and the radial basis function center c j and performs the nonlinear transformation within the hidden layer: h j (x) = exp(−||x − c j || 2 / r j 2 ), where h j is the output of the jth RBF unit, and c j and r j are the center and width of that unit, respectively. The operation of the output layer is linear: y k = Σ j w kj h j (x) + b k , where y k is the kth output for the input vector x, w kj is the weight connecting the kth output unit to the jth hidden-layer unit, and b k is the corresponding bias. In the present study, we used the MATLAB package (MathWorks, Natick, MA, USA) (www.mathworks.com/products/matlab/) for all the RBFNN calculations. The RBFNN models can be evaluated with the same statistical parameters as the MLR method, together with their reliability and robustness. Applicability Domain (AD) of the Model It is necessary to define the applicability domain (AD) of a model. The AD of a QSAR model is a theoretical region in the space defined by the compounds of the training set; it indicates the kinds of molecules to which the model can be applied. In other words, the AD delimits a region, also for unknown chemicals without experimental data, within which the number of bad predictions (Y-outliers) and of chemicals far from the training structural domain is lowest [43]. In this study, a Williams plot, i.e., a plot of standardized residuals vs leverages, was used [44].
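With the centers and width fixed, the output layer described above is linear in the weights w kj and bias b k , so it can be solved by least squares. The sketch below is illustrative numpy, not the MATLAB implementation used in the study:

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian hidden-layer outputs h_j(x) = exp(-||x - c_j||^2 / r^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def train_rbfnn(X, y, centers, width):
    """Solve the linear output layer (weights plus bias) by least squares."""
    H = np.column_stack([rbf_features(X, centers, width), np.ones(len(X))])
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def predict_rbfnn(X, centers, width, w):
    H = np.column_stack([rbf_features(X, centers, width), np.ones(len(X))])
    return H @ w
```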
Here, a simple measure of how far a chemical is from the applicability domain of the model is its leverage h i [43]: h i = x i T (X T X) −1 x i , where x i is the descriptor row vector of the compound under study and X is the n × k matrix of the k model descriptor values for the n training set compounds. The superscript "T" denotes the transpose of the matrix/vector. The leverage h i of a compound is one of the coordinates of the Williams plot (standardized residuals versus leverage). MLR Results As mentioned above, based on the structural differences caused by the influential functional groups (−NO 2 and −X), Groups 1, 2, and 3 contain 46, 75, and 39 compounds, respectively. The model for each group was established from its training set. Beforehand, the heuristic method (HM) was used for descriptor selection. After preselection, 178, 203, and 160 descriptors remained for the three groups after removing those that did not obey the rules of thumb [45]. Multilinear regression models were then developed in a stepwise procedure: the descriptors and correlations were sorted by the values of the F-test and the correlation coefficients. Beginning with the top descriptor of the list, two-parameter correlations were calculated; descriptors were then added one by one until the preselected number of descriptors in the model was reached. Finally, three descriptors were used to describe the relationship between molecular structure and toxicity for each group of compounds. The selected descriptors and their chemical meaning, along with the statistical parameters, are listed in Tables 2-4. The external test sets were also used to further evaluate the three models.
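The leverage introduced above is the ith diagonal element of the hat matrix H = X(X T X) −1 X T , and the Williams-plot warning limit used in this study is h* = 3m/n. A minimal numpy sketch:

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X^T X)^{-1} X^T; h_i is the
    leverage of compound i (X holds one descriptor row per compound)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    return np.einsum('ij,jk,ik->i', X, XtX_inv, X)

def warning_leverage(n_samples, n_params):
    """Vertical cut-off of the Williams plot: h* = 3m/n."""
    return 3 * n_params / n_samples
```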
The statistical parameters obtained are as follows: N ext = 10, R 2 = 0.917, q 2 ext = 0.851, F = 13.820, RMS = 0.222 for Group 1; N ext = 15, R 2 = 0.789, q 2 ext = 0.732, F = 13.720, RMS = 0.266 for Group 2; and N ext = 8, R 2 = 0.733, q 2 ext = 0.730, F = 260.404, RMS = 0.380 for Group 3. Figures 1, 2 and 3a show the predicted vs observed −log IGC 50 values for all the training and test set compounds. It can thus be seen that the models are reasonable in both statistical significance and predictive ability. Model Applicability Domain Analysis and Improved MLR Model It is also important to consider possible outliers of the models. To visualize the AD, the plot of standardized cross-validated residuals versus leverage (the Williams plot), which provides an immediate and simple graphical detection, was used to identify the outliers. In this plot, the horizontal and vertical straight lines represent the control limits for Y-outliers and X-outliers, respectively. The limit on the X-coordinate is 3m/n, where m is the number of model parameters and n is the number of training set samples. In the present study, the control limit for Y-outliers (RES) was set to ±3σ. Figures 4-6 show the Williams plots based on the MLR models for the whole datasets of Groups 1, 2, and 3, respectively. As can be judged from Figure 4, the model for Group 1 has one X-outlier (compound 2), 2-nitroanisole. Its structure contains two functional groups, −NO 2 and methoxy. The former, a strong electron-withdrawing group, is present in all compounds of this group, whereas the methoxy group, with its oxygen lone pairs, is a strong electron donor compared with the other substituents in the group. Therefore, care should be taken when applying the model to compounds with methoxy groups, since they can activate the benzene ring and exert an unusual influence on the toxicity.
From Figure 6, it can also be seen that there is an X-outlier in Group 3 (compound 15), 2-chloromethyl-4-nitrophenol. This compound carries three electron-withdrawing moieties, −Cl, −OH, and −NO 2 , and thus has almost the strongest inductive effect among the compounds of this group. There also seems to be another outlier (Group 3, compound 36), which belongs to the test set; this may be due to variability in the measurement, or may indicate experimental error. If outliers are handled unreasonably, the accuracy of the model, and hence the quality and predictive ability of the model, will be affected. Therefore, we removed the outliers from Group 1 and Group 3 and rebuilt the models. To further assess the predictive power of the models established by the MLR method, parameters such as (R 2 − R 2 0 )/R 2 , k, and k' were also calculated, and the results are shown in Table 5. From the table, we can see that the statistical results are all within the acceptable ranges for the MLR models. Validation Results of the Models Furthermore, a fivefold cross-validation algorithm was applied to check the stability of the three models. The members selected for each group (i.e., groups A, B, C, D, and T) are shown in Table 1. The R 2 , F, and RMS values for each validation, along with their average values, are shown in Table 6 for the MLR models. As can be seen, the models are stable, judging from the average training quality and the average predicting quality. RBFNN Results In the field of QSAR research, RBFNN often performs better than MLR because of its ability to capture nonlinear relationships between molecular structure and activity. To confirm this, RBFNN was used to build nonlinear predictive models using the same descriptors selected for the MLR models.
The RBFNN architecture can be denoted i-n k -1, indicating the number of units in the three layers. The width r of the RBF was selected by systematically varying its value in the training step from 0.1 to 4.0 in increments of 0.1. For the three training sets in this study, the resulting RBFNN models were 3-10-1, 3-9-1, and 3-9-1, with widths of 0.8, 2.0, and 1.7, respectively. The predicted toxicity values for the training and test sets are listed in Table 1, and the plots of predicted vs experimental values for both training and test sets are displayed in Figures 1b, 2b and 3b. In contrast to the original literature [35], we selected and classified the compounds according to their structural characteristics and then modeled, analyzed, and predicted the corresponding toxicity values separately. The models thus established are more targeted to the particular compound classes, and the statistical results for (R 2 − R 2 0 )/R 2 , k, k', etc., obtained by RBFNN and shown in Table 5, also indicate that the models are statistically robust with high external predictivity. Interpretation of Model Descriptors To deepen the understanding of this study, the descriptors selected for each group are explained in more detail. For Group 1, three descriptors were selected in the QSAR model: G 2 , P AB , and E nn(C-H) . A positive sign of a coefficient indicates that the −log IGC 50 value increases as the descriptor increases, and vice versa. G 2 refers to the gravitation index for all bonded pairs of atoms, defined as G = Σ m i m j / r ij 2 over the N b bonds of the molecule [46], where m i and m j are the atomic weights of atoms i and j, and r ij is the interatomic distance. P AB belongs to the valency-related descriptors, which reflect the strength of intermolecular bonding interactions and characterize the stability of the molecules, their conformational flexibility, and other valency-related properties [47].
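The gravitation index defined above is straightforward to compute from atomic masses, coordinates, and a bond list; a minimal sketch (units and conventions may differ from CODESSA's):

```python
import numpy as np

def gravitation_index(masses, coords, bonds):
    """G = sum over bonded atom pairs (i, j) of m_i * m_j / r_ij^2."""
    g = 0.0
    for i, j in bonds:
        r2 = ((coords[i] - coords[j]) ** 2).sum()   # squared distance
        g += masses[i] * masses[j] / r2
    return g
```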
E nn(C-H) is the maximum nuclear-nuclear repulsion for a C-H bond, calculated as E nn (C-H) = Z C Z H / R CH , where Z C and Z H are the nuclear (core) charges of atoms C and H, respectively, and R CH is the distance between them. This energy describes nuclear-repulsion-driven processes in the molecule and may be related to conformational (rotational, inversional) changes or atomic reactivity [46]. For Group 2, which focuses on the compounds with −X but without −NO 2 , three descriptors were chosen: log P, PNSA-2/TMSA, and P SIGMA. PNSA-2/TMSA is the fractional partial negative surface area FNSA-2 (PNSA-2/TMSA) [Zefirov's PC], which relates the charge-weighted partial negative surface area to the total molecular solvent-accessible surface area [46]. P SIGMA represents the maximum bond order for a given pair of atomic species in the molecule, with the lower limit P SIGMA (min) > 0.1. Log P stands for the solvational characteristic (hydrophobicity) of a chemical, because it is closely related to the change in the Gibbs energy of solvation of a solute between two solvents. For Group 3, three descriptors were selected to build the model: I c , E nn(C-C) , and RPCG; their chemical meanings are given in Table 4. I c is a geometrical descriptor related to the atomic masses and the distances of the atomic nuclei from the main rotational axes, and thus characterizes the mass distribution in the molecule. E nn (C-C) = Z C Z C' / R C-C , where Z C and Z C' are the nuclear (core) charges of the two carbon atoms and R C-C is the distance between them. This energy describes nuclear-repulsion-driven processes in the molecule and may be related to conformational (rotational, inversional) changes or atomic reactivity [48]. RPCG, the relative positive charge, belongs to the electrostatic descriptors.
From its coefficient, we find that the relative positive charge of the molecule is negatively related to the endpoint values (−log IGC 50 ). In summary, the repulsion between bonded atoms and the local charge on the molecular surface appear in different models, indicating that these two factors have a strong influence on the toxicity of these compounds and deserve particular attention. Conclusions In the present study, QSAR models for the acute toxicity of substituted aromatic compounds to the aquatic ciliate Tetrahymena pyriformis were built with the MLR and RBFNN methods, after dividing the whole dataset into three groups based on the most influential functional groups (−NO 2 and −X). The acceptable statistical results of each model indicate good stability and predictability. The results also show that while the MLR method can establish reasonable models for evaluating the activity of the compounds, the RBFNN method provides better statistical parameters. The selected descriptors are effective and feasible for evaluating the toxicity of this group of compounds. Finally, the results of this study provide useful insight into the structural characteristics that most affect the toxicity.
Improved SVM classification algorithm based on KFCM and LDA To address the problem that SVM is sensitive to outliers and noise points, and in order to improve its classification accuracy, this paper introduces fuzzy theory and intra-class dispersion and proposes an improved SVM classification algorithm. KFCM and LDA are used to filter the dataset and select reasonable training samples, thereby reducing the number of outliers and noise points in the training sample and hence their impact on the resulting classification model. Compared with traditional SVM, the proposed algorithm considers the impact of the training samples on the classification result: it introduces fuzzy theory and intra-class dispersion to eliminate the outliers and noise points in the training samples that degrade the classification accuracy. Experiments show that the classification accuracy of an SVM model trained on the filtered training samples is higher than that of an SVM model trained on the unfiltered ones. Introduction SVM is an effective classification method that provides ideal decisions between two or more categories [1]. The SVM algorithm has many unique advantages for pattern recognition problems involving small samples, nonlinearity, and high dimensionality. However, SVM also has disadvantages, such as sensitivity to outliers and noise points, and low training and classification speed when the number of samples is large [2]. A number of scholars have optimized SVM in various ways to improve its classification accuracy on big data. Among them, some have improved the SVM parameters through optimization algorithms. Chiang, H [3] et al.
proposed a decentralized artificial bee colony food-source optimization algorithm and used it to optimize the kernel parameters of the support vector machine model, creating a new hybrid and further improving the classification accuracy. Gao, Y. [4] et al. proposed a TWSVM algorithm based on an improved artificial fish swarm algorithm, which solves the TWSVM parameter selection problem. Other scholars have improved the classification accuracy of SVM by removing redundant support vectors: Wang Yu et al. [5] and Zhao Xiaoqiang et al. [6] improved the training speed and classification performance of SVM by reducing the size of the training set and removing redundant support vectors. This paper proposes an improved SVM classification algorithm based on KFCM and LDA, targeting the impact of outliers and noise points in the training samples on the SVM classification result. The algorithm takes into account the data structure of the training set, introduces intra-class dispersion on the basis of KFCM, and filters the training data, reducing the influence of noise points and outliers on the classification algorithm and thereby yielding an efficient classifier. A comparative analysis of several algorithms shows that the improved SVM classification algorithm based on KFCM and LDA achieves high classification accuracy. SVM basic theory The main idea of SVM is to establish a classification hyperplane as the decision surface that maximizes the margin between positive and negative examples [7]. In other words, when the samples are linearly separable, an optimal hyperplane can be found that separates the points of the two categories correctly while making the classification margin as large as possible.
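The maximum-margin idea can be illustrated with a toy linear SVM trained by sub-gradient descent on the hinge loss. This numpy sketch is for illustration only and is not the kernel PSO-SVM used later in the paper:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Minimise (1/2)||w||^2 + C * sum of hinge losses by sub-gradient
    descent. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                      # points violating the margin
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)
```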
Let the sample set be {(x i , y i ), i = 1, 2, ..., n}, where y i ∈ {−1, +1} is the class label: y i = +1 for a positive example and y i = −1 for a negative example. Let the hyperplane be w T x + b = 0, where w is the weight (normal) vector and b is the bias term. After normalizing the linear equation, the hyperplane satisfies y i (w T x i + b) ≥ 1, i = 1, ..., n. (1) Under constraint (1) the classification margin is 2/||w||, so maximizing the margin amounts to minimizing ||w|| 2 . The SVM training problem can therefore be written as: minimize (1/2)||w|| 2 subject to y i (w T x i + b) ≥ 1. (2) Introducing Lagrange multipliers α i into (2) and solving the dual problem yields the optimal α i , from which w and b can be calculated. The optimal classification hyperplane is w T x + b = 0, and the optimal classification function is f(x) = sgn(Σ i α i y i x i T x + b). (3) In the linearly inseparable case, SVM introduces the kernel trick to map the data from the input space into a high-dimensional feature space in which they become separable [8]. Writing ϕ(x) for the feature vector after mapping x, the objective function is formulated in terms of ϕ(x); introducing Lagrange multipliers and a kernel function K(x i , x j ) = ϕ(x i ) T ϕ(x j ), the corresponding classification function becomes f(x) = sgn(Σ i α i y i K(x i , x) + b). Kernel fuzzy C-means clustering algorithm Based on the C-means clustering algorithm, the KFCM algorithm introduces fuzzy membership degrees to bridge "hard" and "soft" clustering, thus transforming the clustering problem into a fuzzy partition of the data points [9]. The objective function of FCM is J m = Σ i=1..c Σ k=1..n u ik m ||x k − v i || 2 , where u ik is the membership degree of the kth sample to class i, v i is the ith cluster center, c is the number of clusters, and m is the fuzzy factor.
The improved fuzzy C-means clustering algorithm with a kernel function transforms the nonlinear problem in the low-dimensional input space into a linear problem in a high-dimensional feature space [10]. Choosing the RBF kernel K(x, v) = exp(−||x − v|| 2 /σ 2 ) and replacing the distance in the fuzzy C-means objective accordingly, the objective function becomes J m = 2 Σ i=1..c Σ k=1..n u ik m (1 − K(x k , v i )). Using the necessary conditions for a Lagrange extremum, the update formulas for the membership matrix and the cluster centers are obtained as u ik = (1 − K(x k , v i )) −1/(m−1) / Σ j=1..c (1 − K(x k , v j )) −1/(m−1) and v i = Σ k u ik m K(x k , v i ) x k / Σ k u ik m K(x k , v i ). Linear Discriminant Analysis LDA is a classical discriminant method. Its idea is as follows: given a training set, find a projection direction in feature space such that samples of the same category are as close together as possible while the centers of the two categories are as far apart as possible [11]. For a two-class dataset with class means μ 1 and μ 2 , the within-class scatter matrix is defined as S w = Σ i=1,2 Σ x∈X i (x − μ i )(x − μ i ) T , and the between-class scatter matrix as S b = (μ 1 − μ 2 )(μ 1 − μ 2 ) T . The main purpose of the LDA algorithm is to find a reasonable ω that minimizes ω T S w ω while maximizing ω T S b ω. In KFCM, the membership of each point is not assigned exclusively to a single cluster center. For binary classification, if a data point has comparably small membership values to both cluster centers, it is hard to judge which class the point belongs to, and such points are likely to harm the classification model; therefore, the membership values of the training samples can be used for a preliminary screening that eliminates the noise points. Improved SVM classification algorithm The LDA algorithm finds a line based on the given samples and projects the samples onto it, so that the projections of same-class samples are as close and dense as possible and the projections of different classes are as far apart as possible.
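For two classes, the direction ω that maximizes the between-class scatter relative to the within-class scatter has the closed form ω ∝ S w −1 (μ 1 − μ 2 ). A minimal numpy sketch of this projection step:

```python
import numpy as np

def fisher_direction(X_pos, X_neg):
    """Two-class LDA direction: w = S_w^{-1} (mu1 - mu2), normalised."""
    mu1, mu2 = X_pos.mean(axis=0), X_neg.mean(axis=0)
    Sw = np.zeros((X_pos.shape[1], X_pos.shape[1]))
    for Xc, mu in ((X_pos, mu1), (X_neg, mu2)):
        D = Xc - mu
        Sw += D.T @ D                      # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu2)
    return w / np.linalg.norm(w)
```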
Discriminant analysis is carried out on the training samples with the LDA algorithm to obtain the projected data distribution. Where the projected data are densely distributed, those points can be regarded as carrying the main characteristics of the class; therefore, based on the distribution of the projected sample points, the training samples can be screened, and the data with evident class features are kept as the training set for the classification model. In the SVM classification algorithm, outliers and noise points in the training samples have a strong negative impact on the classification performance of the model. Therefore, this paper considers the data structure of the training set, introduces fuzzy theory and intra-class dispersion, and screens the training samples. The improved SVM classification algorithm based on KFCM and LDA is named KFLDPSO_SVM. Its steps are as follows. Step 1: Pre-screen the training data. Define the membership range of the pre-training set as: greater than the minimum membership value and less than the average membership value. KFCM is used to cluster the data and obtain the minimum membership minF and the average membership AVGF. Data within this membership range are screened out and put into ftrain.cvs as the pre-training set; data outside the range are put into the test file. Step 2: The LDA algorithm performs linear discriminant analysis on the pre-training set. According to the projected data distribution, the densely distributed data are put into train.cvs as the training set, and the scattered data are put into test.cvs as the test set. Step 3: An improved PSO is used to optimize the SVM parameters. In PSO-SVM, the population size is 60, the number of iterations is 30, and the fitness function is a 3-fold cross-validation function.
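Step 1 can be sketched as a two-cluster Gaussian-kernel fuzzy C-means followed by the membership-range screening. This is an illustrative numpy sketch with assumed settings (kernel width, initialization, iteration count), not the authors' implementation:

```python
import numpy as np

def kfcm_memberships(X, m=2.0, sigma=1.0, iters=50):
    """Two-cluster Gaussian-kernel fuzzy C-means; returns the (n, 2)
    membership matrix U, using the update formulas given above."""
    # initialise the two centres at one point and its farthest neighbour
    i1 = int(np.argmax(((X - X[0]) ** 2).sum(axis=1)))
    V = np.vstack([X[0], X[i1]]).astype(float)
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None]) ** 2).sum(axis=2) / sigma ** 2)
        d = np.clip(1.0 - K, 1e-12, None)       # kernel-induced distance
        U = d ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)       # membership update
        W = (U ** m) * K                        # centre update weights
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U

def prescreen(U):
    """Step-1 rule: keep points whose largest membership lies strictly
    between the minimum (minF) and the average (AVGF) of all largest
    memberships."""
    best = U.max(axis=1)
    return (best > best.min()) & (best < best.mean())
```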
An adaptive inertia weight is used, with a maximum value of 0.9.

Relevant data extraction
To verify the classification performance of the new model, the experimental data in this paper are taken from the datasets provided on the LibSVM official website [12]: German, Heart and Australian. Table 1 lists the attribute information of the datasets in order of size.

Table 1. Attribute information of the experimental data sets

Verification and analysis
First, each experimental data set is input and clustered with the KFCM algorithm; the membership degree of each data point is calculated, and the points whose membership values lie within the specified range are screened out. Then, the LDA algorithm performs linear discriminant analysis on the filtered data and eliminates the outliers that reduce the efficiency of the classification model. After LDA discrimination, the following data distributions are obtained:

(a) German data set (b) Australian data set
Fig. 1 Distribution of pre-training samples after LDA projection

In the figure above, the y axis is the class label and the x axis is the value obtained by projecting the original data. Fig. 1(a) shows the distribution, after LDA projection, of the pre-training samples of the German data set screened by KFCM: the negative class is mainly distributed in the range (0.004, 0.011), and the positive class in the range (0.058, 0.012). Fig. 1(b) shows the distribution of the pre-training samples of the Australian data set projected by LDA: the negative class is mainly distributed within (-0.022, -0.005), while the positive class is mainly distributed within (-0.017, -0.003).
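The PSO settings mentioned above (population 60, 30 iterations, inertia weight capped at 0.9) can be illustrated with a minimal particle swarm optimizer. This is a generic sketch with a linearly decreasing inertia weight, not the paper's exact "improved PSO", and the fitness function here is a toy objective rather than SVM cross-validation accuracy.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=60, n_iter=30,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize f over box bounds; inertia weight decreases linearly from w_max to w_min."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()          # global best
    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / max(n_iter - 1, 1)  # 0.9 -> 0.4
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# toy fitness: sphere function (a stand-in for cross-validation error over (C, gamma))
best_x, best_f = pso_minimize(lambda z: float((z ** 2).sum()),
                              bounds=[(-5, 5), (-5, 5)])
```

In PSO-SVM, the particle position would encode the SVM hyperparameters and `f` would return the 3-fold cross-validation error.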
The densely distributed data points in the pre-training sample were put into train.cvs and the sparsely distributed points into test.cvs. The PSO-SVM classification model was trained with the training set obtained by screening, and the remaining data were used as the test set to verify the model. At the same time, the PSO-SVM and GA-SVM classification models were compared with the model proposed in this paper, with a training-to-test ratio of 3:7.

As can be seen from Table 2, for the German data set, the training data were first pre-screened with the KFCM algorithm, the pre-training samples were then screened with the LDA algorithm, and finally the selected training samples were used to train the SVM classification model; the resulting classification accuracy is 75.9%, against 72% for GA-SVM and 75% for the SVM classification model trained without screening the training samples. For the Heart data set, the classification accuracy of the KFLDPSO_SVM algorithm is 71.9%, versus 64.2% for GA-SVM and 65.4% for PSO-SVM. For the Australian data set, the classification accuracy of KFLDPSO_SVM is 82.6%, versus 61.8% for GA-SVM and 67.6% for PSO-SVM. The classification accuracy of the KFLDPSO_SVM algorithm is thus higher than that of the other two classification models. We can conclude that screening the training samples with KFCM and LDA improves the classification accuracy of the SVM algorithm: after the KFCM and LDA selection, the class characteristics of the training samples are more evident, and the outliers and noise points that degrade classification accuracy are effectively reduced, thereby improving the classification performance and accuracy of the SVM classification model.
Conclusion
In this paper, an improved support vector machine (SVM) classification model based on KFCM and LDA is established by introducing fuzzy theory and within-class scatter. The KFCM and LDA algorithms are used to screen the data set and select suitable training samples; these samples are used to train the SVM classification model, and the test set is then used to evaluate it. The experiments show that the SVM classification model trained on the screened training samples achieves higher classification accuracy than an SVM model trained on randomly selected samples; in other words, screening training samples through KFCM and LDA improves the classification performance of the algorithm. Screening reduces the number of outliers and noise points in the training samples and thereby improves the classification accuracy of the model.
BEHAVIORAL PROBLEMS IN EPILEPTIC CHILDREN – A TERTIARY CARE EXPERIENCE
Dr. Virender Kumar, MBBS, MD, Dr. Uruj Qureshi, MBBS, MD (Community Medicine) and Dr. Geeta Kumari, MBBS. 1. Assistant Professor, Department of Paediatrics, GMC Srinagar. 2. Medical Officer, Health & Medical Education, J&K Govt.

Frequency of seizure was defined as per Sabbagh, et al. 28. All patients were receiving antiepileptic drugs (phenytoin sodium/sodium valproate/carbamazepine/clobazam), either as monotherapy or in combinations of two or three. Children who were admitted for acute control of seizures were assessed once the seizures were controlled and they had been discharged from the hospital. Controlled seizure was defined as being seizure-free for at least 6 months before assessment; cases with recurrence of seizures despite antiepileptic medications were considered as uncontrolled seizure. The revised Kuppuswamy scale 29 was used for the assessment of socio-economic status. Assessment for behavioral problems was done by a clinical psychologist. The native language of the study population was Kashmiri, and the questions were translated from English into Kashmiri. Counseling was provided to children and families with clinical-range abnormalities, and non-responders were referred to a psychiatrist for pharmacotherapy.

Statistical analysis
Data obtained were entered into Microsoft Excel and analysed in the Statistical Package for Social Sciences (SPSS Ver. 20). Student's t test was used to compare the observations of patients with controls. The chi-square test was applied for comparisons of proportions. A P value of <0.05 was considered statistically significant.
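The two comparisons described above can be sketched with SciPy; the scores and counts below are hypothetical illustration values, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# hypothetical CBCL-style scores: 70 cases vs 70 controls
cases = rng.normal(55, 10, 70)
controls = rng.normal(48, 10, 70)
t_stat, p_ttest = stats.ttest_ind(cases, controls)  # Student's t test

# hypothetical 2x2 counts: behavioral problems (yes/no) by group
table = np.array([[25, 45],   # epilepsy: with / without problems
                  [8, 62]])   # controls: with / without problems
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)  # chi-square on proportions

significant = p_ttest < 0.05  # the study's significance threshold
```

`chi2_contingency` applies Yates' continuity correction by default for 2×2 tables, which is a common convention for this kind of proportion comparison.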
Results:-
A total of 70 children with epilepsy and 70 healthy controls in a similar age group were enrolled, and were further sub-divided into two age groups: 2-5 years (32 epilepsy and 29 controls) and 6-14 years (38 epilepsy and 41 controls). The mean and standard deviation of age of onset of disease were 2.4±1.73 years and 4.3±2.32 years in the 2-5 and 6-14 years age groups, respectively. There were 41 males in the 2-5 years and 51 in the 6-14 years age groups among cases with epilepsy. In the 2-5 years age group, 31 (51.7%) received sodium valproate, 10 (16.7%) phenytoin sodium and 19 (31.7%) drugs in combination (levetiracetam, carbamazepine/oxcarbamazepine, clobazam); the corresponding figures in the 6-14 years age group were 45 (56.2%), 10 (12.5%) and 25 (31.3%), respectively. There were no significant differences in total behavioral problems between children on monotherapy and those on polytherapy, in either the younger (10.5% vs 17.1%, P=0.35) or the older age group (35% vs 41.5%, P=0.41). A relatively higher percentage of children with below-average IQ had total behavioral problems in comparison to those with average IQ in both the younger (18.6% vs 13.6%, P=0.96, relative prevalence (RP) 1.15, confidence interval (CI) 0.25-5.30) and the older age group (49% vs 34%, P=0.15, RP 1.03, CI 0.39-2.75), but the differences were found to be insignificant. Thirty-nine (65%) children in the 2-5 years group and 44 (55%) in the 6-14 years group had controlled seizures at the time of assessment, and the rest had uncontrolled seizures. In the younger age group, there was no significant difference in the occurrence of behavior problems between children with controlled and uncontrolled seizures (2.5% vs 9.5%, P=0.25, RP 0.18, CI 0.48-12.37). However, in the older age group, children with uncontrolled seizures had a higher incidence of behavior problems than children with controlled seizures (50% vs 18.1%, P=0.003; RP 2.44, CI 0.07-0.50). None of the parents of cases had any history of psychological problems.
No significant differences in the mean values of the different domains were found between children on monotherapy and polytherapy in either age group. However, in the 6-14 years age group, uncontrolled seizures were significantly (P<0.05) associated with internalizing behavioral problems. Mean behavioral scores in patients with epilepsy aged 2-5 years were significantly higher than in controls in the CBCL domains of emotional reactivity (P=0.021), withdrawn (P=0.004), attention problems (P<0.001), aggressive behavior (P<0.001), externalizing (P<0.001) and total behavior problems (P<0.001). In the 6-14 years age group, all the domains showed significantly higher scores in patients than controls, except somatic complaints and thought problems. Further, 23.3% of children with epilepsy aged 2-5 years had externalizing behavior scores in the clinical range, and 21.2% and 45% of those aged 6-14 years had internalizing and externalizing behavior scores in the clinical range, respectively.

Discussion:-
In the present study, most of the behavior domains in children with epilepsy had higher mean scores than controls, but below the cut-off levels. Externalizing behavioral problems appeared to affect patients of both age groups, but internalizing behaviors such as depression and anxiety were mostly limited to school-age children. Impaired attention, anxiety, depression, hyperkinesis, impulsivity, low self-esteem and thought problems are some of the co-morbidities reported earlier, mostly in mixed age groups of children 11,12,13,15. In addition, educational underachievement has also been observed in these children 29. Behavior problems may occur not only with idiopathic epilepsy but also with secondary causes such as neurocysticercosis 30. Abnormal excitability and disrupted synaptic plasticity in the developing brain can result in epilepsy and subsequently behavior problems in these patients 31.
We did not observe any difference in the incidence of behavioral problems between children with below-average IQ and cases with average IQ in either age group. It may be that the effect of IQ was not distinctly seen because of the smaller number of cases in the sub-groups. In contrast, Buelow, et al. 32 observed a higher risk of occurrence and higher mean problem scores in cases with low IQ compared to patients in the middle or high IQ groups, and all types of problems were found in children with low IQ. Similar to our findings, Powell, et al. 33 also observed no significant difference in behavior between children with epilepsy having decreased seizure frequency and those with good seizure control. A significant effect of age of onset, frequency of seizures and number of antiepileptic drugs on behavioral problems has been reported earlier 11,16,28. We found that younger age of onset and frequency of seizures were significantly associated with behavioral problems. In addition, duration of disease in both age groups, and antiepileptic drugs in older children, also affected the internalizing problems. However, no difference in behavioral problems was observed between mono- and polytherapy. In contrast, an effect of polytherapy on behavioral problems was found by Datta, et al. 34 in their patients with epilepsy. It appears that multiple factors affect the behavioral domains in children with epilepsy. Further, it is likely that the child's psychological perception of the disease situation, especially in older children, could be another contributing factor to the patient's behavior during the course of illness. Thus, the use of the minimum number of antiepileptic drugs needed for seizure control should be the aim, to minimize the occurrence of behavioral impairment in these children. The strength of the present study is the use of a standardized, validated measurement tool, applied in two age groups, to observe the different behavioral patterns.
However, it has certain limitations, as the findings are based only on parent-reported observations. We did not assess the effect of parental educational level or obtain teacher reports for school-going children, which may limit the generalizability of the results to some extent. Further, it would also be pertinent to carry out follow-up assessments to document resolution of problems after discontinuation of treatment. In conclusion, due attention should be given to the recognition of behavioral co-morbidities in children with epilepsy. They need periodic assessment during epilepsy treatment and, if abnormalities are detected, may need counseling as well as adjustment on the part of parents.
Shaping ability of reciprocating and rotary systems in oval-shaped root canals: a microcomputed tomography study
ABSTRACT This study compared the shaping ability of single-file reciprocating (WaveOne Gold) and multifile rotary (Mtwo) systems on mandibular oval-shaped canine root canals, using microcomputed tomography (micro-CT). Thirty mandibular canines were scanned by micro-CT and assigned to one of two groups (n=15) according to the system used for root canal preparation: WaveOne Gold or Mtwo. After preparation, the teeth were rescanned, and the percentage of untouched canal area, apical transportation and centering ability were analyzed. The data were evaluated using Kruskal-Wallis and Mann-Whitney tests (p<0.05). No difference was found in the percentage of unprepared canal area between groups in the entire root canal or the apical third, or in centering ability (p>0.05). WaveOne Gold showed less canal transportation than Mtwo at the 5 mm section (p<0.05). The WOG and Mtwo systems presented similar shaping ability and centering ability in oval-shaped canals. However, WOG presented less transportation than Mtwo at 5 mm from the apex.

INTRODUCTION Intracanal microbial reduction is the primary goal of root canal treatment, and is accomplished through irrigation, chemical debridement, and the mechanical action of instruments 1, allowing periradicular tissue healing. However, these steps can be difficult to complete due to the complexity of root canal anatomy 2. The internal canal configuration of mandibular canines has a high incidence of oval-shaped root canals 3. Several rotary and reciprocating systems are used to promote complete cleaning of oval-shaped canals 4, but leave unprepared areas after root canal instrumentation [4][5][6]. Furthermore, anatomical complexities can also make it difficult to control infection during instrumentation, allowing accumulation of hard-tissue debris, with microorganisms remaining in areas that instruments are unable to reach [4][5][6].
Remaining microorganisms have the potential to perpetuate periapical inflammation and compromise the success of endodontic treatment 7. Therefore, endodontic instruments with different kinematics and heat treatments have been developed to deal with root canals with complex anatomy, such as oval-shaped root canals 8. The WaveOne Gold system (Dentsply-Sirona, Ballaigues, Switzerland) is a reciprocating single-file system made of a heat-treated gold metal alloy (M-wire) 9,10. It has a triangular convex cross-sectional design with two cutting edges, resulting in one or two points of contact between the cutting edges and the dentin walls 9, which can increase flexibility and improve cyclic fatigue resistance compared to conventional NiTi alloys 11,12. Mtwo is a well-known superelastic (SE) NiTi rotary system (VDW, Munich, Germany), with an "S"-shaped cross-sectional design, a positive rake angle with 2 cutting edges, and low radial contact to increase flexibility and improve performance during root canal preparation 13,14. Its shape enables effective dentin cutting and greater removal of root canal residue 15. Therefore, the aim of this ex vivo study was to evaluate the shaping ability of single-file reciprocating (WaveOne Gold) and multifile rotary (Mtwo) systems on mandibular oval-shaped canine root canals, using microcomputed tomography (micro-CT). The null hypothesis tested was that there would be no difference between WaveOne Gold and Mtwo in (i) shaping ability or in (ii) apical transportation and centering ability of mandibular oval-shaped canine root canals.

MATERIAL AND METHODS This study was approved by the Iguaçu University Ethics Committee, Rio de Janeiro, Brazil (n.2.435.836).
Sample size calculation
A power calculation was performed based on data from a previous study 16, with G*Power 3.1 software (Heinrich Heine University, Düsseldorf, Germany), using a power (1 − β) of 95% and α = 5% as inputs to an independent-samples test from the t-test family. The ideal sample size for each group was a minimum of 10 teeth. Five additional specimens per group were added to compensate for possible sample loss.

Specimen selection
Thirty mandibular canines with moderately curved roots (10° to 20°) 17 were selected from a pool of 300 teeth from the Bank of Human Permanent Teeth of Iguaçu University. The teeth had been extracted for reasons unrelated to this study, and consent was secured prior to tooth donation. The teeth evaluated in this study were from patients of the metropolitan region of Rio de Janeiro city. The remaining attached tissue was removed, and the teeth were stored in distilled water until use. All samples were scanned by micro-CT (SkyScan 1173, Bruker, Kontich, Belgium) operated at 50 kV and 160 mA, with a 1-mm-thick aluminum filter, 320-millisecond exposure time, 12.1 µm pixel size, 0.8° rotation step, and 360° rotation along the vertical axis. The files were then reconstructed into a three-dimensional dataset with the software NRecon v1.6.1.0 (Bruker micro-CT). Reconstruction parameters included a 50% beam-hardening correction, a ring artifact correction of 10, and fixed contrast limits (0-0.05) for all image stacks. The volume of interest extended from the cementoenamel junction to the apex of the root, resulting in the acquisition of 600 to 700 axial cross-sections per sample. Then, CTAn (v.1.14.4, Bruker Micro-CT) and CTVol (v.2.2.1, Bruker Micro-CT) software were used to evaluate root canal morphology and 3D configuration.
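The sample-size calculation above can be reproduced in Python with statsmodels instead of G*Power. The effect size below is a hypothetical placeholder, since the value taken from the reference study is not reported here.

```python
import math
from statsmodels.stats.power import TTestIndPower

# independent-samples t test, alpha = 5%, power = 95%
analysis = TTestIndPower()
effect_size = 1.8  # hypothetical Cohen's d; the actual value came from ref. 16
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05, power=0.95,
                                   alternative="two-sided")
n_per_group = math.ceil(n_per_group)  # round up to whole specimens
```

With a large effect size of this order, the required group size lands near the 10 teeth per group reported by the authors.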
After that, the teeth were matched according to anatomical similarities of preoperative canal volume, canal surface area, and 3D configuration, and randomly assigned to one of two groups (n=15) according to the instrument to be used during root canal preparation: Mtwo (VDW GmbH, Munich, Germany) or WaveOne Gold (Dentsply-Sirona, Ballaigues, Switzerland).

Root canal procedures
Endodontic accesses were performed with high-speed diamond (1014 HL; KG Sorensen, São Paulo, Brazil) and Endo Z burs (Dentsply-Sirona, Ballaigues, Switzerland). A size 10 K-file (Dentsply-Sirona, Ballaigues, Switzerland) was used to determine apical patency, and the working length (WL) was set 1 mm short of the apical foramen. A glide path was established with a size 15 K-file (Dentsply Sirona) up to the WL. The WaveOne Gold (Dentsply-Sirona) and Mtwo (VDW GmbH) systems were activated with a VDW Silver motor (VDW GmbH, Munich, Germany), according to the manufacturers' instructions.

WaveOne Gold system
The WaveOne Gold (WOG) Primary (25/.07) was used in a reciprocating movement with an in-and-out pecking motion and an amplitude of 3 mm, with light apical pressure, until the WL was reached. After three movements, the instrument was removed from the canal and cleaned with wet sterile gauze.

Mtwo system
The root canals were prepared using the sequence 10/.04, 15/.05, 20/.06, 25/.06 at 250 rpm with a pecking motion and a small brushing movement, with light apical pressure, until the WL was reached.

The same irrigation protocol was used for both groups. Root canal irrigation was performed with 2 mL of 2.5% sodium hypochlorite (NaOCl) delivered with a 30-G Endo-Eze needle (Ultradent Products Inc; South Jordan, UT, USA) inserted to 2 mm from the WL. Final irrigation was performed with 2 mL of 2.5% NaOCl, 2 mL of 17% EDTA (Mil Fórmulas, Rio de Janeiro, RJ, Brazil) for 1 min, and 2 mL of 2.5% NaOCl.
The root canals were dried with paper points, after which the teeth were scanned a second time using the same parameters as mentioned above. A single experienced operator performed all procedures.

Micro-CT evaluation
The teeth were submitted to a second micro-CT scan and reconstructed (NRecon) using the same parameters as described previously. The postoperative stacks of the root canals after preparation were registered with their respective preoperative stacks using an affine algorithm in the 3D Slicer software. The software ImageJ 1.50d (National Institutes of Health, Bethesda, MD, USA) was used to evaluate the initial and final volume (mm³), surface area (mm²), percentage of unprepared area, canal transportation and centering ability. The unprepared canal area was determined by dividing the number of static voxels (voxels present in the same position on the canal surface before and after instrumentation) by the total number of voxels present on the root canal surface 6, according to the following formula:

unprepared area (%) = (number of static voxels / total number of surface voxels) × 100

Canal transportation and the centering ratio were calculated at 3 cross-sectional levels (3, 5 and 7 mm from the apical foramen) using the following equations 18:

canal transportation = (m1 − m2) − (d1 − d2)
centering ratio = (m1 − m2)/(d1 − d2) or (d1 − d2)/(m1 − m2), taking the smaller difference as the numerator,

where m1 is the shortest distance from the mesial edge of the root to the mesial edge of the unprepared canal, m2 is the shortest distance from the mesial edge of the root to the mesial edge of the prepared canal, d1 is the shortest distance from the distal edge of the root to the distal edge of the unprepared canal, and d2 is the shortest distance from the distal edge of the root to the distal edge of the prepared canal 18.

Statistical analysis
The degree of homogeneity between the groups at baseline was confirmed through analysis of the initial volume and initial surface area of the root canals (p>0.05). Data distribution was checked for normality with the Shapiro-Wilk test.
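The unprepared-area percentage and the Gambill transportation and centering quantities defined above are straightforward to compute; a minimal sketch follows, with made-up illustration values for the voxel counts and distances.

```python
def unprepared_area_pct(static_voxels: int, surface_voxels: int) -> float:
    """Percentage of the canal surface left untouched by instrumentation."""
    return 100.0 * static_voxels / surface_voxels

def canal_transportation(m1: float, m2: float, d1: float, d2: float) -> float:
    """Gambill et al.: (m1 - m2) - (d1 - d2); 0 means no transportation."""
    return (m1 - m2) - (d1 - d2)

def centering_ratio(m1: float, m2: float, d1: float, d2: float) -> float:
    """Smaller difference over larger difference; 1 = perfectly centered preparation."""
    a, b = m1 - m2, d1 - d2
    big = max(abs(a), abs(b))
    if big == 0:
        return 1.0
    return min(abs(a), abs(b)) / big

# illustration: a preparation shifted mesially at one section (distances in mm)
pct = unprepared_area_pct(static_voxels=4200, surface_voxels=12000)  # ≈ 35.0 %
transp = canal_transportation(m1=0.30, m2=0.10, d1=0.25, d2=0.15)    # ≈ 0.10 mm
ratio = centering_ratio(m1=0.30, m2=0.10, d1=0.25, d2=0.15)          # ≈ 0.5
```

Taking absolute values in the centering ratio is one convention for handling mesially versus distally shifted preparations; a value well below 1 flags an off-center preparation even when net transportation is small.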
Due to the lack of normality, the Kruskal-Wallis test was used for intragroup comparison of transportation and centering-ability parameters. The Mann-Whitney U test was used to compare canal transportation and centering ability between the same canal sections in different groups. The data were processed with Prism 7.0 (GraphPad Software, Inc., La Jolla, CA, USA) and expressed as median, minimum and maximum values. The significance level was set at 5%.

RESULTS The degree of homogeneity of the matched teeth regarding canal volume and surface area before root canal preparation was confirmed (p>0.05). No significant difference was found in the percentage of unprepared root canal areas between groups for the entire root canal or the apical third (p>0.05). Volume and surface area increased after root canal preparation compared to the initial values in both groups. These results are described in Table 1 and Fig. 1. No significant difference was observed in centering ability between the experimental groups (p>0.05). Canal transportation showed no statistically significant differences in the intragroup comparison across the evaluated sections in either group (p>0.05). When each section was analyzed separately, WaveOne Gold showed less transportation than the Mtwo file only at the 5 mm section (p<0.05). No statistical difference was found in centering ability at any of the evaluated levels between groups (p>0.05). The full values are shown in Table 2.

DISCUSSION The development of nickel-titanium (NiTi) rotary systems led to progress in root canal instrumentation 19. However, failures may occur in oval and flattened canals because the instruments generally produce a rounded cross-sectional preparation, making it a challenge to prepare all root canal walls. Instrumentation of these cases is more difficult due to the greater amount of dentin that must be removed to achieve the ideal root canal shape 3,20.
The unprepared areas may harbor remnants of tissue and bacterial byproducts that could cause persistent infection and affect the success of endodontic treatment 21. Neither of the systems evaluated in this study was able to completely prepare the root canal, which agrees with previous studies [22][23][24]. Also, no significant difference was found in unprepared areas between the WOG and Mtwo instruments, either in the entire root canal or in the apical third. Thus, the first hypothesis was accepted. These results can be attributed to the standardization of the apical third by the diameter of the instruments tested 25,26. NiTi instruments have led to significant progress in root canal preparation 27. Centering ability was evaluated as described by Gambill et al. 18, who defined it as the ability of the endodontic instrument to remain on the central axis of the root canal. In the present study, no significant difference was observed in centering ability between the experimental groups, which is in line with other studies 12,28. Although our study showed similar shaping ability overall, when each section was analyzed separately, the WOG file showed less transportation than the Mtwo instrument at the 5 mm section from the apex, which partially rejects the second hypothesis. This result can be explained by the fact that WOG is a gold-wire heat-treated instrument, while Mtwo is a superelastic NiTi instrument without controlled memory. Thermally treated NiTi alloys present a higher percentage of martensitic phase, which is more flexible than conventional NiTi, and this may explain the smaller canal transportation of WOG at the 5 mm section from the apex 29. The present study selected only long oval-shaped canals because they are considered a significant clinical challenge 30.
Moreover, the sample was selected through micro-CT analysis, which provides excellent pairing of teeth, reducing the anatomical bias related to heterogeneity of root canal morphology 4. The micro-CT technique yields reliable results in the evaluation of 2D and 3D parameters of root canal preparation, as it is a trustworthy, precise method for this kind of analysis 5. Based on our results, the WaveOne Gold and Mtwo systems presented similar shaping ability and centering ability during oval-shaped root canal preparation. However, WOG presented less transportation than Mtwo at the 5 mm section from the apex. Different lowercase letters in each column indicate statistically significant differences within the same group between all evaluated sections. Different uppercase letters in each column indicate statistically significant differences between groups for each evaluated canal section.
A lethal mice model of recombinant vesicular stomatitis viruses for EBOV-targeting prophylactic vaccines evaluation
Highlights
• The recombinant VSV-EBOV is highly lethal to IFNα/β/γ R−/− mice.
• The infected mice exhibit hyperviremia and distinct splenohepatic lesions.
• Virus replicon particle vaccine can protect all IFNα/β/γ R−/− mice against VSV-EBOV lethal challenge.
• This surrogate model can be used for EBOV vaccine evaluation.

embedded with EBOV GP are the basis for implementing surrogate animal models in standard BSL-2 laboratories. In recent years, the live attenuated EBOV vaccine, a replication-competent vesicular stomatitis virus encoding EBOV GP (VSV-EBOV), has been approved in the EU and USA with a good safety profile and extensively studied in standard BSL-2 facilities (Garbutt et al., 2004; Lee et al., 2021; Lee et al., 2023). Its embedded EBOV GP, in place of the parental VSV G, confers infectivity on this recombinant virus and can elicit GP-specific antibodies with neutralizing activity against EBOV (Garbutt et al., 2004). Therefore, VSV-EBOV can be used as an alternative to EBOV, eliminating the need for BSL-4 facilities in research on GP-associated viral pathogenicity and GP-targeting vaccines. Surrogate animal models based on VSV-EBOV have been well studied, including neonatal C57BL/6 mice, Syrian hamsters and immunodeficient mice (McWilliams et al., 2019; Saito et al., 2020; Lee et al., 2021). In particular, acute and fatal disease can be induced in immunodeficient mice after VSV-EBOV infection, such as transcription factor STAT1-knockout (STAT1−/−) and interferon α/β receptor-knockout (IFNAR−/−) mice (Marzi et al., 2015). This suggests that such models can be used in BSL-2 facilities for the evaluation of vaccines by using replication-competent VSV-EBOV instead of wild-type EBOV.
In this study, we found a surrogate EBOV lethal mouse model usable in standard BSL-2 facilities by infecting type I and II interferon receptor-knockout (IFNα/β/γ R−/−) mice with VSV-EBOV. Additionally, a Venezuelan equine encephalitis virus (VEEV) replicon particle expressing EBOV GP (EBOV VRP), a previously reported EBOV vaccine candidate (Pushko et al., 2000), showed protective efficacy in this model, suggesting its potential as a platform for evaluating EBOV GP-targeting vaccines in BSL-2 facilities.

Further analysis was conducted to determine tissue tropism and organ viral load. IFNα/β/γ R−/− mice were infected with 1 × 10^6 PFU VSV-EBOV and euthanized at 2 dpi for tissue collection, including heart, liver, spleen, lung, kidney, intestine, brain and eye (Fig. 1E). As expected, VSV-EBOV could be detected in all the tissues, with the key targets being liver, spleen, kidney and lung (Fig. 1E). Having established that VSV-EBOV infects various tissues of IFNα/β/γ R−/− mice, we next assessed whether the infection was associated with pathologic changes. On day 2 after VSV-EBOV infection, IFNα/β/γ R−/− mice were sacrificed for tissue collection, including liver, spleen, kidney and lung. Hematoxylin-eosin (H&E) staining showed evident histologic lesions in the livers (unclear hepatic lobular structure, hepatocyte hemorrhage and necrosis, slight steatosis, and scattered chronic inflammatory cell infiltration) and spleens (atrophy of splenic bodies, congestion of red pulp, marked reduction and partial necrosis of lymphocytes) of all VSV-EBOV-infected mice (Fig. 1F), similar to the histopathological features of authentic EBOV infection in rodents (Raymond et al., 2011; Cross et al., 2015). In contrast, there were no significant differences in the other two organs between the two groups (Fig.
1F), with only mild spontaneous lesions in the kidney (slight atrophy of the small renal tubules and occlusion of the glomerular capillary network) and lung (slight thickening of the alveolar septum and congestion of the blood vessels). In addition, quantitative pathological scoring showed the same trends as the H&E observation (Fig. 1G). Taken together, VSV-EBOV infection caused severe liver and spleen lesions in IFNα/β/γ R−/− mice.

On this basis, we utilized the EBOV VRPs to evaluate whether IFNα/β/γ R−/− mice can be used as a platform for in vivo evaluation of prophylactic vaccines against EBOV. Groups of IFNα/β/γ R−/− mice aged 8-10 weeks were i.p. immunized with two doses of 5 × 10⁶ IU EBOV VRPs at a two-week interval (Fig. 1H). On day 14 after the first immunization, considerable EBOV GP-specific IgG antibodies and pseudoviral neutralizing antibodies (pVNA) were detected, with geometric mean titers (GMT) of 1/3200 and 1/71, respectively (Fig. 1I and J). Booster vaccination induced a further increase in IgG and pVNA, with GMT of 1/10,763 and 1/1345 on day 28 (Fig. 1I and J). Then all mice were challenged with 1 × 10⁵ PFU VSV-EBOV via the i.p. route and monitored daily for clinical symptoms and weight changes (Fig. 1H). VRP-treated mice were fully protected, surviving with no signs of illness or weight loss (Fig. 1K-L). Undetectable viremia during the first two days after challenge also supported the superior efficacy of the EBOV VRPs (Fig. 1M). In contrast, all PBS-treated mice developed disease rapidly and succumbed at 3 dpi with high levels of viremia (Fig. 1K-M). Collectively, these data demonstrated that VSV-EBOV-infected IFNα/β/γ R−/− mice could be used as an effective animal model for vaccine evaluation against EBOV in BSL-2 facilities.
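The geometric mean titers (GMT) reported above can be computed as the exponential of the mean log titer. A minimal sketch, using made-up reciprocal endpoint titers rather than the study's raw data:

```python
import math

# Illustrative only: the titer values below are invented, not the study's data.
# GMT = exp(mean of the natural-log titers), i.e. the n-th root of the product.
def geometric_mean_titer(reciprocal_titers):
    logs = [math.log(t) for t in reciprocal_titers]
    return math.exp(sum(logs) / len(logs))

titers = [1600, 3200, 3200, 6400]
print(round(geometric_mean_titer(titers)))  # -> 3200
```

The geometric mean is the standard summary for serial-dilution titers because the measurements are log-distributed.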
In summary, we observed acute and fatal outcomes in IFNα/β/γ R−/− mice after VSV-EBOV infection, including hyperviremia, weight loss and histopathology of the liver and spleen. Moreover, EBOV VRPs, a reported vaccine candidate against EBOV (Pushko et al., 2000), were also shown to be effective in VSV-EBOV-infected IFNα/β/γ R−/− mice. Due to the expression of EBOV GP and the limited biological risk of VSV-EBOV, this surrogate animal model could serve as a useful tool for in vivo screening of therapeutics and vaccines against EBOV targeting GP in BSL-2 facilities, rather than BSL-4 ones.

Footnotes

This work was supported by the National Natural Science Foundation of China (Grant No. U20A2014) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB0490000). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. We are grateful to the Center for Animal Experiments staff (Xue-fang An, Fan Zhang, He Zhao, Li Li, Tao Zhang and Yuzhou Xiao) at the Wuhan Institute of Virology and the Wuhan Key Laboratory of Special Pathogens and Biosafety for helpful support during the course of the work. The authors declare that they have no conflict of interest. Prof. Bo Zhang is an editorial board member for Virologica Sinica and was not involved in the editorial review or the decision to publish this article. All the mice were cared for following the recommendations of the National Institutes of Health Guidelines for the Care and Use of Experimental Animals. Studies related to virus infection were performed in a biosafety level 2 (BSL-2) facility at the Wuhan Institute of Virology under a protocol approved by the Laboratory Animal Ethics Committee of the Wuhan Institute of Virology, Chinese Academy of Sciences (WIVA26202301).

Fig. 1. Evaluation of VSV-EBOV-infected IFNα/β/γ R−/− mice as a surrogate EBOV lethal model for vaccine development. A Schematic diagram of VSV-EBOV infection in IFNα/β/γ R−/− mice. Cohorts of adult IFNα/β/γ R−/− mice (n = 3) aged 8-10 weeks were intraperitoneally inoculated with 1 × 10⁵ or 1 × 10⁶ PFU VSV-EBOV. Weight changes (B) and survival rates (D) were recorded daily for 14 days after infection. Viremia (C) was detected within the first two days after infection. Blood samples were collected from the orbital sinus of the mice. The horizontal dotted line represents the limit of detection: 100 PFU/mL. E Tissue tropism and organ virus load of VSV-EBOV in IFNα/β/γ R−/− mice. Mice were i.p. infected with 1 × 10⁶ PFU VSV-EBOV and sacrificed at 2 dpi for tissue collection. Viral loads in different tissues of infected mice were detected at 2 dpi. Liver, spleen, kidney and lung were collected and fixed for hematoxylin and eosin (H&E) staining (F) and quantitative pathological scoring (G). Scale bars represent 500 μm at 50× magnification and 100 μm at 200× magnification. H Schematic diagram of EBOV VRP vaccination and VSV-EBOV challenge in mice. Groups of IFNα/β/γ R−/− mice (n = 4) aged 8-10 weeks were i.p. immunized with 5 × 10⁶ IU EBOV VRPs on days 0 and 14. I, J EBOV GP-specific IgG and pVNA were measured by ELISA and plaque reduction neutralization test (PRNT) on day 14 after each immunization. Blood samples were collected from the orbital sinus of the mice. The horizontal dotted line represents the limit of detection: 1:50. On day 28, the immunized mice were i.p. challenged with 1 × 10⁵ PFU VSV-EBOV. Clinical symptoms, weight changes (K) and survival rates (L) were monitored daily for 14 days. M Viremia was detected within the first two days after challenge. The horizontal dotted line represents the limit of detection: 100 PFU/mL. Data represent the geometric mean ± standard deviation at each time point in each group. The asterisks denote statistical differences between the indicated groups. n.s., no statistical difference; *, P < 0.05; **, P < 0.01; ***, P < 0.001; ****, P < 0.0001.
Return of the Vision Video

This paper examines the role of corporate vision videos as a possible setting for participation when exploring the future potentials (and pitfalls) of new technological concepts. We propose that through the recent decade's rise of web 2.0 platforms, and the viral effects of user sharing, the corporate vision video of today might take on a significantly different role than before, and act as a participatory design approach. This addresses the changing landscape for participatory and user-involved design processes in the wake of new digital forms of participation, communication and collaboration, which have radically changed the possible power dynamics of the production life cycle of new product development. Through a case study, we pose the question of whether the online engagements around corporate vision videos can be viewed as a form of participation in a design process, and thus revitalize the relevance of vision videos as a design resource?

INTRODUCTION

Corporate vision videos are a genre of moving images which act as an externalisation of a company's strategy, made manifest through imagining how a strategy could result in a specific, and often futuristic, scenario of what the value proposition might look like if the strategy is followed (Buur & Ylirisky 2007, Bergman et al. 2004). As such, vision videos differ from traditional storytelling, such as science fiction, since the video scenario is grounded in the reality of the company here and now, and aimed towards the possible effects of current or new strategic choices. Thus, the assumption is that vision videos show a systematic look into a possible future for the corporation, and thus in themselves become a theory of what might be. This makes the videos act as metaphorical flagpoles for the company's employees, meant to guide them from a distance towards an idea of a concept, rather than through a formal specification.
The intent is to demonstrate potentials, and drive the company's initiatives and investments, as well as spark the imagination of what can and should be made. Especially within the field of ICT, vision videos have often been used as an approach to explore the strategic potential of new technology, often long before it is feasible to realise any technical implementation or prototypes. Already in 1987, Apple's Knowledge Navigator videos made use of animation to portray the future use of technologies then only at the R&D stage (Buxton 2010, Dubberly 2007). Together with other examples from Sun Microsystems (Tognazzini 1994) and Nokia (Ylirisky & Buur 2007), a programme of using video in design visions has existed for at least 30 years. A specific trait of vision videos is their level of visual fidelity. Compared to related ways of using temporal media in design (e.g. Zimmerman 2005, Mackay et al. 2000, Vistisen 2016), vision videos almost exclusively employ a high level of visual fidelity, resembling real implemented products. By employing special effects and theatrics, vision videos simulate advanced interfaces, and users interacting with them in a natural use context, as if the concept actually existed and the user scenario actually happened. But these characteristics have also led to criticism of whether vision videos actually benefit the design process in any meaningful way. Buxton (2010), Dubberly (2007), Ylirisky & Buur (2007), and Tognazzini (1994) all highlight a series of critiques based on vision videos produced from 1987 to 2009. Buxton argues that vision videos become too persuasive, by portraying concepts which might not be final, but are interpreted that way by the employees, due both to the fidelity and to the polished way the technology's implementation is often portrayed. Tognazzini and Dubberly provided a similar critique of loss of control from their experience in using vision videos as internal design deliverables.
Finally, Buur & Ylirisky's critique provides a pragmatic evaluation of the time and resources spent on making a high-fidelity vision video for Nokia's future concepts, versus the actual strategic benefits it created among either the employees or the board of directors at Nokia. Their conclusion was that video is a viable tool to sketch with, but that the role of the high-fidelity vision video was more questionable.

PARTICIPATION THROUGH VISION VIDEO?

The background outlined above indicates a bit of a paradox. On one side, corporate vision videos have been widely used in the ICT industry for decades, while at the same time their value has historically been frowned upon as being too persuasive, didactic, and costly to be of much use as a creative or collaborative tool in the design process. We argue that the critique of the approach is better understood as indicating that the role of the vision video might not be solely as an internal design deliverable. If we examine in which research environments the existing corporate vision videos are currently being referenced, we see a preponderance of contributions referencing the videos coming from the rather new field of 'design fiction': "...the deliberate use of diegetic prototypes to suspend disbelief about change" (Sterling 2013). From this point of view, the vision videos act as diegetic prototypes for a proposed new use of technology, and the goal is not just to set a guideline for an internal vision, but to invite others to reflect upon the discursive space of the video. If considered through the lens of design fiction, vision videos become an externally oriented design deliverable, with the goal of obtaining feedback, critique and new ideas from a larger pool of stakeholders, including potential end-users. This somewhat frames an ontological political concern (Gaver 2012) in the videos, by letting a multitude of stakeholders comment upon what potentially could be released by the corporation.
In fact, some of the most recent examples of corporate vision videos, coming from corporations as diverse as Jaguar, Google, and IKEA, seem to have taken this externally oriented approach by submitting their vision videos to social media platforms such as Youtube, Vimeo and Twitter. Through the organic viral mechanisms of these platforms, bloggers as well as more formally organised media outlets have picked up the vision videos as 'trending stories', sparking even further interest. Thus, recent vision videos have gathered millions of views on social platforms, and fostered thousands of comments and feedback for the corporations to gather. We propose that this changed pattern of using corporate vision video, in combination with social web 2.0 platforms, indicates a new configuration of user participation, extending on the contributions from e.g. Vines et al. (2013). The fundamental idea of participatory design is that people besides the associated design team possess valuable knowledge and hereby can contribute to a design process by various means (Bødker et al. 1993). When releasing a vision video as a publicly available design fiction, the user participation might be seen as a pragmatic effort from the corporation to gather inputs, and probe the interest from the public, before investing heavier R&D resources in actual technical implementations. Through the social technologies, the potential users are given a voice, but also potentially a way to influence design decisions by taking part in the formation of a public discursive space around the concept, in line with what Hagen & Robertson (2009) categorize as 'opening up' the design process for external participation. We argue this positions the vision video as a tool which leverages the classic values of participatory design (e.g.
Halskov & Hansen 2015) by democratically involving the end-user, listening to a variety of perspectives, in combination with framing the design space around a diegetic prototype, inviting users from around the world to reflect upon the product. As such, this contributes to the knowledge of the potentials and challenges of large-scale participatory design provided by e.g. Oosterveen & van den Besselaar (2004) and Simonsen & Hertzum (2008).

RESEARCH SETUP

To analyse this phenomenon of corporate vision videos, we have collected and sampled the user feedback and interaction around a specific instantiation of a recent corporate vision video, the Land Rover case, gathered throughout the last two years.

THE LAND ROVER CASE

In April 2014, at the New York Auto Show, Land Rover presented the concept for its new SUV, which included a so-called 'Transparent Bonnet system'. The concept proposed a system using augmented reality (AR) cameras to make the hood semi-transparent, to make navigating up-close obstacles like rocks and narrow tracks easier and safer. The announcement was accompanied by a one-minute vision video depicting the AR system in use, showing how the SUV's hood became semi-transparent when approaching a steep hill. However, in the top right corner of the video, a label stated that the video was a 'Virtual Prototype in Testing', which indicated that it was a diegetic prototype.

COLLECTED DATA

The Land Rover vision video was shared originally through Land Rover's three Youtube accounts (US, UK and Global). However, the video quickly spread both to other online media outlets' Youtube accounts and onto private users' accounts. Thus, to get a clear picture of the online participation, we sampled all identified instances on Youtube which featured the vision video. We identified 25 separate instances based on a series of search keywords and synonyms (appendix 1), which had an accumulated 2,232,263 video views and 310 comments (as of 2/12/2016).
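The de-duplication step described next (310 collected comments, 33 duplicates, 277 unique) can be sketched as keying each comment on its author and normalized text. This is a hypothetical illustration of the procedure, not the authors' actual tooling, and the sample data is invented:

```python
# Comments gathered across the 25 Youtube instances are keyed on
# (author, normalized text) so a comment cross-posted under several
# uploads is counted only once.
def unique_comments(comments):
    seen = set()
    unique = []
    for c in comments:
        key = (c["author"], c["text"].strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

sample = [
    {"author": "a", "text": "Great idea"},
    {"author": "a", "text": "great idea "},   # duplicate cross-post
    {"author": "b", "text": "What about the A pillars?"},
]
print(len(unique_comments(sample)))  # -> 2
```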
(Table omitted: Youtube source, views, and comments for each sampled instance.)

Out of the comments, 33 were duplicates and have thus not been included more than once, bringing the total number of unique comments down to 277.

ANALYSIS OF PARTICIPATION

For the analytical treatment of the collected user participation data, we built a framework consisting of two axes with four opposite ends: serious vs. unserious, and constructive vs. unconstructive (figure 4). 'Serious' and 'unserious' are drawn from the literature on online participation culture (Jenkins 2006), where 'serious' is equivalent to the strong community engagement found in e.g. fandoms, while 'unserious' is found in the sarcastic and often derailing discourse created by so-called 'trolls' (Hardaker 2010). The 'constructive' and 'unconstructive' dimension is to be understood as the quality or value of the information in terms of informing the design process. We draw this dimension from e.g. Sanders & Stappers (2008) and Vink et al. (2008) and their notions of assessing stakeholder involvement in participatory design. Here, constructive feedback is something which helps inform further design moves, and unconstructive feedback is either stagnant or too ambiguous to use in design decision making. We mapped the 277 comments from the various Youtube outlets in a qualitative assessment of which block they represented, based upon the characteristics of what was written. Whenever a cluster formed, understood as when a specific discourse had recurred in multiple instances, it was mapped as a separate theme shared by all the comments in the cluster. A total of 24 themes were formed, ranging from 3 to 50 comments in each theme. On cross-examination between the identified themes, 10 categories could be identified as being representative of multiple themes, such as 'Positive Feedback' and 'Design Details'. In this thematization and categorization we are inspired by the qualitative data analysis traditions of e.g. Kvale & Brinkmann (2009).
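The mapping above amounts to a 2×2 tally: each comment is hand-coded on the two axes and counted per block. A minimal sketch with invented codes (the real coding was a qualitative assessment, not an automated one):

```python
from collections import Counter

# Each tuple is one hand-coded comment: (tone axis, value axis).
# The codes below are illustrative, not the study's data.
coded = [
    ("serious", "constructive"),
    ("serious", "constructive"),
    ("serious", "unconstructive"),
    ("unserious", "unconstructive"),
]

blocks = Counter(coded)
for (tone, value), n in blocks.most_common():
    print(f"{tone}/{value}: {n}")
```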
The block with the most comments was the constructive/serious block. This indicates that the dominating discourse created around the corporate vision video was comments directly addressing aspects of the design, with nuanced arguments and substantiated critique. This result is surprising, since the principal expectation for comments made on semi-anonymous web 2.0 platforms would be a higher degree of unconstructive comments (Phillips 2015). The unconstructive/unserious block was the third most represented, with the unconstructive/serious block coming in as the second most represented, while the constructive/unserious block was the least represented. For the focus of this paper, we will take the constructive/serious block up for a more thorough examination. In this thread, we see the comments center around the contextual challenge of using AR in an off-road car setting. Following this, it is interesting to note how one of the users jokingly remarks that they are not 'the experts', which is interesting insofar as it shows us a paradox between the actual quality of the discussion (revealing a possible design flaw of this specific use of AR) and the role the vision video is framed to have (a virtual prototype in testing, but shared as part of a press release about the upcoming car). With the vision video not aimed at asking the users questions or otherwise inviting participation, some of the potential conversations are stopped before they might have been debated fully. A similar situation played out when some users discussed the possible security and legal concerns of the technology (figure 6). The first comment addresses a question of security: will AR take the attention away from the road and thus increase the risk of a crash? The conversation is further elaborated with details of the potential problem, but is quickly taken in a direction of whether the specific car model is actually suitable for off- and on-road driving.
Had the rhetoric in either the vision video or its accompanying description focused on asking more specifically which concerns the users might have in a specific context, the framing of this type of discussion might have been clearer. However, the example also shows how it only takes another user's participation in the thread to further raise the participatory value of the comments, as when commenting on how this technology might inhibit the car from road driving in a specific state in the US. Thus, the participation raised a security concern, elaborated it, and ended up detailing possible legal issues to be cleared before the technology would be viable on the US market.

CONSTRUCTIVE ENOUGH TO BECOME REAL

An interesting theme formed in the constructive/serious block around using the same AR technology, but with a different purpose in the cars. Instead of using the technology to make the front hood transparent in off-road cars, a number of users discuss the potential of using the technology to instead avoid blind angles from the A and B pillars in the Land Rover (figure 7).

Figure 7: Examples of one of the themes in which users discuss ways to apply the AR concept to reduce blind angles.

The discussions begin by praising the utility of the concept, but criticizing it for not solving a problem for the broader audience of car users, before proposing to use the technology to make the pillars of the cars transparent instead. Two other users discuss how this technology has already been showcased as possible, and a fourth user takes the idea even further by arguing for building the display system into the rear-view mirror. Another category contained a similar theme, including discussions of how this could be applied in other vehicles, such as trucks, putting further emphasis on the problem of the A pillars creating blind angles.
We argue that these themes reveal a very reflective participatory involvement from the users, both giving feedback on the existing concept and proposing new and potentially more useful domains for the technology. Furthermore, most of the participation in these blocks is formulated as arguments which clearly state the feedback as new proposals or as comments on other proposals. As such, the comments need little translation or interpretation to understand the conceptual model of the users' way of understanding the concepts, or what the rationale for their ideas is. This block provides further merit to the hypothesis that the user participation around corporate vision videos on web 2.0 can in fact be constructive enough to potentially inform the design process. After the Land Rover vision video launched in 2014, 7 months passed before Land Rover made new announcements on their R&D efforts on the concept. However, when they launched their next news about the concept, with a new vision video released to Youtube and media outlets in December 2014, the AR concept had changed. Now, Land Rover focused on showing the technology being used for city driving, and on making the A and B pillars of the car transparent, while also tracking the person walking in front of the car (figure 8). We can only speculate whether Land Rover gathered and sampled the users' comments and interactions around the vision video (we reached out, but the company declined to comment on their engineering process) and used them in the further design process. However, we still argue that comparing the later iteration of the concept with the online user reflections on the initial concept can indicate whether it is fair to claim that the user comments can and should be regarded as a potentially important source of user participation in the design process.
Thus, the user feedback, especially arising from the fact that these comments are given as threads of users discussing the concept with each other rather than as singular comments, indicates how the users' participation can actually be constructive enough to provide novel and relevant design ideas for the proposed technology, on par with how the corporation itself ended up iterating on the concept.

DISCUSSION

Research exists on the topic of using online communities and virtual platforms as vehicles for participatory design (e.g. Reyes & Finken 2012, Hagen & Robertson 2009, Näkki 2011). However, fewer contributions have examined the kind of participation where users are not invited or actively focused in the participatory process, but rather participate through the natural unfolding of their online behavior around a specific design deliverable, as we have seen in this paper with the corporate vision video from Land Rover. From an observer point of view, this positions the design researcher as a total observer in the participatory situation, being mainly responsible for creating, sharing and spreading the design deliverable, and afterwards gathering and systemizing the feedback given by the users and assessing the comments' value in informing the design process. But is what we have seen then collaboration? Or even participation? And are there ethical concerns in leveraging how an online community comments on, reflects upon and interprets a design deliverable, without clear consent or knowledge of what their participation is actually used for? Normally, a comment on e.g. a public web site is accessible to every user, and thus the user is explicitly making his or her reflection available for other users to further reflect upon, and thus further participate in the discourse created.
Furthermore, due to the open access of the shared reflections and comments, every user can essentially collect and use other users' comments, even though this is generally not a common behavior (Li & Bernoff 2011). But if design researchers use these communities and their participation as a resource for the design process, are designers then obligated to state this as their explicit goal in e.g. the description text on Youtube? Sterling (2013) argued that one of the most important aspects of design fiction is to allow the viewer to return to the here-and-now reality, to make up their own mind about the consequences and promise of the diegetic prototype depicted. The vision video should not only suspend disbelief about change, but also grab the viewers' attention and imagination only for a short time, before guiding them back to the current status of the technology or concept. Diegetic prototypes, implicit or explicit, exist to show and argue that a technology can and should exist in the real world, and thus, as Kirby (2010) describes, carry a rhetoric aimed at showing necessity, normalcy and viability, while maintaining the fictional take on the real-world ontology. However, this rhetoric also holds much persuasion, and as we have seen in the Land Rover case, some users actually comment on the concept as if it were a real product, some actually indicating that they believe it is. This lack of explicit intent and transparency about the state of the product is one of the critical remarks made by both Buxton (2010) and Dubberly (2007) about the generation of vision videos created and used before the rise of web 2.0 media and the new wave of vision videos. Corporate vision videos must leverage the lessons learned from the design fiction discourse, and explicitly state the intent behind articulating, through a vision video, a design concept which has yet to see real production.
So to speak, the articulations must match the purpose, be it participation, feedback or criticism, so as not to end up as just flashy marketing of non-existing products, or ideas building up expectations which cannot be fulfilled by the realized product.

CHALLENGE OF ASSESSING STAKEHOLDERS

As has been pointed out by some of the early attempts at large-scale online participatory design (e.g. Oosterveen & Besselaar 2004, Simon & Robertson 2012), the challenge of identifying and communicating with relevant stakeholders is much higher online than in the traditional workplace context of participatory design. The asynchronous nature of the participation, which might spike upon the initial sharing of the video and suddenly pick up momentum again some time later, makes for a continuous introduction of new potential stakeholders. Thus, when assessing the bulk of participation a video has generated, the design researcher must take a reverse look at the material to assess the relevant stakeholders. Here, the patterns emerging in our mapping might act as a reversed organizational principle for this identification. For feedback in the unconstructive/unserious block, the value of the user participation has little relevance or use for the design process. The unconstructive/serious block reveals surface-level feedback, which can at best be seen as immediate reactions where the stakeholders can be grouped rather than assessed individually, as when 50 different comments praise the Land Rover concept positively. The constructive/unserious block holds potentially valuable and important user feedback, but requires a deeper interpretive reading for the insights to be gathered, which makes the user participation in this block relevant, but challenging. Finally, the constructive/serious block represents what a participatory design process would see as the core stakeholders, providing relevant and often detailed feedback upon the design issue at hand.
A useful way of thinking about this block of users is as a community of shared interests, sharing a common involvement for a short period of time online. Here, the constructive and serious users simply share another common goal and involvement than the unserious and unconstructive ones: they are essentially different community discourses emerging and participating on the same design issue. As such, we cannot specify the individual stakeholders when assessing a corporate vision video spread through web 2.0 platforms, but rather specify which type of community involvement's output we will devote our research focus to. Building upon this, further studies might be conducted on what value it would have to engage in more active dialogues with the identified users participating on the online vision videos. This would also further qualify our initial insights into the power structures of using online communities as a participatory resource in design.

CONCLUSIONS

In this paper, we have examined the question of whether the online engagements around corporate vision videos can be viewed as a form of participation in a design process, and thus revitalize the relevance of corporate vision videos as a design resource. Corporate vision videos can act as diegetic prototypes, and combined with web 2.0 media we have shown indications that this might also generate valuable participatory feedback for the design process. As noted with the Land Rover case, some of the user discussions about the design of the Land Rover model are actually represented in the latest real-world prototype of Land Rover's transparent car technology. This marks an interesting point of venture into how other ideas depicted in corporate vision videos come to life as real products, and whether the online participation can be accounted for, as in the case of Land Rover.
With the ability to critique, comment, and share new ideas and questions, the participating users potentially get direct access to influence the design. The question remains whether the users are aware of the potential their participation holds, and whether a more explicit appeal would affect their participation positively or negatively. However, based on our initial pilot study of the Land Rover case, we argue to have shown that there is a clear and present participatory potential in corporate vision videos when they are distributed through web 2.0 technologies.
Design and modelling of MSW (RDF) gasifier Increase in population and technological advancement leads to higher generation of Municipal Solid Waste (MSW). The current method of disposing of solid waste is the incineration process, which operates at high capital, operation, and maintenance costs and emits toxic gases. This paper discusses the process design to model an MSW gasifier that minimizes the heating requirement compared with the conventional incineration process for waste disposal. The modelling aims to gasify the MSW to produce heating-value syngas in a 10 kW downdraft gasifier. The gasifier is configured with the equivalence ratio, gasification agent, feedstock flexibility, thermo-chemical process (auto-gasification) control, low tar formation, and other parameters. The designed model yields the biomass feed rate, gas flow rate, gas heating value, energy efficiency, and specific gasification rate. An impressive option for reducing landfill disposal based on the above gasifier model is feasible. Introduction Due to the high accumulation of solid waste in domestic places, together with the aggravation of environmental problems caused by an increased number of landfill sites, there is an increased interest in the research and development of techniques used in solid waste management. These processes represent a solution for a sustainable, environmentally friendly future, taking into consideration the interest in reducing greenhouse gas emissions as well as air and soil pollution [1]. According to [2], landfill sites represent one of the main factors in groundwater contamination: more than 90% of the Municipal Solid Waste (MSW) generated in India is directly dumped on land in an unsatisfactory manner. According to the 2012-13 Central Pollution Control Board (CPCB) Report, only 19 percent of the total waste generated is treated.
The untapped waste has a potential of generating 439 MW of power from 32,890 tonnes per day of combustible waste, including Refuse Derived Fuel (RDF). In order to sustain this energy, optimization of the utilization of the Municipal Solid Waste is mandatory until it reaches a level close to that of waste-to-value processing. Nowadays, the most expensive waste-processing technology is incineration, whose main cost factors are capital and operating costs. As presented by [3], an increase of waste-to-value is anticipated by 2025, reaching a sustainable zero-waste level of processing, which will determine a significant decrease in the environmental effects of the MSW and help reduce the carbon footprint. Composition of municipal solid waste. The municipal solid waste consists of combustible substances, non-combustible substances, and material with high moisture. The combustible fraction comprises up to 80% plastic and paper, while the remaining 20% represents wood, organic, and textile waste. Due to the high organic content of these combustible substances, they could be a promising feedstock as refuse-derived fuel (RDF) for further processing into gaseous fuel. The conversion procedure of municipal solid waste into refuse-derived fuel starts from collection of waste, followed by pretreatment of the mixed composting with spraying of chemicals and enzymes. Next, the mixed composting is dried under the hot sun. Bulk items are separated manually, followed by screening of the mixture according to the desired mesh size. After the mixture is separated, it undergoes further mechanical size reduction, followed by magnetic and air separation to remove metals and light materials. Finally, refuse-derived fuel is produced in the form of bricks, fluff, and pellets [11].
Municipal Solid Waste (MSW) [4] is being used in waste-to-energy plants and as a fuel substitute in different industrial processes. In particular, Refuse Derived Fuel (RDF) selected fractions from MSW have distinct possibilities for future waste-to-energy technologies. This paper aims at conducting a feasibility study of energy recovery of RDF from the MSW generated. A downdraft gasifier is selected because the tar content produced is low, as low as 0.04 g/Nm³, while the gas produced is relatively abundant and clean [5]. Materials In this article, an MSW gasifier based on the reviewed parameters of the refuse-derived fuel (RDF) was designed in order to emphasize the combustible components of the waste material and to evaluate the lower heating value for the specific gasification rate. In this design model, the parameters of a 10 kW downdraft gasifier are computed, and details regarding the method of processing are studied through the process design model. Refuse Derived Fuel (RDF) The potential RDF of the MSW shows that it is a suitable fuel source for energy production and power generation. The combustibles in the RDF resource (i.e., plastics, paper, textile, mixed organic waste, wood, and rubber) are considered in an evaluation of energy recovery and of the benefits in terms of heating value and homogeneity obtained. Table 1 [4] shows the values of the proximate and ultimate analyses of the RDF material for moisture, ash content, and volatile matter (wt%). Methodology The steps used to derive the expressions of the design-to-model process are presented in Figure 1. First, the assumptions of the design process were considered, to obtain the moisture content (MC), volatile matter (VM), and ash content from the proximate analysis and the weight percentages (%) of carbon (C), hydrogen (H), sulfur (S), and nitrogen (N) from the ultimate analysis. In the thermochemical calculations, Stage (I), the gasifier efficiency is calculated from the lower heating values of the biomass and product gas of the RDF fuel.
Then, in Stage (II), the mass flow rate is obtained, and in Stage (III) the volume of the gasifier is obtained. The height and diameter of the gasifier are calculated in Stage (IV) using the specific gasification rate. The MSW gasifier is a design model that makes it possible to calculate the diameter and height of thermo-chemical gasification systems, in order to evaluate the optimized size of the gasifier; all quantities from the lower heating value to the specific gasification rate are computed. The reviewed parameters are used as initial and input data for the stoichiometric thermochemical expressions of the process model. After the calculation of the lower heating value, the gas yield and energy efficiency are calculated; with the help of all the thermo-chemical equations, the specific gasification rate is derived [6]. 3.2. Method of data processing The following simplified formulas are used for the gasification process model. First [7], the lower heating value, gas yield, and energy efficiency are calculated; in the next phase [8], the power input, gas flow rate, volume of the gasifier, and specific gasification rate are calculated. Power input = power output / gasification efficiency (kW). Gas flow rate = (power input / LHVgas) × 3600 (m³/h). Biomass flow rate = gas flow rate / gas yield (kg/h). Air flow rate = stoichiometric air-fuel ratio × equivalence ratio × biomass flow rate (kg/h). Results and discussions The primary purpose of this research methodology is to address the gasifier dimensions based on parameters from the theoretical model of the feasibility of gasifying RDF in the downdraft gasifier. The calculation of the proximate analysis (wt%) characteristics of the sample was conducted according to ASTM standard D1542 [4] and encompassed the moisture content (MC), volatile matter (VM), and ash content. The results of the elemental analysis reported in the table [4] were used in the calculations as the percentages (%) of carbon (C), hydrogen (H), sulfur (S), and nitrogen (N).
In the thermochemical calculations for the design, Stage (I) obtains the gasification efficiency of the RDF fuel: equation (4) is used to determine LHVbiomass of the RDF = 32,548 kJ/kg, with LHVgas = 5.87 MJ/Nm³ and a product gas yield rate of 4.05 Nm³/kg from [11] using GAS 3100. The gasifier efficiency using equation (1), from the above values, is η = 73.04%. In Stage (II), the mass flow rate is obtained using equation (8) from the power output, and the gas flow rate for the 10 kW gasifier is 10 kg/h. Then, in Stage (III), using equation (11), the volume of the gasifier is obtained as 0.85 m³. The diameter and height of the gasifier (Figure 2) are calculated using the specific gasification rate and the volume of the gasifier. Conclusion The design of the MSW gasifier was carried out based on the required thermal power output, to maximize energy efficiency and minimize the environmental impacts. First, the heating values of the RDF waste components were calculated, and then the gasifier volume was determined by finding the input and output flow details. The dimensions of the gasifier determined by the specific gasification rate were a reactor diameter of 0.5 m and a length of 1.7 m.
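The Stage (I) and (II) relations above can be cross-checked numerically with the reported values (LHVbiomass = 32,548 kJ/kg, LHVgas = 5.87 MJ/Nm³, gas yield = 4.05 Nm³/kg). A minimal sketch, assuming the efficiency is the cold-gas efficiency η = (LHVgas × gas yield) / LHVbiomass, and with illustrative (not source-given) air-fuel ratio and equivalence ratio values:

```python
# Cross-check of the gasifier design relations using the values reported in
# the text. The cold-gas efficiency formula and the air-fuel parameters
# below are assumptions for illustration, not taken from the paper.

LHV_biomass = 32.548   # MJ/kg  (32,548 kJ/kg, from the text)
LHV_gas = 5.87         # MJ/Nm^3 (from the text)
gas_yield = 4.05       # Nm^3 per kg of biomass (from the text)
power_output = 10.0    # kW (design target)

# Stage (I): cold-gas (gasification) efficiency
eta = (LHV_gas * gas_yield) / LHV_biomass     # ~0.7304, i.e. 73.04%

# Stage (II): power input and flow rates from the listed formulas
power_input = power_output / eta                         # kW = kJ/s
gas_flow_rate = power_input / (LHV_gas * 1000) * 3600    # m^3/h
biomass_flow_rate = gas_flow_rate / gas_yield            # kg/h

# Air flow rate, with assumed (illustrative) stoichiometry
stoich_af_ratio = 6.0    # kg air / kg fuel (assumed, not from the text)
equivalence_ratio = 0.3  # typical downdraft value (assumed)
air_flow_rate = stoich_af_ratio * equivalence_ratio * biomass_flow_rate

print(f"efficiency    = {eta * 100:.2f} %")
print(f"power input   = {power_input:.2f} kW")
print(f"gas flow rate = {gas_flow_rate:.2f} m^3/h")
print(f"biomass flow  = {biomass_flow_rate:.2f} kg/h")
print(f"air flow rate = {air_flow_rate:.2f} kg/h")
```

The computed efficiency reproduces the reported η = 73.04%; the flow rates then follow directly from the same chain of relations.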
A Turn-On Fluorescent Sensor for Glutathione Based on Bovine Serum Albumin-Stabilized Gold Nanoclusters A fluorescence sensor for the detection of glutathione based on the fluorescence recovery of bovine serum albumin-stabilized gold nanoclusters is reported. This study indicates that glutathione restores the copper-ion-quenched fluorescence by coordinating, through its sulfydryl, the bound copper ion in the bovine serum albumin molecule used for stabilizing the gold nanocluster. Under the experimental conditions, the fluorescence response showed a linear relationship with the concentration of glutathione over the range from 10 µM to 400 µM. The fluorescence sensor successfully detected glutathione in commercial drug products. Introduction Development of glutathione (GSH) assay methods has received attention due to its diverse functions in organisms and extensive market prospects. GSH, an important nonenzymatic antioxidant, is found in almost all cell types. GSH protects cells from damage by reactive oxygen species such as the hydroxyl radical, hydrogen peroxide, and lipid peroxides, directly by eliminating free radicals and indirectly by serving as a cofactor for glutathione peroxidase [1,2]. GSH also participates in other physiological processes such as control of cell proliferation and nucleotide metabolism [3,4]. Based on its essential role in the health of organisms, GSH is used clinically to treat diseases such as liver disease and uremia and to reduce the side effects associated with chemoradiotherapy. Many analytical methods, such as high performance liquid chromatography, capillary electrophoresis, fluorophotometry, and electrochemistry, have been developed for the detection of GSH [5][6][7][8]. Among these methods, fluorophotometry has advantages over the other techniques in sensitivity, simplicity, and cost.
In recent years, fluorescent probes for the detection of GSH have been designed and investigated to overcome the disadvantages of traditional fluorometric assays [9][10][11][12]. Although these fluorescent probes successfully detected GSH in various samples, including aqueous solutions, human serum, bovine serum albumin (BSA), and liposomes, they suffered from complicated and tedious synthesis procedures. Bovine serum albumin-protected fluorescent gold nanoclusters (AuNCs-BSA), reported by Xie et al., have given rise to research interest in sensing applications owing to the advantages of facile preparation, high fluorescence quantum yield (∼6%), favorable photostability, and good biocompatibility [13]. Xie's research group developed a simple label-free method for the selective and sensitive detection of Hg 2+ based on fluorescence quenching of AuNCs-BSA triggered by Hg 2+ -Au + interactions [14]. Liu et al. reported an AuNCs-BSA-based fluorescent sensor for the recognition and determination of cyanide in aqueous solution, based on the fluorescence quenching of AuNCs-BSA induced by the Elsner reaction between cyanide and the gold atoms of AuNCs-BSA [15]. Durgads et al. demonstrated that AuNCs-BSA can be used as a selective fluorescence "turn-off" sensor for Cu 2+ in live cells, based on fluorescence quenching of AuNCs-BSA resulting from intersystem crossing of the excited electron from the gold cluster stimulated by the bound Cu 2+ in the BSA molecule [16]. Their paper also showed that the copper-ion-quenched emission was reversible with the copper chelator glycine. A previous study demonstrated that the fluorescence of GSH-capped gold nanoparticles was quenched by Cu 2+ due to complexation between Cu 2+ and GSH [17]. Thus, we assumed that GSH might be able to restore the copper-ion-quenched fluorescence of AuNCs-BSA by coordinating Cu 2+ .
GSH was found to be much more effective than glycine at restoring the fluorescence quenched by copper ions in our study. Thus, we have developed a fluorescence "turn-on" sensor for GSH based on the AuNCs-BSA-Cu system. Detection of GSH. For fluorescent detection of GSH, varying volumes of 10 mM GSH solutions were mixed with the AuNCs-BSA solution containing Cu 2+ , which was prepared by adding 30 µL of 10 mM Cu 2+ solution to 250 µL of AuNCs-BSA solution, and the mixtures were diluted to 5 mL with HEPES buffer (pH = 7.2). Fluorescence emission spectra of the as-prepared solutions were measured under 480 nm excitation. Sample Preparation. A bottle of reduced glutathione powder for injection was dissolved and diluted to 100 mL with ultrapure water. After four reduced glutathione tablets were ground, the powder was dissolved in ultrapure water and filtered. The filtrate was finally diluted to 100 mL with ultrapure water. The fluorescence quenching of AuNCs-BSA in the presence of Cu 2+ was attributed to the binding of Cu 2+ onto the BSA used for stabilizing the gold nanocluster, which enabled the paramagnetic Cu 2+ to prompt intersystem crossing of the excited electron from the gold cluster and consequently decreased the fluorescence intensity [16]. A control experiment showed that GSH had no influence on the fluorescence spectrum of AuNCs-BSA in the absence of Cu 2+ , indicating that the fluorescence recovery induced by adding GSH to the AuNCs-BSA-Cu system resulted from the interaction between GSH and Cu 2+ . GSH, a natural tripeptide consisting of glutamate, cysteine, and glycine, contains various coordinating functional groups such as carboxyl, amido, sulfydryl, and acylamino, which enables its molecules to form complexes with metal ions. GSH was replaced by glutamic acid, cysteine, and glycine, respectively, to observe the change in fluorescence properties of the AuNCs-BSA-Cu system and identify the binding site on GSH for Cu 2+ .
It is apparent in Figure 1 that the fluorescence intensity restored by cysteine was close to that restored by GSH at the same concentration, and much stronger than that restored by glycine or glutamic acid. Considering that Cu 2+ is characterized by a strong affinity for SH residues and that, among the three amino acids constituting GSH, only cysteine has a sulfydryl, we speculate that GSH recovers the copper-quenched fluorescence of AuNCs-BSA by coordinating, through its sulfydryl, the bound Cu 2+ in the BSA molecule used for stabilizing the gold nanocluster. Optimization of Conditions for GSH Sensing. Concentration-dependent effects of AuNCs-BSA and Cu 2+ on the detection of GSH were investigated. High concentrations of Cu 2+ were required for high fluorescence quenching efficiency at high concentrations of AuNCs-BSA, which means low detection sensitivity for GSH. On the other hand, too low a concentration of Cu 2+ would increase the background fluorescence and narrow the achievable quantitative range of GSH due to low fluorescence quenching ability. In a solution with a total volume of 5 mL, 250 µL of AuNCs-BSA and 60 µM Cu 2+ were finally selected for GSH sensing. The effect of pH on the sensing system was studied over a pH range from 6 to 11. When the pH value increased over the tested range, only a slight change in the fluorescence intensity of AuNCs-BSA was observed, whereas the fluorescence intensity of the AuNCs-BSA in the presence of Cu 2+ increased, indicating that the fluorescence quenching efficiency of Cu 2+ decreased with increasing pH. It was also observed that the fluorescence recovering efficiency of GSH changed with the pH value. The fluorescence quenching and recovering efficiencies are represented by F 0 /F 1 and F 2 /F 1 , respectively, where F 0 and F 1 correspond to the fluorescence intensity of the AuNCs-BSA in the absence and presence of Cu 2+ , respectively.
F 2 represents the fluorescence intensity of the AuNCs-BSA in the presence of Cu 2+ and GSH. As shown in Figure 2, the fluorescence recovering efficiency of GSH is stable and maximal at physiological pH. The HEPES buffer solution was therefore employed to adjust the pH of the solutions used in the measurements to 7.2. Time-dependent fluorescence signals of the sensing system were observed. The change in fluorescence properties of AuNCs-BSA in the absence and presence of Cu 2+ was not obvious within 30 minutes. However, the fluorescence intensity of the AuNCs-BSA in the presence of Cu 2+ and GSH slowly decreased with time, and thus the fluorescence recovering efficiency decreased with time (Figure 3). Therefore, the fluorescence of the sensing system should be measured immediately upon adding GSH to the solution of AuNCs-BSA in the presence of Cu 2+ . Selectivity and Sensitivity for GSH Sensing. Although the presence of Pb 2+ , Co 2+ , or Ni 2+ at the same concentration as Cu 2+ (60 µM) showed a quenching effect on the fluorescence of the AuNCs-BSA, their quenching efficiencies were much lower than that of Cu 2+ (Figure 4). The degree of interference of other metal ions, including K + , Ca 2+ , Mg 2+ , Zn 2+ , Cd 2+ , Mn 2+ , and Fe 3+ , with the detection of GSH was further investigated. On the basis of a relative error range from -5% to 5% in detecting 50 µM GSH, the tolerance concentrations were as follows: 1 mM for K + , Ca 2+ , and Mg 2+ ; 500 µM for Zn 2+ , Mn 2+ , and Cd 2+ ; and 100 µM for Fe 3+ . Some amino acids were also used to evaluate the selectivity of the sensing system. As shown in Figure 5, only cysteine could produce significant fluorescence recovery of the AuNCs-BSA, whereas no obvious changes in the quenched fluorescence were observed in the presence of other amino acids such as glycine, lysine, proline, glutamic acid, tryptophan, and phenylalanine at the same concentration as GSH (50 µM).
Under the optimum detection conditions, the relationship between the fluorescence recovering efficiency (F 2 /F 1 ) and the concentration of GSH over the range from 10 µM to 400 µM could be expressed by a linear equation (R 2 = 0.996), F 2 /F 1 = 0.0063C GSH + 1.09 (Figure 6). (Figure 5 caption: Selectivity of the sensor for GSH over amino acids; F 2 is the fluorescence intensity of the AuNCs-BSA in the presence of Cu 2+ and GSH or amino acids.) The limit of detection for GSH was calculated to be 1.2 µM. Application. Commercial reduced glutathione tablets and reduced glutathione powder for injection were employed as practical samples to evaluate the applicability of the GSH sensor developed here. The recovery and relative standard deviation obtained with a standard addition method through five parallel tests are presented in Table 1. Conclusions We found that GSH effectively restored the copper-quenched fluorescence of the AuNCs-BSA and have therefore developed a new fluorescence "turn-on" sensor for GSH detection. The sensor shows advantages such as fast and sensitive response to GSH, simplicity in preparation and usage, and environmental friendliness. The recovery and precision obtained from commercial GSH drug products indicate the potential application of the GSH sensor. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
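The linear calibration reported above (F2/F1 = 0.0063·C_GSH + 1.09, valid from 10 µM to 400 µM) can be inverted to turn a measured fluorescence-recovery ratio into a GSH concentration. A minimal sketch; the function name and the example ratio are illustrative, not from the paper:

```python
# Invert the reported calibration line F2/F1 = 0.0063 * C_GSH + 1.09
# to estimate a GSH concentration (µM) from a measured recovery ratio.
# Estimates outside the validated 10-400 µM linear range are rejected.

SLOPE = 0.0063               # per µM, from the reported fit (R^2 = 0.996)
INTERCEPT = 1.09
LINEAR_RANGE = (10.0, 400.0)  # µM

def gsh_concentration(recovery_ratio: float) -> float:
    """Estimate the GSH concentration (µM) from the F2/F1 ratio."""
    conc = (recovery_ratio - INTERCEPT) / SLOPE
    lo, hi = LINEAR_RANGE
    if not (lo <= conc <= hi):
        raise ValueError(f"{conc:.1f} µM is outside the linear range {LINEAR_RANGE}")
    return conc

# Illustrative example: a measured ratio of 1.405 maps to 50 µM GSH
print(f"{gsh_concentration(1.405):.1f} µM")
```

Quoting the detection limit alongside the estimate (LOD = 1.2 µM as reported) tells the reader when a low estimate should be treated as "not detected" rather than as a quantitative value.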
Safety and Efficacy of Buprenorphine-Naloxone in Pregnancy: A Systematic Review of the Literature The prevalence of opioid use among pregnant people has been increasing over the past few decades, with a parallel increase in the rate of neonatal abstinence syndrome. Opioid agonist treatment (OAT) including methadone and buprenorphine is the recommended management method for opioid use disorders during pregnancy. Methadone has been extensively studied during pregnancy; however, buprenorphine was introduced in the early 2000s with limited data on the use of different preparations during pregnancy. Buprenorphine-naloxone has been incorporated into routine practice; however, only a few studies have investigated the use of this medication during pregnancy. To determine the safety and efficacy of this medication, we conducted a systematic review of maternal and neonatal outcomes among buprenorphine-naloxone-exposed pregnancies. The primary outcomes of interest were birth parameters, congenital anomalies, and severity of neonatal abstinence syndrome. Secondary maternal outcomes included the OAT dose and substance use at delivery. Seven studies met the inclusion criteria. Buprenorphine-naloxone doses ranged between 8 and 20 mg, and there was an associated reduction of opioid use during pregnancy. There were no significant differences in gestational age at delivery, birth parameters, or prevalence of congenital anomalies between buprenorphine-naloxone-exposed neonates and those exposed to methadone, buprenorphine monotherapy, illicit opioids, or no opioids. In studies comparing buprenorphine-naloxone to methadone, there were reduced rates of neonatal abstinence syndrome requiring pharmacotherapy. These studies demonstrate that buprenorphine-naloxone is a safe and effective opioid agonist treatment for pregnant people with OUD. Further large-scale, prospective data collection is required to confirm these findings. 
Patients and clinicians may be reassured about the use of buprenorphine-naloxone during pregnancy. Introduction Untreated opioid use disorder (OUD) in pregnancy is associated with significant maternal, fetal, and neonatal risks including fetal growth restriction, preterm labor, and increased perinatal morbidity and mortality [1,2]. Data from the 2020 National Survey on Drug Use and Health indicated that 8.3% of pregnant women in the United States had used illicit drugs in the past month, with 0.4% reporting opioid misuse [3]. The national rate of maternal opioid-related diagnoses in the United States increased from 3.5 in 1000 delivery hospitalizations in 2010 to 8.2 per 1000 in 2017 [4]. Concomitantly, the rates of neonatal abstinence syndrome (NAS) almost doubled in the United States from 4 in 1000 birth hospitalizations in 2010 to 7.3 per 1000 in 2017 [4]. Opioid agonist therapy (OAT) is the recommended treatment for OUD in pregnancy with the proven benefits of decreasing maternal illicit opioid use and improving maternal and neonatal health outcomes [1,2]. Methadone maintenance treatment has traditionally been considered the standard of care for OUD in pregnancy [2,[5][6][7]. However, in 2010, the first randomized controlled trial of buprenorphine in comparison to methadone in pregnancy demonstrated that buprenorphine was an acceptable alternative with comparable safety and efficacy to methadone [5]. Buprenorphine was also shown to decrease the severity of NAS in comparison to methadone, findings consistent with a larger body of non-randomized studies [5,6]. Buprenorphine is routinely available as a combination product with naloxone, which is intended to act as a deterrent to injection use, due to the risk of precipitated withdrawal. When buprenorphine/naloxone is taken sublingually, naloxone has minimal bioavailability and does not cause any antagonist effect [6,7]. 
Historically, due to inadequate safety data about the effects of naloxone in pregnancy, buprenorphine monotherapy was recommended instead of the combination product for pregnant people [1,2,[7][8][9][10][11]. The need for further research was recommended to establish the safety of buprenorphine/naloxone during pregnancy. More recently, there has been a notable change in the Health Canada approved product monograph for buprenorphine/naloxone (brand name Suboxone ® ) eliminating pregnancy as a contraindication to its use [2,8]. The goal of this study was to conduct a systematic review of the literature relating to maternal and neonatal safety and efficacy of buprenorphine-naloxone in pregnancy. These findings will serve to update clinical practice guidelines and will impact clinical decision making related to the management of OUD during pregnancy. Data Sources and Study Selection The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (2009) was followed for this systematic review. A comprehensive search strategy was developed in collaboration with an Information Specialist at the University of Toronto. Medline, Embase, and Cochrane Library databases were searched from 1990 until October 2020. Keywords included buprenorphine, naloxone, and pregnancy. Manual reviews of references lists were also performed to ensure that no relevant studies were omitted. The results of this search were first screened for duplicates, and then both authors screened the remaining titles and abstracts for eligibility criteria prior to full-text retrieval. Where decisions were unable to be made from the title and abstract alone, the full paper was retrieved. Disagreements about eligibility were resolved by consensus. 
Articles were included if they met the following criteria: (a) the study included only pregnant people with a history of opioid use or opioid use disorder, (b) buprenorphine-naloxone was used at some point during pregnancy, and (c) primary or secondary outcomes of interest were reported. Only randomized controlled trials and observational cohort or case-control studies published in peer-reviewed publications were eligible for inclusion. We excluded expert opinions, editorials, review articles, and guidelines. Articles were also excluded if they were not in the English language. The primary outcomes consisted of gestational age at delivery, birth parameters (birth weight, length, and head circumference), congenital anomalies, and neonatal abstinence syndrome (NAS). Specific NAS measures included neonatal intensive care unit (NICU) admission, prevalence of NAS pharmacotherapy, and duration of hospital stay. The secondary outcomes related to maternal OAT dose and substance use at delivery. Data Extraction and Analysis A data extraction spreadsheet was developed and piloted by both authors to ensure inter-rater reliability. Both authors independently extracted data relating to study characteristics, demographics, and outcomes of interest for eligible studies. In cases of disagreement, the full text article was reviewed, and consensus was achieved based on further discussion. Variability in study design and measured outcomes did not allow for meta-analysis of data. Reporting of Study Risk of Bias Assessment The Risk of Bias of Non-randomized Studies of Interventions (ROBINS-I) tool was used to assess the risk of bias of the included studies. The ROBINS-I tool comprises seven domains of potential bias, and each domain was assessed as having a low, moderate, serious, or critical risk of bias. Results The literature search identified 168 unique articles, of which 12 full text articles were retrieved for further screening (Figure 1).
Seven studies met the inclusion criteria for this systematic review. Study Characteristics All included studies were retrospective observational studies involving a total of 302 mother-infant dyads exposed to buprenorphine-naloxone (Table 1). Two studies were performed in outpatient treatment programs in Canada, and the other five studies were conducted in the United States [12][13][14][15][16][17][18]. The two Canadian studies compared buprenorphine-naloxone exposure during pregnancy to illicit opioid use or no opioid exposure during pregnancy [17,18]. These two studies originated from the same Northwestern community in Ontario, Canada, and may have included data on the same population of patients, with Jumah et al. extending their study for an additional 6 months in 2015. However, the sample sizes for the buprenorphine-exposed population were significantly different. Dooley et al. reported on 30 buprenorphine-exposed pregnancies, whereas Jumah et al. included 62 buprenorphine-exposed pregnancies. Two studies from the US reported outcomes of single-cohort studies with no comparison group [12,13]. The other three studies compared buprenorphine-naloxone to buprenorphine monotherapy (n = 1) or methadone (n = 2) [14][15][16].
The majority of studies included participants with any buprenorphine-naloxone exposure in pregnancy, while two studies included only those stabilized on buprenorphine-naloxone at the time of delivery [12][13][14][15][16][17][18]. One study excluded patients who switched OAT, including to or from methadone, during pregnancy [14]. Maternal demographics were not uniformly reported across studies (Table 1).
Participants had a mean age of 26 to 27 years and were predominantly white, with the exception of one study in which the majority were Indigenous [12-18]. Most had some high school education, were predominantly single, and had at least one previous birth [12-16,18]. Studies also reported high rates of concurrent use of tobacco (58-89%), alcohol (~20%), and cannabis (10-61%) among women taking buprenorphine-naloxone [14-18]. Significant demographic differences reported by these studies included higher gravidity and parity among the buprenorphine-naloxone group compared with non-exposed individuals in the comparison groups [17,18].

Maternal Outcomes

The results from the included studies indicated that buprenorphine-naloxone was effective at reducing opioid use by delivery among women with opioid use disorders [12-18] (Table 3). Substance use at delivery was measured by urine drug screening (UDS) in five studies and by self-report confirmed by UDS in one study [12-17]. According to these measures, the rates of substance use at delivery ranged widely from 0% to 55% (Table 4) [12-17]. Specifically, women prescribed buprenorphine-naloxone reported lower rates of illicit opioid use compared to those not using any opioid agonist medication [17,18]. There were conflicting findings about substance use at delivery when buprenorphine-naloxone was compared to methadone use during pregnancy. One study found that women prescribed buprenorphine-naloxone had higher rates of substance use at delivery than those on methadone maintenance treatment, whereas another study did not show any differences in urine toxicology positivity rates between the two groups [15,16].

Table 3. Maternal outcomes at delivery.

Risk of Bias in Included Studies

A summary of the risk of bias in each domain for the included studies is presented in Table 4.
Three of the included studies showed a low overall risk of bias based on these domains [12,13,16]. The other four studies were judged to be at low or moderate risk of bias for all domains. The studies by Dooley et al. and Jumah et al. were classified as being at higher risk of confounding and classification bias because their opioid-exposed comparison group consisted of both women using other forms of OAT and women using illicit opioids [17,18]. Gawronski et al. was also deemed to be at higher risk for confounding and classification of interventions due to the lower compliance rate with buprenorphine-naloxone compared to methadone [15]. Mullins et al. was deemed to be at higher risk of selection bias since the choice of medication was at the discretion of the prescribing physician [14].

Discussion

Among these heterogeneous studies, the demographic and substance use characteristics of the women included in these cohorts are typical of those presenting for OAT in pregnancy: women in their late 20s, mostly single, and most with a high school education [7,10]. There were no reports of adverse effects in buprenorphine-naloxone-exposed pregnancies compared to those exposed to buprenorphine monotherapy, methadone, illicit opioids, or no opioids. Birth weight, length, and head circumference, as well as gestational age at delivery, were not significantly different among neonates exposed to buprenorphine-naloxone [12-18]. In addition, rates of congenital anomalies in buprenorphine-naloxone-exposed neonates were comparable to expected rates in the general population [19,20]. The findings of significantly lower rates of NAS requiring pharmacotherapy and shorter duration of hospital stay in buprenorphine-naloxone groups are consistent with existing evidence of reduced severity of NAS in neonates exposed to buprenorphine compared to those exposed to methadone [5,6].
The wide range of rates of pharmacotherapy for the management of NAS in buprenorphine-naloxone-exposed neonates may be explained by differences between studies in NAS assessment and management, including the threshold for initiating pharmacotherapy for NAS, rooming-in policies, and levels of antenatal opioid exposure. The practice of rooming-in has been shown to decrease the need for pharmacotherapy for NAS; however, only one study explicitly stated whether a rooming-in policy was in place [18,21]. Studies that reported low rates of NAS pharmacotherapy promoted low-dose OAT protocols and opioid tapering prior to delivery [17,18]. In both of these studies, the extremely low rates of NAS likely reflect the neonates' minimal exposure to opioids prior to delivery, as opposed to any characteristic of buprenorphine-naloxone. However, maintenance treatment with OAT is recommended over medical detoxification or rapid tapering off of OAT due to adverse outcomes, such as high rates of relapse or return to use and maternal overdose [1,7].

This review also found that buprenorphine-naloxone was effective in reducing illicit opioid use, as demonstrated by lower rates of substance use at delivery [17,18]. In one study comparing buprenorphine-naloxone to methadone, buprenorphine-naloxone was associated with a higher positive UDS rate at delivery, which is likely attributable to reduced adherence to buprenorphine-naloxone (86%) dosing compared to methadone (99%) [15]. While early studies showed buprenorphine to be less efficacious than methadone, subsequent studies have consistently found the efficacy to be equivalent when rapid induction and sufficient dosage are used [22]. This is in keeping with the other included study comparing buprenorphine-naloxone to methadone, which found no statistically significant difference in rates of substance use at delivery between the two groups [16].
Our results related to the use of buprenorphine-naloxone during pregnancy are similar to those from another recent publication [23]. Link et al. conducted a systematic review and meta-analysis that included only five studies of 291 buprenorphine-naloxone-exposed pregnancies compared to other opioid exposures, mainly methadone and buprenorphine. The articles meeting inclusion criteria varied from those in our systematic review: Link et al. excluded any studies without opioid-exposed comparison group(s) and studies where they could not determine OAT use. Their selection process facilitated the ability to conduct meta-analyses for neonatal birth and NAS-related outcomes. Since the goal of the review by Link et al. was primarily related to neonatal outcomes, maternal demographics and maternal outcomes, such as substance use in addition to OAT and maternal OAT dose at delivery, were not adequately addressed. These maternal parameters are important details when determining the applicability of findings to a particular patient population. Similar to our conclusions, Link et al. suggested that buprenorphine-naloxone use during pregnancy resulted in similar pregnancy outcomes compared to women on other forms of OAT based on their included studies. No serious adverse maternal or neonatal outcomes were associated with the use of buprenorphine-naloxone during pregnancy. The only significant finding based on their meta-analysis was that neonates exposed to buprenorphine were less likely to require treatment for NAS compared to methadone-exposed neonates. The authors also acknowledged the limitations in terms of the number and quality of the studies regarding the use of buprenorphine-naloxone in pregnancy.

Limitations

This systematic review is limited by a small overall population of ~300 buprenorphine-naloxone-exposed dyads with minimal racial diversity.
The studies were heterogeneous in terms of timing and duration of buprenorphine-naloxone exposure in pregnancy, reported outcomes, and NAS protocols. Furthermore, all studies consisted of retrospective cohorts focused on short-term outcomes, with no longitudinal or developmental data available. The lack of prospective research, including randomization into exposure groups, is another limitation of the current data. The most significant concern identified was the lack of control for confounding variables in study analyses; these variables included higher rates of smoking in buprenorphine-naloxone groups [14-18]. Such factors would be expected to confound results by increasing adverse outcomes in buprenorphine-naloxone groups; however, some studies did attempt to control for polysubstance use as a confounding variable in their analyses, and poorer outcomes were not seen in those results.

Conclusions

In this systematic review of the available literature, the results from the included studies consistently showed no evidence of maternal or neonatal safety concerns with the use of buprenorphine-naloxone in pregnancy. Buprenorphine-naloxone was reported to be associated with reduced substance use during pregnancy, as well as reduced severity of NAS when compared to methadone. Clinicians should counsel pregnant people about the benefits and risks of initiating or continuing buprenorphine-naloxone as an alternative for the management of opioid use disorders during pregnancy.

Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Phosphorothioate DNA Mediated Sequence-Insensitive Etching and Ripening of Silver Nanoparticles

Many DNA-functionalized nanomaterials and biosensors have been reported, but most have ignored the influence of DNA on the stability of nanoparticles. We observed that cytosine-rich DNA oligonucleotides can etch silver nanoparticles (AgNPs). In this work, we show that phosphorothioate (PS)-modified DNA (PS-DNA) can etch AgNPs independently of DNA sequence, suggesting that the thio-modifications play the major role in etching. Compared to unmodified DNA (e.g., poly-cytosine DNA), the required PS-DNA concentration decreases sharply and the reaction rate increases. Furthermore, etching by PS-DNA is largely independent of pH, which also differs from unmodified DNA. PS-DNA mediated etching could be controlled by varying DNA length and conformation, and the number and location of PS modifications. With the higher activity of PS-DNA, etching, ripening, and further etching took place sequentially. The etching ability is inhibited by forming duplex DNA, and thus etching can be used to measure the concentration of complementary DNA.

INTRODUCTION

Interfacing DNA with nanomaterials has resulted in many interesting hybrids for analytical (Liu and Lu, 2006; Liu et al., 2009; Zhou et al., 2017), nanotechnology (Wilner and Willner, 2012; Pu et al., 2014; Tan et al., 2014; Seeman and Sleiman, 2017; Shen C. et al., 2017; Chidchob and Sleiman, 2018; Hu et al., 2018), and biomedical applications (Qu et al., 2000; Cao et al., 2002; Liu et al., 2015; Lu et al., 2017; Sun et al., 2018). Such applied research has in turn stimulated fundamental surface and biointerface studies (Herne and Tarlov, 1997; Storhoff et al., 2002; Liu, 2012; Carnerero et al., 2017).
Most previous research focused on DNA-directed assembly (Mirkin et al., 1996; Liu and Lu, 2003; Sharma et al., 2009; Chou et al., 2014; Liu and Liu, 2017; Lin et al., 2018) or DNA-templated growth of nanomaterials (Nykypanchuk et al., 2008; Surwade et al., 2013; Wu et al., 2014; Song et al., 2015), while etching or dissolution of nanoparticles by DNA was much less explored. We reason that such studies are also important for the following reasons. First, nanoparticles have always been assumed to be stable during DNA conjugation or assembly; if DNA can dissolve nanoparticles, such assumptions need to be updated, and care has to be taken for long-term storage of such materials. In addition, DNA-mediated etching of nanoparticles can be a way of controlled release. Finally, it can further our fundamental understanding of DNA/nanoparticle interfaces.

Using a relatively high concentration of DNA (e.g., >1 µM), we recently observed etching of silver nanoparticles (AgNPs) by DNA oligonucleotides (Hu et al., 2019). For spherical AgNPs, poly-cytosine (poly-C) was the most effective, while the other three types of homopolymers did not display an obvious effect. The base composition of DNA is thus critical for etching silver-based nanomaterials. Poly-C DNA can effectively etch AgNPs, but the required high DNA concentration and specific DNA sequence restricted its applications in analytical detection and controlled release.

So far, we have studied only unmodified DNA. We reason that the effect of DNA might be further improved by introducing modifications with stronger metal ligands. Phosphorothioate (PS) modification refers to replacing one of the non-bridging oxygen atoms by sulfur (Figure 1A) (Liu, 2014, 2015; Huang et al., 2015a,b; Liu et al., 2018). The PS sites on DNA can bind strongly to thiophilic metals (e.g., Au and Ag), and PS-modified DNA (PS-DNA) has been used for nanomaterial synthesis (Ma et al., 2008; Farlow et al., 2013; Weadick and Liu, 2015; Shen J.
et al., 2017), nanostructure assembly (Jiang et al., 2005; Lee et al., 2007; Pal et al., 2009; Shen J. et al., 2017), and biosensing (Zhang et al., 2009; Huang P. J. J. et al., 2016). We previously compared adsorption of PS-DNA with normal phosphodiester DNA (PO-DNA) on AuNPs and concluded that PS-DNA was more strongly adsorbed (Zhou et al., 2014). PS-DNA was also used to functionalize quantum dots (Ma et al., 2008; Farlow et al., 2013). In addition, PS modifications have been used to probe the reaction mechanism of ribozymes (Cunningham et al., 1998; Huang and Liu, 2014; Huang et al., 2015a, 2019; Thaplyal et al., 2015). All these studies took advantage of the strong affinity between PS and thiophilic metals. Since silver is also strongly thiophilic, we speculated that PS-DNA may be more effective for etching AgNPs in a less DNA sequence-dependent manner.

In this work, we systematically studied the effect of PS modifications and found that they could significantly decrease the needed DNA concentration. At the same time, the sequence of DNA was less important, making DNA-mediated etching available for many more sequences. The effects of pH, DNA length, the number and location of PS modifications, and DNA conformation were also systematically studied and compared with normal DNA of the same sequences, revealing an interesting multi-stage etching and ripening process and chemically controlled etching.

Instrumentation

UV-vis absorption spectra were recorded on a spectrometer (Agilent 8453A). The morphology of AgNPs was examined by transmission electron microscopy (TEM, Philips CM10). The etching kinetics of AgNPs were monitored using a microplate reader (SpectraMax M3). Dynamic light scattering (DLS) measurements were carried out using a Zetasizer Nano 90 (Malvern) at 25 °C. Circular dichroism (CD) spectra were collected on a Jasco J-715 spectrophotometer (Jasco, Japan).
Comparison of Etching by PO- and PS-DNA

In a typical experiment, a 15-mer DNA (20 µM, 35 µL) was incubated with an equal volume of AgNPs (10 µg/mL) at 37 °C for 1.5 h, giving a final DNA concentration of 10 µM. The sample was then analyzed by a spectrometer.

Effect of pH on Etching Kinetics

Typically, PO-C15, PS14-C15, or PS14-T15 (20 µM, 50 µL) was mixed with the AgNPs (10 µg/mL, 50 µL) in a 96-well plate. Then, 10 µL of 10 mM buffer with different pH values (citrate buffer for pH 4.0, 5.0, and 6.0; MOPS for pH 7.0 and 7.9) was added, and the plate was incubated at 37 °C for 1.5 h. The absorbance was monitored at 395 nm every 0.5 min in the kinetic mode of the plate reader.

PS-DNA Mediated Etching of AgNPs

To test the effect of PS modification (see Figure 1A for its structure), we used the four types of 15-mer DNA homopolymers, both with the normal phosphodiester (PO) backbone and with full PS modifications (each bridging phosphate contained a PS modification). Our 20 nm AgNPs had a strong surface plasmon peak at 395 nm (Figure 1C). Adding the normal PO-A15 DNA had no effect, and the UV-vis spectrum retained its original shape. In contrast, PS14-A15 (note that a 15-mer DNA has only 14 bridging phosphates) dropped the extinction peak intensity by over 80%. From this experiment, we concluded that the sulfur atoms in the PS-DNA were responsible for the decreased extinction of the AgNPs.

From TEM (Figure 2A and Figures S1, S2A), our starting AgNPs were monodisperse ~20 nm spheres. After adding PS14-A15, the AgNPs overall became smaller (Figures 2B,F), indicating etching. Similar experiments were performed with the other DNA sequences, and the same observations were made with the two T15 DNAs from both their UV-vis spectra (Figure 1D) and TEM (Figure 2C). The lack of etching by PO-A15 and PO-T15 is in agreement with the relatively low affinity between these two DNA bases and the silver surface (Basu et al., 2008; Wu et al., 2014).
When PO-G15 was added, the extinction intensity of the AgNPs dropped by about 20% (Figure 1E), while a nearly 80% drop was observed when PS14-G15 was added. At the same time, the peak red-shifted by 18 nm, suggesting the formation of larger AgNPs, which was confirmed by TEM (Figure 2D). Therefore, with this DNA concentration and reaction time, PS14-G15 promoted Ostwald ripening of the AgNPs. Etching was the first step of the interaction, where the AgNPs were dissolved by the added DNA. Extensive etching and deposition of dissolved silver species on larger AgNPs (which have lower solubility) resulted in the subsequent ripening.

The PO-C15 DNA was very effective in etching the AgNPs, decreasing the extinction by 65% (Figure 1F). PS14-C15 decreased the extinction peak further, by nearly 90%. Interestingly, under this condition, the ripening process caused only a 3 nm red shift, much smaller than that induced by PS14-G15 (18 nm). Meanwhile, the average size of the PS14-C15-treated AgNPs (Figure 2E) was smaller than that of the PS14-G15-treated sample (Figure 2D), indicating that the cytosine and guanine bases also played a role in etching.

With these four pairs of DNA, we plotted the peak intensity drop in Figure 1B. All the PS sequences dropped the intensity by a similar value (the red bars), while a much larger spread was observed with the normal PO-DNA (blue bars). Since all the PS-DNA sequences had a significant etching effect, PS-DNA can etch the AgNPs in a largely sequence-independent manner. This might be useful, since etching can then be generalized to different DNA sequences.

Comparison of PS and DNA Base Coordination

The above experiments indicated that PO-T15 is one of the least effective sequences for etching AgNPs. Therefore, the etching effect of PS14-T15 should mainly come from the PS sites.
PO-C15 is the best PO-DNA, with only the bases available for etching, while PS14-C15 is likely the most effective sequence overall, since both its PS sites and cytosine bases can contribute to silver binding. Since these three sequences are representative, they were chosen for further studies; their silver coordination sites are marked with black and red circles in Figure 1A.

We first studied the effect of DNA concentration. In each case, the peak intensity decreased with increasing DNA concentration (Figures 3A-C). At the same time, some red-shifted peaks were observed at high DNA concentrations. All these experiments were performed with an incubation time of 1.5 h, when the systems were approaching equilibrium (Figure 4). By plotting the decrease of peak height against DNA concentration, we obtained their apparent binding curves (Figure 3D). Among these DNAs, PO-C15 had the lowest response, with an apparent Kd of 3.87 µM. PS14-C15 bound more tightly, with a Kd of 1.78 µM. Interestingly, PS14-T15 had a Kd similar to that of PS14-C15, and thus from this standpoint the base's contribution was minimal: even at a relatively low DNA concentration, the base did not contribute much to etching. Adding PS modifications decreased the concentration requirement for C15 by 2.2-fold, while the improvement for T15 was close to infinity (compared to PO-T15).

We then plotted the shift of the peak wavelength (Figure 3E), where the upper half of the figure is for red-shifted samples and the lower half for blue shifts. With low concentrations of PO-C15, a gradual blue shift of the AgNP peak was observed, and the maximal shift was achieved with 1.5 µM DNA, indicating etching of the AgNPs to form small particles. Then, the peak started to red-shift, attributable to Ostwald ripening. When the DNA concentration was more than 7.5 µM (crossing the dashed line in Figure 3E), the peak red-shifted relative to the original AgNPs.
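As a rough illustration of how an apparent Kd like those above can be extracted, the peak-height drop versus DNA concentration can be fitted to a one-site binding isotherm. This is a hedged sketch: the concentrations and drop values below are synthetic points generated from the isotherm itself (assuming Kd = 1.78 µM to mimic PS14-C15), not the measured data.

```python
# Hypothetical sketch: fit the plasmon peak-height drop vs DNA concentration
# to a one-site isotherm dA(c) = dA_max * c / (Kd + c) to get an apparent Kd.
# Data points are synthetic (generated with Kd = 1.78 uM), not measurements.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(c, dA_max, Kd):
    # fractional drop of the 395 nm extinction peak at DNA concentration c (uM)
    return dA_max * c / (Kd + c)

conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 7.5, 10.0])  # uM
drop = isotherm(conc, 0.9, 1.78)                         # noiseless synthetic data

popt, _ = curve_fit(isotherm, conc, drop, p0=[1.0, 1.0])
dA_max_fit, Kd_fit = popt
print(f"apparent Kd = {Kd_fit:.2f} uM")                  # 1.78
```

With real spectra, the drop values would come from the measured 395 nm peak heights, and the fitted Kd would carry an uncertainty from the covariance matrix returned by curve_fit.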
For PS14-C15, most of the spectra were red-shifted (i.e., ripening), except at DNA concentrations below 1 µM. From TEM, the overall size was indeed decreased for the sample treated with 0.5 µM PS14-C15 (Figure S2B). A wide size distribution with both larger and smaller AgNPs was observed with more PS14-C15 (7.5 µM), confirming the ripening (Figure S2C). However, we cannot rule out slight aggregation of AgNPs occurring at the same time, which would also cause red-shifted spectra. Comparing PO-C15 and PS14-C15, both showed etching at low DNA concentrations and then ripening with more DNA added; PS14-C15 has a higher affinity for silver, allowing it to achieve the etching-to-ripening transition at a lower DNA concentration.

Interestingly, for PS14-C15, the red shift initially increased but later decreased when the concentration exceeded 3.75 µM (red trace in Figure 3E). A similar trend was also observed for PS14-T15, albeit with smaller shifts than PS14-C15. This difference may be ascribed to their different DNA bases, suggesting that the cytosine bases of PS14-C15 also participated in the etching process. We reason that PS14-C15 had a complex multi-stage etching process: low concentrations of DNA caused AgNP etching followed by ripening (Figures S2A-C), while further increasing the PS14-C15 concentration could etch the larger AgNPs from the previous ripening step, which yielded the decreased red shift. The etching, and thus the size decrease, was also confirmed by TEM (Figure S2D). As a result, we propose a three-stage mechanism for PS-DNA interacting with AgNPs: etching, ripening, and further etching (Figure 3F). For the PO-DNA, we only observed two stages (etching then ripening), indicating that cytosine bases alone were incapable of etching the larger AgNPs, which are thermodynamically more stable than the original 20 nm ones.
Since this three-stage process was less obvious for PS14-T15 than for PS14-C15, both the cytosine bases and the PS of PS14-C15 contribute to the etching-to-ripening transition (with PS being the major contributor).

Kinetics and Effect of pH

To further study etching, we followed the reaction kinetics. Since the conformation of poly-C DNA is strongly affected by pH (Dong et al., 2014; Huang Z. et al., 2016), we measured the etching kinetics at different pH values. For PO-C15, etching was strongly inhibited at low pH (Figure 4A); in particular, at pH 6 or lower, etching was essentially fully inhibited. In contrast, PS14-C15 had the same rate of etching regardless of pH from 4 to 7.9 (Figure 4B). We fitted the kinetic data of the PS-DNA to a first-order equation and obtained a rate constant of 41.3 h−1, which was much faster than the PO kinetics of 183.4 h−1 at pH 7.9 (the rate of the PO samples was even slower at lower pH).

Since the only difference here was the base, we reason that the inhibited PO-C15 etching must be related to base protonation and the formation of secondary structures such as the i-motif (Figure 4D). Using circular dichroism (CD) spectroscopy, a strong positive peak at around 285 nm and a small negative peak near 260 nm were observed, suggesting an intramolecular i-motif structure of PO-C15 at pH 4.0 (the black spectrum in Figure S3) (Liu and Balasubramanian, 2003). Such a folded conformation could shield the bases and inhibit their interaction with AgNPs or with Ag+. For PS14-C15-mediated etching, pH had no effect. Since PS modifications reduce the melting temperature compared to the PO counterpart (Gonzalez et al., 1991), PS14-C15 was incapable of forming an i-motif at 37 °C even under acidic conditions (the red spectrum in Figure S3). Therefore, the exposed PS sites and bases in the random-coil PS14-C15 could serve as ligands for etching the AgNPs.
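The first-order fit used for the A395 kinetic traces can be sketched as follows. The trace here is synthetic, generated from A(t) = A∞ + (A0 − A∞)exp(−kt) with an assumed k of 41.3 h−1, so the fit simply recovers that assumed value; the measured traces would of course carry noise.

```python
# Hypothetical sketch: fit a 395 nm kinetic trace to a first-order decay
# A(t) = A_inf + (A0 - A_inf) * exp(-k * t) to extract an etching rate constant.
# The trace is synthetic (k = 41.3 per hour assumed), not the measured data.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, A0, A_inf, k):
    return A_inf + (A0 - A_inf) * np.exp(-k * t)

t = np.linspace(0.0, 0.2, 25)               # hours (~0.5 min sampling)
A = first_order(t, 1.0, 0.12, 41.3)         # noiseless synthetic trace

popt, _ = curve_fit(first_order, t, A, p0=[1.0, 0.1, 30.0])
k_fit = popt[2]
print(f"k = {k_fit:.1f} per hour")          # 41.3
```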
The pH-independent etching also appeared for PS14-T15 (Figure 4C), demonstrating the generality of PS-DNA-mediated etching of AgNPs.

The Number of PS Modifications and AgNP Etching

The above experiments used 15-mer DNA with full PS modification. We then varied the DNA length and the number of PS modifications (see Figure 5A for the DNA sequences). First, the DNA length was explored. To minimize the effect of the DNA base, PS-modified poly-T DNAs were tested. The length of the DNA varied from 5-mer to 15-mer, and the total PS concentration was maintained the same (e.g., the molar concentration of PS4-T5 was 3.5 times that of PS14-T15). The peak of the PS14-T15 sample dropped more than that of PS4-T5, suggesting that longer DNA was more effective and thus the importance of polyvalent binding (Figure 5B).

We then varied the number of PS modifications while the DNA length was maintained at 15-mer. The number of PS modifications was reduced from 14 to 7, 4, 2, and 1 (Figure 5C). The peak intensity gradually dropped with more PS modifications. For the poly-T DNAs with 1-7 PS modifications, the drop in peak intensity was linearly proportional to the number of PS (Figure S4), further highlighting that the PS is responsible for the AgNP etching. This also provides a method to quantitatively tune the extent of etching. Further increasing the PS modifications to 14 did not bring much additional change, suggesting that seven PS modifications could be sufficient within the 1.5 h incubation time.

Finally, we explored the effect of the location of the PS modifications. Compared to the uniform distribution of PS over the whole DNA backbone in PS7-T15, the 7 PS modifications in PS7r-T15 were concentrated at the 3′-terminus of the DNA. Interestingly, the evenly distributed PS7-T15 caused a stronger decrease (Figure 5D), implying that PS coordination is more effective when the PS sites are separated.
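The reported linearity between the peak-intensity drop and the number of PS modifications (1-7) can be checked with a simple linear fit. The drop values below are illustrative placeholders, not the measured data; with a truly linear trend, the correlation coefficient stays close to 1.

```python
# Hypothetical sketch: linear fit of peak-intensity drop vs number of PS
# modifications for the 15-mer poly-T series. Drop values are placeholders.
import numpy as np

n_ps = np.array([1, 2, 4, 7])
drop = np.array([0.10, 0.19, 0.41, 0.70])   # hypothetical fractional drops

slope, intercept = np.polyfit(n_ps, drop, 1)
r = np.corrcoef(n_ps, drop)[0, 1]
print(f"slope = {slope:.3f} per PS, r = {r:.3f}")
```

The fitted slope would then quantify how much extra etching each additional PS modification contributes under a fixed incubation time.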
DNA Conformation-Dependent Etching

Effective adsorption of PS-DNA on AgNPs could be important for the etching process. All the above experiments used flexible single-stranded DNA oligonucleotides, while a rigid DNA structure (e.g., a duplex) may hinder the attachment of DNA to AgNPs due to restricted binding sites (Figure 6A). To test this hypothesis, we explored the effect of DNA conformation on etching by forming duplex DNA. However, PS modifications can weaken the stability of duplex DNA, as reflected by the reduced melting temperature (Tm) (Gonzalez et al., 1991); furthermore, an A-T base pair with a PS modification shows a larger Tm decrease than a C-G base pair (Stein et al., 1988). Therefore, we designed a PS-modified random DNA (named PS-R DNA) with a high GC content (Figure 6B). This DNA could etch the AgNPs (the black spectrum in Figure 6C), and the etching efficiency was gradually inhibited with an increasing dose of the complementary DNA (cDNA). The inhibiting efficacy was sharply decreased when misDNA with a single mismatched base was added, while a fully non-complementary DNA (T30) had little inhibition effect (Figure 6D). Therefore, we can attribute the cDNA-dependent etching to the formation of duplex DNA. In other words, single-stranded DNA is much more effective for etching the AgNPs, even though the PS backbone is still fully exposed in duplex DNA. This implies that DNA needs to fold into optimal binding structures, and etching cannot take place effectively at isolated PS sites.

CONCLUSIONS

In summary, we reported that PS modifications on DNA improve etching of AgNPs in several aspects. First, the sequence generality is significantly expanded: the introduced PS allows essentially any DNA sequence to etch AgNPs, beyond just poly-C DNA. Furthermore, the required DNA concentration decreased clearly, and at the same DNA concentration the rate of etching was much faster than that without PS modifications.
The etching process also took place effectively for PS-DNA despite low pH, which inhibits etching induced by normal PO-DNA (e.g., poly-C DNA). At the same time, we could control the etching efficacy by changing the DNA length and the number and location of PS modifications. With the stronger etching efficiency, the reaction process was found to contain three stages: etching by low concentrations of PS-DNA, followed by Ostwald ripening at medium DNA concentrations, and further etching at high DNA concentrations. This work has expanded the scope of the interaction between DNA and nanomaterials, and it might lead to interesting analytical and biomedical applications. For example, etching of various silver nanostructures may produce visible color changes for colorimetric biosensors. Such sensors might detect multiple analytes by using aptamers and by designing strategies to target the PS sites. At the same time, it also calls for attention to the stability of nanomaterials when designing hybrid materials containing silver nanoparticles (and potentially other materials) with DNA.

AUTHOR CONTRIBUTIONS

SH, JW, and JL designed the experiments and wrote the paper. SH performed the experiments. PH contributed to the DNA design. All authors read and approved the final version of the manuscript.

FUNDING

Funding for this work is from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the National Natural Science Foundation of China (21575166, 21876208).

ACKNOWLEDGMENTS

SH was supported by a Chinese Scholarship Council (CSC) Scholarship (201706370185) to visit the University of Waterloo.
Factors contributing to the public proneness towards quacks in Sindh

The present study explores the factors and reasons behind public proneness towards quacks in the rural areas of Sindh, Pakistan, and shows how these quacks dupe vulnerable and quackery-prone people for financial gain, which may put human lives in life-threatening health conditions. The study also offers a better understanding of the public's needs, especially in the rural areas of Sindh, which may give hope for deliverance from quacks.

Commentary

Quacks are usually trained in the rudiments of clinical medicine under the tutelage of a licensed doctor, where they hone their skills in primary healthcare services such as compounding and dispensing pharmacy practices, and/or they get some informal training as a substitute phlebotomist. Among their ilk are malpractitioners: (a) individuals who have worked as assistants to qualified physicians, (b) graduated lab technicians who have switched to healthcare, (c) graduated lab technicians working as substitutes for pathologists, sonologists, radiologists, and hematologists, (d) hereditary midwives without any formal training or qualifications, (e) Diploma/Bachelor of Homeopathic Medicine and Surgery (DHMS/BHMS) holders working beyond their scope of practice (i.e., practicing allopathic medicine), and (f) non-qualified persons working under the name of a licensed doctor, a so-called rent-seeking activity. About 6,000 quacks are practicing medicine in Sindh [1]. In 1967, William H. Gordon categorized quack-prone people into four classes [2]. According to Viola W. Bernard, the biggest reason behind public vulnerability towards quacks is their inner fears, and quacks give the impression of offering some magical defenses against them [3]. Herein, we would like to unveil the peculiar public susceptibility and inclination towards quacks, especially in the rural areas and slums of Sindh, Pakistan.
These observations and hypotheses have evolved over time from regular anti-quackery campaigns across Sindh by the Directorate of Anti-Quackery, Sindh Healthcare Commission [4]. The following factors may give valuable insights regarding public proneness towards quacks in Sindh, which have not been reported so far.

Exaggerated claims of quacks to cure any disease: whenever a new discovery is made in the field of medical sciences, quacks venture into it by taking advantage of the inadequate knowledge and lack of interest among the public. They pretend to have an effective treatment for any disease and vast knowledge of the subject, while protecting themselves by emphasizing that there is no guarantee that everyone will be cured. Quacks create a persona that entices vulnerable people; in Sindh, the local youth in particular are entrapped at the hands of quacks with nostrums (herbal supplements) and placebos that supposedly enhance sexual performance or stamina. According to Unani specialists in Sindh, more than 90 percent of their young clients consult for sexual problems in men. Adult obesity has been rising to an alarming extent [5], and quacks were found to be practicing obesity medicine too. Usually, less educated people are more prone to quacks because quacks publicize their miraculous healing in a misleading manner in the community. The "money back guarantee" is the most popular persuader in quacks' advertisements selling nostrums such as hair tonics to cure baldness. Generally, people take this chance to try their luck to see whether they can be cured without spending a lot of money on hair transplantation or surgery to promote hair growth (Table 1).

Sensational claims of quacks to surprise their clients: to retain their clients' faith, quacks tend to surprise them with fake inspirational success stories. Oftentimes they are observed to claim that they can diagnose many diseases just by feeling the pulse.
In order to do so, they hold the patient's wrist briefly and announce their diagnosis. They are usually aware of recent scientific advice about some diseases and sell their clients false hope at a high price. They tell their clients that thousands of people have been cured at the hands of their forefathers and that they possess inherited magical cures, discoveries and unveiled secrets that are unknown to others. Quacks know that they cannot solve many health problems, but it is in their nature to clutch at straws. It is often observed that senior citizens, because of their aching muscles, are more vulnerable to quacks and seek magical cures to soothe them.

Lack and unavailability of a licensed doctor when healthcare services are needed: governmental facilities in Sindh, such as Basic Health Units (BHU), the People's Primary Healthcare Initiative (PPHI), Rural Health Centers (RHC), and District Headquarters (DHQ) and Taluka (THQ) hospitals, provide specialist care in the morning hours only and are situated about 30-35 km from many populated rural areas. Hence, the unavailability of a qualified doctor in late-evening or night emergencies is another factor that influences quackery-prone people. It is observed that desperation and vulnerability set in when someone is suffering from a highly debilitating injury or illness and a licensed doctor is not accessible at that time. Under such circumstances, anything that sounds hopeful can be believed. This desperation leads locals to begin trusting the quacks; therefore, residents of rural areas depend on quacks for accessible and affordable healthcare. Nowadays, quacks are available on motorbikes ("mobile quacks") to head out to patients in the rural areas and are immediately available in emergency situations. These mobile quacks cycle around villages so that they are easily accessible to the locals.
Building a strong relationship with patients without money: the entrepreneurial mindset and behavior of quacks play a significant role in building the quack-patient relationship. Feudalism has crept deeply into the rural areas of Pakistan, and feudal families are the most influential in their respective areas; quacks usually give these families cost-free healthcare services, and consequently the locals feel compelled to go to quacks. Quacks participate in many social gatherings and know almost every house in many villages. Sometimes they treat patients without money, or on account, in their impoverished community, which may serve to encourage their clients not to switch to a competitor. Quacks are informally trained under the tutelage of a qualified doctor, and the doctors sometimes send them to a client's home for a dressing change or an insulin injection. Sometimes these informally trained staff also assist the doctors to lighten the patient burden. This is how patients' trust in and relationships with dispensers, compounders and lab technicians develop. The quacks then take advantage of this public trust to continue their malpractice for the rest of their lives as successors of the well-regarded and renowned doctor. Another factor that induces quackery-prone people is their trust in quacks built up by being treated by them over the past several years.

Symptomatic treatment to give patients immediate relief: it is the general psyche of the locals of rural areas and slums that they only believe in a quick fix or immediate effect of treatment. If they are not administered injections and/or given intravenous fluids, they are not satisfied with the doctor at all. Quacks, on the other hand, almost always administer injections to their patients, whether for headache or fever, and give intravenous fluids to line their pockets without addressing the basic cause of the disease.
Quacks' palliative treatments typically include mixed medicines such as a broad-spectrum antibiotic, injections and an anti-inflammatory to cover all common diseases found in the community of their respective areas.

Recommendation: it is the collective duty and responsibility of the accountable institutions, medical societies, most importantly the community, health professionals, medical licensing boards, stakeholders, law-enforcement agencies, and regulatory bodies in the country to take urgent and practical steps to address these numerous failures and begin to systematically proffer solutions alongside efficient implementation machinery.

Conclusion: the growing predilection for quacks among rural residents has been a serious threat to public health. The Government should improve healthcare infrastructure in the rural areas of Sindh so that doctors are encouraged to go and practice there. In this way, the primary and secondary healthcare levels will be strengthened with optimal healthcare facilities and adequate staff. In parallel, public education, awareness, and sensitization are recommended to effectively combat the menace of quackery for the sake of public health and to save the status of the highly regarded medical profession. It is illegal to sell intoxicating drugs, antibiotics, painkillers and injections to quacks in most parts of Pakistan, but the law is rarely enforced.

1 Public awareness
Awareness seminars and programs should be organized at the district level across Sindh from time to time about the severity and outcomes of quackery practices. The local electronic and print media, in the regional language, can play an important role in creating awareness among residents about the harmful aspects of quackery. Special programs on the prevalence of potential infectious diseases and health risks posed by quacks should be broadcast on local cable TV networks from time to time in the regional language.
2 Sensitizing a wide range of stakeholders
We request that stakeholders be more communicative with us in the field and with the DAQ-SHCC's inspection and enforcement teams, which will help ease our collaboration with law-enforcement agencies and local authorities in order to gain deliverance from the menace of quacks.

3 Governing bodies at provincial and national levels
Healthcare regulatory authorities at the provincial and national levels should support the cause and monitor the quality of care and healthcare professionals' databases to guard licensure against influential and political bias.

4 Social mobilization tools
Mobilization of the community by identifying key persons, such as retired health professionals, local NGOs, social activists, and district health officers, who can act as contact persons between locals and the administration.

Note: the opinions expressed in these recommendations are the responsibility of the authors and do not necessarily reflect the official policy of the Sindh Healthcare Commission (SHCC).
2020-10-28T19:17:35.504Z
2020-10-21T00:00:00.000
{ "year": 2020, "sha1": "77697232910e158846d44963807711a43c6ee707", "oa_license": "CCBY", "oa_url": "https://doi.org/10.11604/pamj.2020.37.174.23411", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01d52a203a752f1bb220254c5835ba178b17495c", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
235325783
pes2o/s2orc
v3-fos-license
Development, validation, and application of the ribosome separation and reconstitution system for protein translation in vitro

Stress-induced molecular damage to ribosomes can impact protein synthesis in cells, but cell-based assays do not provide a clear way to distinguish the effects of ribosome damage from stress responses and damage to other parts of the translation machinery. Here we describe a detailed protocol for the separation of yeast ribosomes from other translational machinery constituents, followed by reconstitution of the translation mixture in vitro. This technique, which we refer to as ribosome separation and reconstitution (RSR), allows chemical modifications of yeast ribosomes without compromising other key translational components. In addition to the characterization of stress-induced ribosome damage, RSR can be applied to a broad range of experimental problems in studies of yeast translation.

INTRODUCTION

Cell-free translation systems are powerful experimental assets with a wide variety of applications. They allow protein production in a tightly controlled environment using either endogenous transcripts or mRNA reporters. The generated proteins can then be used in subsequent applications like pull-down assays or analyzed as readouts of translation reactions addressing the roles of cis- and trans-acting factors in translation (Carlson et al. 2012; Chong 2014). Additionally, the cell-free translation reaction allows subsequent supplementation with carefully designed additional factors. For example, this approach was instrumental in identifying the order of molecular events in complex cotranslational mechanisms (Shao and Hegde 2014; Kuroha et al. 2018). Despite all the advantages of cell-free translation systems, they remain insufficient in dissecting the effects of stress on translational executors. In fact, under stressful conditions, various molecules of the translational machinery undergo modifications (Tanaka et al. 2007; Chan et al. 2012; Gu et al.
2014; Simms et al. 2014; Endres et al. 2015; Wu et al. 2018; Yan et al. 2019). Thus, it is impossible to distinguish between a stressor's impact on a particular molecule of interest and on other translationally essential elements. Built of RNAs and proteins, ribosomes are, unsurprisingly, highly susceptible to chemical modifications. Indeed, ribosomes undergo significant modifications when exposed to chemical compounds, metals, or reactive oxygen species (ROS) (for review, see Shcherbik and Pestov 2019). In addition, in response to a variety of stress conditions, ribosomes and ribosome-bound nascent chains are subject to post-translational protein modifications, such as ubiquitination (for review, see Dougherty et al. 2020). How the performance of modified ribosomes is altered during protein synthesis remains largely unknown. This shortcoming is primarily due to the unavailability of a suitable experimental platform that would allow modification of ribosomes exclusively while keeping other translationally essential molecules intact. To overcome this technical limitation, we sought to develop a method for isolating translationally active ribosomes that could subsequently be returned to translationally active, ribosome-free yeast lysate charged with an mRNA reporter (schematics in Fig. 1). This approach allows incorporation of a ribosomal modification step into the procedure, in which ribosomes are exposed to a modifying agent of choice either in vitro or in vivo. The success of this approach depends on the purification of intact and translationally functional ribosomes from the cell-free extract (CFE) or cell culture. The methodology for isolating ribosomes and ribosomal complexes has been described for different organisms and is fine-tuned to each experiment's goals, which do not always require translational activity of the isolate. Thus, the particular purpose of the isolation dictates the stringency of the ribosome purification protocol.
In general, there are two primary strategies to isolate ribosomes, ultracentrifugation and immunoprecipitation (IP), both of which have substantial limitations. Centrifugation-based technology, described by different laboratories, requires lengthy and often numerous spins and, thus, subjects ribosomes to prolonged exposure to ribonucleases and proteases present in crude cellular lysates. Another limitation of centrifugation-based ribosome isolation is the poor solubilization of the resulting ribosomal pellet (Munoz et al. 2017). In contrast, the IP approach is fast and avoids pelleting, resulting in soluble ribosomes (Oeffinger et al. 2007). However, it demands incorporating a tag on a surface-exposed r-protein, which may interfere with ribosome activities. In addition, the IP approach requires high salt concentrations in the precipitation and elution buffers to avoid pulling down nonspecific molecules, potentially leading to undesirable stripping of ribosomal cofactors that may perform auxiliary roles during translation (Shi et al. 2017; Simsek et al. 2017; Mazaré et al. 2020). Here, we report an experimental protocol for ribosome separation and reconstitution (RSR) developed for purifying translationally competent ribosomes from Saccharomyces cerevisiae. The purified ribosomes retain their translational competency when supplied back to translationally active, ribosome-free CFE and can synthesize proteins from various mRNA reporters or endogenous transcripts present in the CFE. To the best of our knowledge, a yeast-based RSR-like protocol has never been reported before. Considering that yeast cells can be cultured in large quantities and are very amenable to genetic manipulations, our protocol may provide significant methodological advances to studies of eukaryotic translation, ribosome biology and protein quality control.
Because the RSR approach also allows treating ribosomes with a modifying agent of interest under defined conditions in a test tube, it can facilitate studies of diverse types of chemical or physical factors capable of impairing ribosome functionality. In this communication, using the ROS inducer menadione and the chemotherapeutic drug cisplatin as two rRNA modifiers with different characteristics, we demonstrate the capabilities of the RSR technique for the analysis of effects of environmental and intracellular ribosome stressors.

FIGURE 1. Experimental workflow for ribosome separation and reconstitution (RSR). Ribosomes for a cell-free translation reaction can be isolated from a previously prepared CFE (A), or by the direct lysis of yeast cells (B). (a) CFE is ultracentrifuged at 180,000g (180K) for 2 h at 4°C, producing ribosome-containing pellet P180 and ribosome-free supernatant S180 (b). Ribosomal pellet P180 is solubilized (c) and added back to S180 (d), along with the energy regeneration system, amino acids, and (optionally) a reporter mRNA (e). Translation reactions are carried out at 21°C for 60-90 min (f). Alternatively, yeast cells are lysed with glass beads, and the clarified cellular lysate is next layered onto a 20% glycerol cushion (g); ribosomes are precipitated by centrifugation through the cushion at 180,000g (180K) for 2 h at 4°C (h). The resulting supernatant is discarded, ribosomal pellet P180 is solubilized (i) and added to the CFE-derived S180 to assemble the translation reaction (j).

Establishment and validation of the RSR system

We have recently developed, validated, and applied a cryogenic lysis-based method to prepare yeast cell-free translation extracts (CFE) capable of protein synthesis from mRNA reporters and endogenous cellular transcripts (Trainor et al. 2021b). To develop a system that allows ribosome separation followed by reconstitution of translation in vitro, we used CFE as a starting platform.
We first aimed to purify ribosomes from CFE by one-step ultracentrifugation and assess their quality and activity during protein synthesis in vitro by returning them into translationally active, ribosome-free CFE charged with an mRNA reporter (schematics in Fig. 1A).

Preparation of ribosomes from CFE by one-step ultracentrifugation

To pellet ribosomes, we centrifuged one aliquot of CFE (∼560 µg of RNA, 1590 µg of proteins) at 180,000g for 2 h at 4°C (Fig. 1-a) in the TLA55 Beckman rotor and collected the supernatant (S180) and pellet (P180) fractions (Fig. 1-b). Pelleted ribosomes were solubilized in translation reaction buffer A [20 mM Hepes-KOH, pH 7.4; 100 mM KOAc; 2 mM Mg(OAc)2; and 2 mM DTT] (Fig. 1-c and see below; Wu and Sachs 2014) and analyzed by northern blotting, along with S180 and complete CFE used as controls. Hybridization with probes specific to 25S rRNA, 18S rRNA, tRNA-Val, and tRNA-Glu verified that the two fractions generated by this ultracentrifugation step (180,000g for 2 h; 180K-centrifugation hereafter) represented the ribosome-free supernatant (S, Fig. 2A, lane 2) and the ribosome-enriched pellet (P, Fig. 2A, lane 3). As expected, tRNAs, visible in the complete CFE, cofractionated with the supernatant fraction (Fig. 2A, lanes 1 and 2). rRNAs derived from the pellet fraction revealed no signs of degradation (Fig. 2A), suggesting that 2 h of centrifugation does not affect rRNA integrity.

Characterization of ribosomes by sucrose gradient centrifugation analysis

Next, using sucrose gradient centrifugation analysis, we examined which ribosomal species were precipitated during 180K-centrifugation (Fig. 1-a). As a control, we used a CFE sample that was not subjected to ribosome pelleting. Gradients were fractionated into 12 fractions, and RNA was extracted from each fraction and analyzed by northern hybridization with probes specific to 25S and 18S rRNAs.
The gradient analysis showed that ribosomes predominantly accumulated in the 80S fraction, with only residual amounts present on polysomes before and after 180K-centrifugation (Fig. 2B). These data demonstrate that 180K-centrifugation is sufficient to pellet nonpolysomal 80S ribosomes. Interestingly, we also tested sucrose gradient-based ribosome isolation as an alternative approach to the 180K-centrifugation of the CFE (Supplemental Fig. S1A). In our hands, ribosomes derived by this technique exhibited an increased degree of rRNA degradation and were significantly less active in translation (Supplemental Fig. S1B-D). Thus, we used 180K-centrifugation in all later experiments as a fast way to recover ribosomes (2.5 h total time vs. 7.5-8 h required for gradient-based isolation, Fig. S1D), which preserved rRNA integrity and translational activity well.

Optimizing solubilization of pelleted ribosomes

In our early trials of the RSR protocol, we found that obtaining a homogeneous suspension of pelleted ribosomes was critical for the reproducibility of the following translation assays. This step was also previously identified as a main limitation of the centrifugation-based ribosome purification approach (Munoz et al. 2017). In fact, we routinely observed that pelleted ribosomes were sticky and difficult to resuspend by pipetting. Due to the lack of detailed published information on the resuspension procedure for pelleted ribosomes, we tested the effects of temperature and automated agitation in facilitating ribosomal pellet solubilization. Three CFE-derived ribosomal pellets were incubated in 100 µL of buffer A for 30 min at 8°C, 21°C, and 37°C with 1200 rpm shaking in Eppendorf thermomixers. Subsequent centrifugation of the ribosomal suspensions at 21,000g for 15 min at 4°C did not produce any visible pellets or insoluble debris.
The RNA concentrations were similar in all ribosomal suspensions; pellets resuspended at 8°C yielded 4.50 µg/µL of RNA, pellets resuspended at 21°C yielded 4.52 µg/µL of RNA, and pellets resuspended at 37°C yielded 4.43 µg/µL of RNA. Quantifying the 18S and 25S rRNAs detected by northern hybridization (Fig. 3A) likewise demonstrated similar levels of these rRNAs regardless of the temperature used during solubilization (Fig. 3B). Northern blot analysis also revealed that ribosomes incubated at 8°C, 21°C, and 37°C for 30 min contained intact 25S and 18S rRNAs with no signs of degradation (Fig. 3A), arguing that the solubilization step (Fig. 1-c) does not affect rRNA stability. To examine the translational activity of the 180K-pelleted/solubilized ribosomes, we assembled in vitro translation reactions that contained the ribosome-free extract (S180, Fig. 1-b), 3 µg of ribosomes resuspended at different temperatures, amino acids, and the energy-regeneration system (Fig. 1-e-f). Reactions were programmed with 200 ng of TAP-RLuc mRNA reporter (Renilla luciferase gene fused with a TAP-tag). We found that ribosomes solubilized at all temperatures tested synthesized TAP-RLuc, as determined by quantitative Renilla luciferase assays (Fig. 3C) and confirmed by western blotting (Fig. 3D). However, incubation at 37°C reduced the translational activity of ribosomes approximately twofold (Fig. 3C). The reduced translation activity of 37°C-resuspended ribosomes could be explained by irreversible modifications that might occur at 37°C or by dissociation of key translation factors (Cox et al. 1973; Danielsson et al. 2015). This result also correlated with the poor performance of ribosomes during CFE-based translation at 37°C, further confirming that this temperature is not optimal for ribosomes extracted from BY4741 cells under low ionic stringency conditions, such as 100 mM KOAc and 3 mM Mg(OAc)2 (Pestova et al. 1998; Algire et al. 2002; Khatter et al. 2014; Wu and Sachs 2014).
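A comparison like the one above (triplicate means with SEM error bars, assessed by one-way ANOVA as in Fig. 3B) reduces to simple statistics. Here is a minimal Python sketch using hypothetical triplicate yields, since only the mean yields per temperature are quoted above:

```python
from statistics import mean, stdev

def sem(values):
    """Standard error of the mean: sample stdev / sqrt(n)."""
    return stdev(values) / len(values) ** 0.5

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # within-group sum of squares
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# hypothetical triplicate RNA yields (µg/µL) at 8°C, 21°C, 37°C;
# the protocol reports only the means (4.50, 4.52, 4.43)
yields = {
    "8C":  [4.48, 4.50, 4.52],
    "21C": [4.49, 4.52, 4.55],
    "37C": [4.40, 4.43, 4.46],
}
for temp, vals in yields.items():
    print(f"{temp}: mean = {mean(vals):.2f}, SEM = {sem(vals):.3f}")
print(f"one-way ANOVA F = {one_way_anova_F(list(yields.values())):.2f}")
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom (or computed directly with `scipy.stats.f_oneway`) to obtain the p-value used to call the differences nonsignificant.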
Based on published literature, this buffer composition appears to be optimal to promote correct folding of rRNAs within the ribosomal structure and to provide intersubunit stability (Khatter et al. 2014). Interestingly, another study demonstrated that translationally active lysates prepared from the yeast background strain GRF-18 resulted in a higher protein yield and faster kinetics of protein synthesis in vitro at 37°C than at 25°C (Altmann et al. 1989). Therefore, it seems reasonable to propose that different yeast genetic backgrounds might have their own specific temperature requirements.

FIGURE 2. Analysis of ribosomes before and after 180K-centrifugation. (A) One-step ultracentrifugation generates stable ribosomes and ribosome-free lysate. Aliquots of complete CFE (CFE), separated S180 and P180 (CFE: S, P), and P180 isolated from cells with glass-bead lysis (Cells: P) were resolved on a denaturing agarose gel and analyzed by northern hybridization with the indicated probes. Prior to transfer onto the nylon membrane, the gel was stained with SYBR Gold. (B,C) Ribosomes precipitated by one-step ultracentrifugation exist as 80S monosomes. (B) Complete CFE and solubilized P180 were centrifuged through 15%-45% (w/v) sucrose gradients and fractionated with continuous absorbance measurement at 254 nm to visualize ribosomal peaks. RNA was extracted from individual fractions and analyzed by northern hybridization as described in A. (C) Sucrose gradient centrifugation analysis performed with the total cellular lysate obtained with glass-bead lysis and its resuspended P180.

Unless using BY4741 or its derivatives, researchers will have to determine the optimal temperature for the in vitro translation reaction and examine the temperature requirements for the ribosome solubilization step of the RSR procedure, as illustrated in Figure 3C.
Thus, precipitation of ribosomes from a BY4741-derived CFE by one-step centrifugation at 180,000g, followed by 30-min shaking in a thermomixer at 8°C-21°C, recovers well-preserved ribosomes that exhibit high translational activity when combined with a ribosome-free supernatant, allowing for an effective implementation of the RSR approach.

Reaction time

Having established P180-ribosome solubilization and buffer composition requirements (Fig. 3A-D), we next examined the kinetics of protein synthesis using the RSR approach in comparison to complete CFE. Identifying time points at which the translated product increases at a steady rate, before the synthesis plateaus, is critical for reliably comparing the activity of ribosomes derived from different experimental conditions. We assembled 30 µL reactions using either complete CFE or the pelleted, 21°C-solubilized ribosomes (24 µg of total RNA from P180) added to ribosome-free extract (S180). Both reactions were charged with 400 ng of capped TAP-RLuc mRNA reporter, and aliquots of the reaction were analyzed every 30 min for 3 h using a Renilla luciferase assay (Fig. 4A). To account for the different amounts of ribosomes present in the CFE and RSR reactions, we normalized the Renilla luciferase units by the amounts of 18S rRNA present in each reaction. For this normalization, we extracted RNA from each reaction after the luciferase assays were completed and quantified 18S rRNA by northern hybridizations and phosphorimaging (Fig. 4A, bottom panels). This analysis revealed that protein synthesis progressed with nonlinear kinetics, with maximum rates achieved during the first 30-90 min for both RSR and CFE-assembled translation reactions (Fig. 4A; Supplemental Fig. S2). Interestingly, we detected a higher yield of TAP-RLuc in the RSR reactions than in the CFE reactions when normalized for the 18S rRNA amount (Fig. 4A). The RLuc signal normalized by 25S rRNA was found to follow a similar trend as RLuc/18S (Supplemental Fig.
S2B). Although the exact mechanism for this effect remains unclear, the observation that centrifugation and resuspension of ribosomes may alter their activity indicates that it is essential to apply identical separation steps to all ribosomes being compared in order to correctly interpret data from RSR experiments. In addition, these data indicate that the practical timeframe for a translation reaction is limited to 30-90 min in the present protocol [i.e., 12 µg of rRNA (P180) in a 15 µL reaction charged with 400 ng of the Renilla luciferase mRNA]. The optimal reaction time should be determined for any new CFE batch and ribosome precipitation/resuspension condition by running control reactions as shown in Figure 4A.

FIGURE 3 (legend; opening truncated): ... Figure 2A were resuspended in buffer A by shaking for 30 min at the indicated temperatures. Resuspended RNA was analyzed by northern hybridization with 25S rRNA- and 18S rRNA-specific probes. (B) The hybridization signals corresponding to the full-length 25S rRNAs and 18S rRNAs were converted to phosphorimaging units and plotted as bar graphs. The error bars represent standard error of the mean (SEM) of three experiments. The differences between the samples were nonsignificant (NS); statistical analysis was performed by one-way ANOVA. (C) 3 µg of resuspended ribosomes were placed into translation reactions containing ribosome-free translational lysate S180 charged with capped TAP-RLuc mRNA (200 ng per reaction). Reaction products were analyzed by the Renilla luciferase assay and the data are presented as bar graphs, wherein error bars represent standard error of the mean (SEM) of three experiments. (D) Proteins and RNA were extracted from the luciferase reactions and further characterized by western blots using anti-TAP antibodies and by northern hybridizations using a TAP-specific probe.

Optimization of the ribosome content

We next examined the dependency of the translation efficiency on the concentration of ribosomes in a reaction. We assembled 15 µL translation reactions with different amounts of ribosomes added to the ribosome-free supernatant S180. These reactions were charged with 200 ng of capped TAP-RLuc mRNA reporter and incubated at 21°C for 90 min, followed by a Renilla luciferase assay and western blotting to assess the production of TAP-RLuc. As expected, and consistent with the previous experiment (Fig. 2A), neither a luminescent signal nor TAP-RLuc protein was detected in the reaction containing S180 only (Fig. 4B, 0 µg of rRNA), suggesting that S180 lacks endogenous ribosomes after the 180K-centrifugation procedure. The reporter synthesis efficiency increased with increasing ribosomal content, and the highest amounts of TAP-RLuc were detected in the reaction containing the highest concentration of ribosomes that could be added to the 15-µL reaction volume (12 µg of rRNA; Fig. 4B). Northern hybridization of RNA extracted from these RSR reactions confirmed increases of 18S and 25S rRNAs (Fig. 4C, top) and verified the stability of TAP-RLuc mRNA post reaction (Fig. 4C, bottom). Thus, to achieve detectable levels of the TAP-RLuc reporter, the amount of ribosomes purified by one-step centrifugation can range between 2 and 12 µg per 15 µL reaction.

FIGURE 4. Optimization of the translation reaction. (A) Time course of translation reactions assembled with complete CFE (blue) or with ribosomes purified from CFE via 180K-centrifugation (RSR, magenta). Both reactions were programmed with 400 ng of capped TAP-Renilla mRNA reporter; the final reaction volume was 30 µL. Each CFE reaction was estimated to contain 60 µg of the total RNA. To the RSR reactions, we added 24 µg of purified ribosomes. Reaction aliquots (4.5 µL) were collected at the indicated time points; levels of generated reporter proteins were measured by the Renilla luciferase assay, normalized by the 18S rRNA hybridization signal in each sample, and plotted as linear and log10 graphs. Each reaction was set in triplicate. The bottom panel shows a representative northern blot of the RNA extracted from the reactions and hybridized with an 18S rRNA-specific probe. (B,C) Efficiency of translation in RSR assay reactions depends on the concentration of purified ribosomes. (B) Indicated amounts of solubilized ribosomes derived from P180 generated by one-step centrifugation of CFE were added to S180. Each reaction was charged with 200 ng of capped TAP-RLuc mRNA reporter. (C) Total RNA was extracted from the RSR/luciferase reactions from B and analyzed by northern hybridizations using probes specific to 25S rRNA, 18S rRNA, and TAP-RLuc mRNA. (D) mRNA dose dependence. Translation reactions were assembled with complete CFE (blue) or with ribosomes purified from CFE via 180K-centrifugation (RSR, magenta). Reactions were programmed with the indicated amounts of capped TAP-Renilla mRNA reporter; the final reaction volume was 15 µL. For the CFE reaction, we used unfractionated CFE (30 µg total RNA), while the RSR reaction contained 4 µg of purified ribosomes. RNA was extracted from the RSR/luciferase reactions and analyzed by northern hybridization with an 18S rRNA-specific probe (bottom). Radioactive signal corresponding to the full-length 18S rRNA was converted to phosphorimaging units and used to normalize the luminescent signal derived from the same sample. All reactions in B and C were assembled in triplicate and carried out for 90 min at 21°C. In all graphs, error bars represent standard error of the mean (SEM) of three experiments.

Optimization of the mRNA concentration

To examine how the mRNA amount affects protein synthesis in a translation reaction, we added different amounts of capped TAP-RLuc mRNA (100, 200, and 400 ng) to 15-µL translation reactions assembled with S180 and 4 µg of P180-derived ribosomes.
For comparison, we also tested different concentrations of the TAP-RLuc mRNA in reactions with unfractionated CFE. The amounts of the synthesized TAP-RLuc protein were determined by a Renilla luciferase assay, after which RNA from the luciferase reactions was extracted and analyzed by northern hybridizations as described above. This mRNA-titration experiment (Fig. 4D) demonstrated that under the conditions tested (4 µg of ribosomes, 90 min reaction duration, 15 µL reaction volume, 21°C), exceeding 200 ng of mRNA resulted in saturation of both RSR- and CFE-based reactions. Consistent with the time-course experiment (Fig. 4A), translation reactions performed in the RSR format were more efficient than those with unfractionated CFE (Fig. 4D). Taken together, these data indicate that to accurately compare ribosome activity when using the RSR format of cell-free translation, the concentrations of multiple components in the translation reactions must be carefully controlled. The values obtained above, including reaction time (Fig. 4A), ribosome amount (Fig. 4B), and mRNA concentration (Fig. 4D), define the working ranges for these parameters.

Translational activity of P180-derived ribosomes after centrifugation of CFE through 20% glycerol cushion

Purifying ribosomes via centrifugation through a cushion is routinely used in various biochemical applications (Jenner et al. 2012; Mehta et al. 2012; Khatter et al. 2014). In fact, like IP, this approach allows separation of ribosomes from low molecular weight molecules, including endogenous and exogenous cellular components, which might interfere with the translation reaction. For example, treating CFE with ribosome-modifying compounds (discussed below) would require removing the drug prior to adding ribosomes to the translation reaction. Thus, the ability to use cushion-centrifuged ribosomes would be a powerful feature of the RSR translation system.
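The RLuc/18S rRNA normalization used in these optimization experiments (Fig. 4A,D) amounts to dividing each reaction's luminometer reading by the background-subtracted phosphorimaging signal of full-length 18S rRNA from the same sample, then averaging over triplicates. The following is a hypothetical sketch of that arithmetic with invented numbers; it is not part of the published analysis, which used ImageQuant and GraphPad PRISM.

```python
# Sketch of the RLuc/18S normalization over triplicate reactions.
# All numeric values below are invented for illustration.
from statistics import mean, stdev
from math import sqrt

def normalized_signal(rlu, rrna_18s, background):
    """Return RLuc/18S ratios for one condition (paired triplicate lists)."""
    return [l / (s - background) for l, s in zip(rlu, rrna_18s)]

def mean_sem(values):
    """Mean and standard error of the mean for a list of replicates."""
    return mean(values), stdev(values) / sqrt(len(values))

# Example: three replicate reactions for one mRNA dose (invented values)
rlu = [5.2e5, 4.8e5, 5.5e5]      # luminometer readings
s18 = [1150.0, 1080.0, 1210.0]   # phosphorimaging units, full-length 18S
ratios = normalized_signal(rlu, s18, background=80.0)
m, sem = mean_sem(ratios)
print(f"RLuc/18S = {m:.1f} ± {sem:.1f} (SEM, n={len(ratios)})")
```

Plotting these per-condition means with SEM error bars reproduces the bar-graph format used throughout the figures.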
To assess the activity of ribosomes prepared by centrifugation through a cushion, we either directly pelleted ribosomes from CFE as described above or centrifuged CFE through a 20% glycerol cushion at 180,000g for 2 h. The resulting P180 pellets were solubilized, and 9 µg of P180 were added to the ribosome-free translational lysate (S180) along with 400 ng of capped TAP-RLuc mRNA reporter. The Renilla luciferase assay revealed that ribosomes isolated by centrifugation through the cushion were as active as those pelleted directly from CFE (Fig. 5A, bars 3 and 4).

FIGURE 5. (A) For RSR, ribosomes were pelleted directly from CFE (lane 3) or by centrifugation of CFE through a 20% glycerol cushion (lane 4). In lane 2, no ribosomes were added to the reaction. (B) Activity of ribosomes prepared from CFE and from cells lysed by glass-bead beating. RSR translation reactions were assembled with 9 µg of ribosomes pelleted from the complete CFE (P180 from CFE) or with 9 µg of ribosomes purified from cell lysate by 180K-centrifugation through a 20% glycerol cushion (P180, cushion, cells). In the control reaction, no ribosomes were added, and background levels of luminescence are detected. In A and B, each reaction contained 400 ng of capped TAP-RLuc mRNA reporter. Reactions were incubated at 21°C for 90 min and the reaction products were analyzed by a Renilla luciferase assay. Error bars represent SEM of three experiments.

Activity of ribosomes purified from yeast cells by glass-bead lysis

To establish, validate, and optimize the RSR format of cell-free translation reactions, we used the translationally active CFE as the ribosome source in the experiments above (Figs. 3-5A). The next question we wanted to address was whether ribosomes extracted directly from cells in culture would be active in the RSR format (Fig. 1B). Total cellular lysates prepared by a conventional glass bead-beating lysis technique were centrifuged through a 20% glycerol cushion at 180,000g for 2 h at 4°C to separate heavy ribosomal particles from low molecular weight cellular contaminants (Fig. 1g-h). Similar to CFE-derived ribosomes, ribosomes isolated by this method showed little apparent degradation (Fig. 2A, lane 4) and were detected in the 80S fraction of the sucrose gradient (Fig. 2C). When these ribosomes, freshly isolated from cells (Fig. 1g-i), were added to translation reactions containing S180 and charged with TAP-RLuc mRNA (Fig. 1-j), they demonstrated translational competency, as a measurable luminescent signal was achieved (Fig. 5B, bar 3). However, we observed a significant decline (∼30-fold) in the reporter synthesis efficiency with cell-derived ribosomes compared to those isolated from CFE (Fig. 5B). Nevertheless, the Renilla luciferase signal was still sufficiently high, suggesting that using cell-derived ribosomes is a reasonable alternative to CFE-purified ribosomes.

Practical applications of the RSR system

The RSR system established here (Figs. 1-5) can be used for various applications. For example, the RSR approach allows the analysis of changes in translation caused by ribosome-directed effects of stressors like oxidants or chemotherapeutic drugs capable of modifying and damaging various ribosome components. To illustrate RSR's utility in assessing the activity of modified ribosomes, we tested two RNA modifiers: (i) the cell-permeable drug menadione (vitamin K3), which promotes oxidation of RNA and proteins in cells (Shedlovskiy et al. 2017b; Zinskie et al. 2018; Smethurst et al. 2020); and (ii) the cell-impermeable drug cisplatin, also known to modify nucleic acids, including rRNAs (Dedduwa-Mudalige and Chow 2015; Melnikov et al. 2016). Accordingly, we isolated ribosomes from drug-treated living cells (for menadione) or treated ribosomes in vitro (for cisplatin), followed by assaying ribosome activity in cell-free translation reactions.
Ribosome isolation from cells treated with menadione

Menadione is a pro-oxidant used as an extracellular stressor of yeast cells because of its stability in the medium during yeast culture treatment and high cell wall/membrane permeability (Jamieson 1992). Previous studies have found that treating yeast cultures with high doses of menadione (up to 600 µM) triggers extensive rRNA fragmentation followed by induction of the apoptotic program (Mroczek and Kufel 2008; Shedlovskiy et al. 2017b). In contrast, treating yeast cultures with low doses of menadione (25-50 µM) does not affect cell viability and is accompanied by 25S rRNA cleavage specific for the expansion segment ES7L (Shedlovskiy et al. 2017b). Menadione promotes oxidation indirectly by affecting the primary cellular antioxidant glutathione, resulting in ROS accumulation (Ochi 1996; Kim et al. 2014; Morris et al. 2014). Because menadione-induced ribosome oxidation can occur only in the cellular context, we applied the strategy illustrated in Figure 1B to study how the activity of ribosomes is altered by a menadione treatment of the cell culture. Exponentially growing yeast cultures (BY4741 wild-type yeast strain) were treated with 50 µM and 100 µM menadione for 2 h at 30°C or remained untreated. Cells were lysed by glass bead shearing, and lysates were centrifuged through a 20% glycerol cushion to pellet ribosomes (Fig. 1g,h). Ribosomal pellets were resuspended in buffer A by shaking at 21°C for 30 min (Figs. 1-i, 3), and an aliquot of the ribosome suspension containing 2 µg of RNA was analyzed by northern hybridizations with 18S and 25S rRNA-specific probes (Fig. 6A). Consistent with our previous studies (Shedlovskiy et al. 2017b), we detected the formation of degradation products predominantly in 25S rRNA, while 18S rRNA remained less affected by the drug treatment. Therefore, we used the 18S rRNA signal as a normalizer for the following experiments (Fig. 6B,C).
In vitro [35S]-Met/Cys incorporation into nascent polypeptides as a readout of translation

To determine how menadione exposure affects ribosome activity in translation, P180 ribosome pellets obtained from cells treated with menadione and untreated control cells (Fig. 1B) were resuspended as described above, and equal ribosome amounts, each containing 9 µg of RNA (Figs. 1-j, 6A), were added to the ribosome-free (S180) CFE fraction (Fig. 1-b). This fraction was generated from CFE that derived from untreated cells and contained endogenous cellular mRNAs. The translation reactions (15 µL), set in triplicate, were also supplied with an energy mix and amino acids with radioactively labeled methionine (Met) and cysteine (Cys). Aliquots of the reaction products (4 µL) were collected at 10 min, 30 min, and 60 min post reaction, and proteins were precipitated with TCA. The amounts of labeled nascent polypeptides (trapped on filters) generated from endogenous mRNAs present in the CFE-derived S180 were measured by scintillation counting. The obtained CPM (counts per minute) values were normalized by the 18S rRNA hybridization signal quantified by phosphorimaging in an aliquot of each reaction's ribosome input (Fig. 6A, bottom panel). As expected, amounts of [35S]-Met/Cys-labeled polypeptides increased over time when ribosomes isolated from untreated cells were used (Fig. 6B, red curve). Ribosomes extracted from cells treated with 50 µM menadione were also translationally active, generating ∼1.5 times less total protein over time than untreated ribosomes, while ribosomes isolated from cells treated with 100 µM menadione were inactive (Fig. 6B, blue and green curves). These data are consistent with our previous observation that 100 µM menadione treatment promotes rRNA fragmentation and significantly affects cell viability (Shedlovskiy et al. 2017b).
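The time-course readout above reduces to normalizing each scintillation count by the 18S rRNA phosphorimaging signal of that reaction's ribosome input, then comparing treated and untreated preparations. A hypothetical sketch of this bookkeeping, with invented CPM and 18S values chosen only to mimic the ∼1.5-fold difference described for 50 µM menadione:

```python
# Sketch of the [35S]-Met/Cys time-course normalization (cf. Fig. 6B).
# CPM and 18S phosphorimaging values below are invented for illustration.
def normalize_timecourse(cpm_by_time, ribo_18s_signal):
    """Return {time_min: CPM/18S} for one ribosome preparation."""
    return {t: cpm / ribo_18s_signal for t, cpm in cpm_by_time.items()}

untreated = normalize_timecourse({10: 4200, 30: 11800, 60: 21500}, 1000.0)
men_50uM = normalize_timecourse({10: 2700, 30: 7600, 60: 14100}, 980.0)

# Fold difference at the final time point (~1.5x in the experiment described)
fold = untreated[60] / men_50uM[60]
print(f"untreated / 50 µM menadione at 60 min: {fold:.2f}-fold")
```

Normalizing by the ribosome input rather than by reaction volume is what makes preparations with slightly different recoveries directly comparable.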
Using translation reporters as a readout of translation

As an alternative to [35S]-Met/Cys labeling of polypeptides translated from endogenous mRNA present in the CFE, translation reactions can be charged with mRNA for a reporter protein. To test this approach, we first supplied 400 ng of capped TAP-RLuc mRNA and 9 µg of ribosomes extracted from 100 µM menadione-treated or untreated cells (Fig. 6A) into the translation reactions (Fig. 1-j) and measured luminescence produced by the synthesized TAP-RLuc using the Renilla luciferase assay. As in [35S]-Met/Cys labeling, we normalized the luminescence signal by the amount of ribosomes (Fig. 6A) added to the reactions, thus generating the RLuc/18S rRNA ratios (Fig. 6C). Consistent with the previous data (Fig. 6B), the amount of the protein reporter synthesized in the reactions driven by menadione-treated ribosomes was significantly lower than that in reactions with ribosomes extracted from untreated cells (Fig. 6C). To further evaluate the processivity of ribosomes prepared from menadione-treated cells, we next tested translation of a dual firefly-nanoluciferase mRNA. Dual reporters are commonly used in translation analysis, as they represent a powerful experimental tool that allows data normalization by calculating the ratio of the ORF2 reporter over the ORF1 reporter (schematics in Fig. 6D), which helps in data interpretation and reduces experimental variability. We charged the ribosome-free translationally active lysate (S180, Fig. 1-b) with 400 ng of capped firefly-nanoluciferase (FLuc-nanoLuc, hereafter) mRNA reporter along with 9 µg of ribosomes extracted from cells treated with 25 µM, 50 µM, and 100 µM menadione. As a control, we used ribosomes extracted from untreated cells. Reaction products were analyzed by the Nano-Glo Dual-Luciferase Reporter assay from Promega. We measured the luminescent signal derived from firefly luciferase (Fig. 6D, left panel), then measured nano-luciferase luminescence (Fig. 6D, middle panel). Synthesis of both reporters correlated negatively with increasing menadione concentrations during treatment (Fig. 6D, left and middle panels). Interestingly, the ratio of nanoLuc to FLuc remained constant in every reaction (Fig. 6D, right panel), indicating that ribosomes that remain capable of engaging in translation in menadione-treated cells can fully synthesize the entire FLuc-nanoLuc reporter and thus are not significantly impaired in their ability to carry out elongation.

FIGURE 6. Ribosome translational activity decreases upon treatment with menadione or cisplatin in a dose-dependent manner. (A) Mid-log wild-type BY4741 cells grown in YPD were treated with 50 µM or 100 µM menadione for 2 h at 30°C or left untreated. Cells were lysed, and ribosomes were precipitated by ultracentrifugation through a 20% glycerol cushion as illustrated in Figure 1B. Ribosomal pellet P180 was resuspended in buffer A, and 2 µg of total RNA was analyzed by northern hybridizations with the indicated probes. (B) Ribosomes (9 µg RNA) prepared as described in A were added to translationally active ribosome-free lysate S180 prepared from CFE (Fig. 1A), along with amino acids containing labeled [35S]-Met/Cys. Reactions were incubated at 21°C, 4 µL aliquots were taken at indicated time points, and proteins were precipitated by TCA. Incorporation of [35S]-Met/Cys into nascent peptides was measured by scintillation counting; CPM (counts per minute) values were plotted as graphs. (C) Ribosomes (9 µg RNA) prepared as described in A were added to translation reactions containing S180 and 400 ng of capped mRNA reporter encoding TAP-RLuc (∼56 kDa). Reaction products were analyzed by the Renilla luciferase assay. The luminescent signals were normalized by phosphorimaging units from A and the resulting RLuc/18S rRNA ratios are presented as bar graphs. (D, top) Schematics for the firefly-nano luciferase reporter (∼81 kDa). (Bottom) Mid-log wild-type BY4741 cells grown in YPD were treated with 0, 25 µM, 50 µM, or 100 µM menadione for 2 h. Ribosomes were extracted, solubilized, and added (9 µg RNA) to translation reactions containing 400 ng of the capped dual firefly-nano luciferase reporter mRNA. Reaction products were analyzed by the Nano-Glo Dual-Luciferase Reporter assay; the NanoLuc/FLuc ratio is shown on the right panel. (E) Complete CFE was treated with cisplatin at the concentrations indicated in the figure. Ribosomes were purified from the drug-treated CFE by centrifugation through a glycerol cushion as described for Figure 5A. Ribosomes from the resuspended P180 (9 µg RNA) were added to untreated S180 charged with 300 ng of capped TAP mRNA reporter. Control reactions on the left contained ribosomes only (P180) or ribosome-free supernatant only (S180). TAP and ribosomal protein Rpl3 (control for ribosome amount) were detected by western blotting. In panels C-E, all translation reactions were incubated at 21°C for 90 min. In all graphs, error bars represent standard error of the mean (SEM) of three experiments.

Reduced translational efficiency of ribosomes from CFE treated with high-dose cisplatin

Cisplatin is a chemotherapeutic drug widely used to treat many types of cancer (Dasari and Tchounwou 2014). Although the main target of cisplatin is DNA, it can also modify RNA, in particular rRNAs (Mezencev 2015). Here, we examined the effects of cisplatin on the translational ability of ribosomes in the RSR system. Since cisplatin is membrane-impermeable, we treated CFE (instead of cells, as for menadione) with various concentrations of this drug for 2 h at 21°C. Untreated CFE was used as a control. To separate cisplatin-modified ribosomes from other CFE components and excess drug, ribosomes were precipitated by one-step centrifugation through a 20% glycerol cushion.
Ribosome-enriched pellets were washed and resuspended in 100 µL of buffer A with shaking at 21°C for 30 min; 9 µg of rRNA were placed into cisplatin-untreated ribosome-free lysate S180 (Fig. 1A-c). The reactions were charged with 300 ng of capped TAP mRNA reporter, incubated at 21°C for 90 min (Fig. 1f), and reporter protein synthesis was analyzed by western blotting using anti-TAP antibodies, with antibodies that detect Rpl3 used as an internal control. As expected, no protein signal was detected in reactions containing only ribosomes or only S180 (Fig. 6E, lanes 1-2), while addition of ribosomes to S180 resulted in strong TAP-reporter synthesis (Fig. 6E, lane 3). The ribosome translational ability declined only when cisplatin was used at high concentrations (0.5 and 1 mM, Fig. 6E, lanes 7-8), while treatment with 50, 100, and 250 µM had no discernible effect on the reporter levels (Fig. 6E, lanes 4-6). Previous studies have identified rRNA sites susceptible to cisplatin modifications. For example, Melnikov et al. (2016) reported 2.6 Å-resolution crystal structures of the bacterial 70S ribosome exposed to cisplatin, which demonstrated the drug's ability to stably intercalate into rRNA structures. Similarly, Rijal and Chow (2009) used in vitro and in vivo experimental systems to show that cisplatin can bind both purified 30S subunits and those in the 70S ribosomal complex. Furthermore, in yeast, cisplatin binds to RNA more efficiently than to DNA (Hostetter et al. 2012). Taken together, our data demonstrating reduced ribosomal activity upon exposure to cisplatin, along with the growing evidence that cisplatin binds RNA, help explain the mechanisms of cisplatin toxicity in cells.

DISCUSSION

We have devised a yeast-based biochemical approach (outlined in Fig. 1) that allows the efficient isolation of modified or damaged ribosomes from cells or cell-free extracts.
These ribosomes can be subsequently combined with undamaged, translationally active ribosome-free cell lysates, charged with an mRNA reporter or with radioactively labeled amino acids, after which the generated proteins are analyzed. The data presented in this report illustrate applications of this "Ribosome Separation and Reconstitution" (RSR) approach for studying the effects of damage to yeast ribosomes introduced both in vivo and in vitro. Through optimizations of the RSR procedure, we established conditions under which yeast ribosomes purified from CFE or cell cultures by one-step centrifugation remain stable (Fig. 2) and retain their translational activity (Figs. 3-6). As such, translation reactions reconstituted in vitro with ribosome-free lysates and pellet-derived ribosomes result in efficient translation of both endogenous CFE-derived transcripts (Fig. 6B) and various mRNA reporters (Figs. 3-6). In previous studies, cell-free reactions reconstituted from eukaryotic ribosomes and nonribosomal translation components obtained from different sources relied on the rabbit reticulocyte lysate to generate ribosome-free supernatants containing factors necessary for translation, with cultured cells (Panthu et al. 2015; Penzo et al. 2016) or tissues and organs (Panthu et al. 2015) serving as ribosome donors. The mammalian protocols used overall similar conditions to ours to pellet ribosomes, namely, centrifugation at 140,000g for 5 h (Penzo et al. 2016) or at 240,000g for 2 h 15 min (Panthu et al. 2015). A significantly higher speed for ultracentrifugation of rabbit reticulocytes was reported by Rau et al. (1998), wherein ultracentrifugation at 420,000g for 20 min was sufficient to separate cytosolic components from ribosomes and ribosome-associated proteins.
Examining rRNAs and r-protein Rpl3 as a readout of ribosomal content in the S180 supernatant generated by centrifuging yeast CFE at 180,000g for 2 h indicates that these conditions are sufficient to generate yeast lysate devoid of ribosomes (Figs. 2A, 4C, 6E), while tRNAs remain largely in the S180 supernatant after the 180K-centrifugation (Fig. 2A). Functional assays further support these data, as no protein reporter products were detected in translation reactions that lacked exogenously added ribosomes (Figs. 4B, 5A,B, 6E). Our experiments highlight several critical parameters for a comparative analysis of different ribosome preparations with the RSR approach. First, we find that ribosome pellets should be resuspended under conditions that produce a homogeneous ribosome suspension; the solubilization temperature of the ribosome-enriched P180 (Fig. 1b) should not exceed 21°C for BY4741-derived ribosomes (but may need to be optimized for other yeast strains). Second, the amounts of ribosomes added to the translation reactions must be carefully controlled. While the RSR format of cell-free translation reactions tolerates different ratios of ribosomes to the ribosome-free fraction (Fig. 4B), it is important to maintain the same ratio in all reactions in a given set. One useful approach to control the amount of ribosomes added to a reaction is through northern blotting-based quantification of 18S rRNA (Figs. 4, 6; Supplemental Figs. S2, S3). Third, the mRNA concentration and reaction time sufficient to obtain detectable amounts of the translated product while still within the acceptable synthesis range (Fig. 4A,D; Supplemental Figs. S2, S3) may vary for individual mRNA reporters and need to be optimized. Thus, it is important to perform mRNA titration and time kinetics experiments to determine the optimal range of these parameters for each new batch of reagents.
Interestingly, cell-free protein translation using ribosomes isolated through the RSR protocol was consistently more efficient in our hands than translation with unfractionated CFE (Fig. 4A; Supplemental Fig. S2B, upper panels). The reasons for this unexpected observation are currently unclear and will require additional experimentation to explain. Compared to ribosomes purified by pelleting from CFE (directly or through a glycerol cushion), those extracted from cells via the conventional glass bead-beating procedure were approximately an order of magnitude less efficient in the translation reaction (Fig. 5B). Thus, the cryogenic lysis technique used in our CFE preparation (Trainor et al. 2021b) appears to be the preferable way to maintain the ribosomes' translational activity during their isolation. The main disadvantage of this method is that it requires a large sample volume, is relatively laborious, and is limited by the number of samples that can be processed simultaneously. Further optimizing the buffer composition for bead-beating lysis could be a reasonable strategy to improve the recovery of active ribosomes from cells. Another currently untested possibility is to apply a spheroplasting-based cell lysis approach, which requires enzymatic digestion of the cell wall followed by lysis using osmotic pressure, freeze-thawing, or other cell-disruption strategies (Darling et al. 1969; Mann et al. 1972; Izawa and Unger 2017). In this study, we tested RSR with ribosomes subjected to menadione-induced oxidative stress in the intracellular environment and to an in vitro treatment with the chemotherapeutic drug cisplatin. The results presented in Figure 6 demonstrate that both conditions affected the ribosomes' total translational activity in a dose-dependent manner.
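The dual-reporter logic behind this dose-response analysis (Fig. 6D) is that the ORF2/ORF1 (nanoLuc/FLuc) ratio cancels out initiation-level differences: fewer active ribosomes lower both signals, but an intact elongation step keeps the ratio constant. A hypothetical sketch with invented readings, not the published data:

```python
# Sketch of dual-luciferase (nanoLuc/FLuc) ratio normalization (cf. Fig. 6D).
# Luminescence readings below are invented for illustration.
def nanoluc_fluc_ratio(fluc, nanoluc):
    """ORF2/ORF1 ratio for one reaction; inputs are raw luminescence."""
    if fluc <= 0:
        raise ValueError("FLuc signal must be positive to form a ratio")
    return nanoluc / fluc

# Treated ribosomes make less of BOTH reporters, but if elongation is
# intact the ratio stays constant, as observed for menadione in Fig. 6D.
untreated_ratio = nanoluc_fluc_ratio(fluc=8.0e5, nanoluc=2.4e5)
treated_ratio = nanoluc_fluc_ratio(fluc=2.0e5, nanoluc=6.0e4)
print(untreated_ratio, treated_ratio)  # equal ratios: processivity unchanged
```

A drop in the ratio for treated samples, by contrast, would indicate ribosomes stalling or dropping off before completing ORF2.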
How ribosome modifications affect translation of challenging sequences, such as mRNAs with stalls, rare codon stretches, or programmed ribosome frameshifting, can be addressed in future studies using the RSR methodology in combination with dual-luciferase reporters (Fig. 6D). In conclusion, the RSR protocol described here provides an effective way to assess the translational competency of chemically modified or damaged ribosomes without the confounding damage to other translation factors. Data from our laboratory demonstrate that this approach can also be adapted to studying the translation properties of genetically altered ribosomes, such as those that contain mutations in r-proteins (B.M.T. and N.S., personal observations). Thus, we anticipate that RSR will be broadly applicable for dissecting the translational consequences of diverse types of modifications in ribosome composition and structure.

MATERIALS AND METHODS

Yeast strain, medium, and yeast culture treatment

We used YPD medium (1% yeast extract, 2% peptone, and 2% dextrose) that was sterilized by filtration through a 0.2 µm PES membrane filter system ("Rapid-Flow" from Thermo Scientific). Wild-type BY4741 (MATa his3-1 leu2-0 met15-0 ura3-0) was purchased from Open Biosystems. For experiments with menadione treatment, overnight BY4741 yeast cultures were diluted with fresh YPD to an A600 of ∼0.3 and grown for an additional 2-4 h at 30°C to an A600 of ∼0.6-0.7. Various concentrations of menadione (indicated in figures and figure legends) were added to the cultures; cells were grown for an additional 2 h at 30°C with agitation, harvested, washed with water, and lysed.

Plasmids

pYes2 was purchased from Invitrogen. To generate pYes-TAP, we amplified the TAP sequence using the pBS1761 plasmid (a kind gift of Dr. Mike Henry) as a template with a forward primer containing the BamHI site and a reverse primer containing a stop codon followed by the XhoI site; the TAP PCR product was cloned into pYes2 between BamHI and XhoI.
The same template and forward primer were used to amplify a no-stop TAP coding sequence, in which the reverse primer annealed upstream of the stop codon and contained the XhoI site. The TAPNoStop PCR product was cloned into pYes2 between BamHI and XhoI, resulting in the pYes-TAPNoStop construct. The Renilla luciferase gene was amplified by PCR from pJD375 with a forward primer containing the XhoI site and a reverse primer containing the XbaI site, and cloned into pYes-TAPNoStop between the XhoI and XbaI sites, resulting in the pYes-TAP-RLuc fusion. The sequences of pYes-TAP and pYes-TAP-RLuc were verified by sequencing. To generate the dual-luciferase reporter construct pYes-FLuc-nanoLuc, a firefly luciferase gene was amplified using the pJD375 plasmid as a template (a kind gift of Dr. Jonathan Dinman), with the forward primer containing the HindIII site and the reverse primer containing the BamHI site, whereby the reverse primer was designed to anneal upstream of the firefly luciferase gene stop codon. The PCR product was cloned into pYes2 between BamHI and HindIII, resulting in the pYes-FLucNoSTOP construct. Nano-luciferase was amplified using the pF4Ag NanoLuc plasmid from Addgene (cat# 137777) as a template. The forward primer contained the BamHI site, while the reverse primer contained the XhoI site. The PCR product was cloned into the pYes-FLucNoSTOP construct between the BamHI and XhoI sites. The sequence of pYes-FLuc-nanoLuc was verified by sequencing.

RNA isolation, northern blotting, and signal quantification

To isolate RNA from CFE, S180, P180, and from in vitro translation/Renilla luciferase reactions, we used TRI REAGENT-LS according to the manufacturer's recommendations. To isolate RNA from gradient fractions, each fraction was treated with 100 µg/mL proteinase K in the presence of 1% SDS and 10 mM EDTA for 20 min at 42°C, followed by phenol/chloroform extraction and isopropanol precipitation.
All RNA pellets were resuspended in FAE solution (formamide, 10 mM EDTA) for 15 min at 65°C with shaking. RNA was separated on a 1.2% formaldehyde-containing agarose gel as described in Mansour and Pestov (2013). Prior to transfer onto a Nylon membrane (GE Healthcare, cat# NS0921), gels were stained with SYBR Gold and scanned using a Typhoon 9200 imager (GE Healthcare) at 532 nm to visualize RNA. For hybridizations, we used a [32P]-labeled probe specific for the gene encoding TAP (5′-GCCGAATTCTCCCTGAAAA-3′), a [32P]-labeled probe y540 against 25S rRNA (5′-TCCTACCTGATTTGAGGTCAAAC-3′), or a [32P]-labeled probe y500 against 18S rRNA (5′-AGAATTTCACCTCTGACAATTG-3′). We used the Typhoon 9200 in phosphorimaging mode to detect the radioactive signal, which was analyzed with ImageQuant software (GE Healthcare). For quantification, the volume of the hybridization signal corresponding to the RNA species of interest was converted to phosphorimaging units, and the background (average image background) was subtracted.

CFE (cell-free extract) preparation

The cryogenic lysis-based method for CFE preparation is described in detail in Trainor et al. (2021b). In brief, cells of the Saccharomyces cerevisiae BY4741 strain were grown in 1 L of YPD medium to OD600 ∼ 0.8, harvested by centrifugation, washed twice in H2O and twice in freshly prepared buffer A [20 mM Hepes-KOH (pH 7.4), 100 mM KOAc, 2 mM Mg(OAc)2, 2 mM DTT]. The weight of the cell pellet was measured, and the pellet was resuspended in buffer A containing 8% mannitol in a 2:3 volume/cell weight ratio. The cell slurry was dripped directly into liquid nitrogen to form small ice beads. Frozen yeast/buffer beads were transferred into a prechilled grinding vial containing a metal rod and placed into a SPEX freezer mill chamber filled with liquid nitrogen. Yeast/buffer beads were powdered using the following setting: 1 min of grinding, 1 min off, eight cycles in total.
The powdered cells were transferred into a prechilled 10.4 mL ultracentrifuge tube (Beckman), allowed to thaw on ice, and the yeast suspension was centrifuged in a Beckman ultracentrifuge for 15 min at 4°C at 30,000g using a fixed-angle Beckman rotor Type 80 Ti. The clear phase between the pellet and the cloudy upper lipid layer was collected (∼6 mL) and centrifuged again for 35 min at 4°C at 100,000g in a Beckman rotor Type 80 Ti. Once again, the clear phase between the pellet and the cloudy upper layer was collected, and 2.5 mL was applied to a gel filtration column PD10 Sephadex G-25 (GE Healthcare) preequilibrated in buffer A containing 20% glycerol at 4°C. For elution, also performed at 4°C, we used 5 mL of buffer A containing 20% glycerol and collected 10 fractions (500 µL each). The RNA content in each fraction was measured spectrophotometrically, and fractions with at least 60%-75% of the highest RNA concentration were pooled, aliquoted into Eppendorf tubes (100 µL aliquots), and frozen in liquid nitrogen for storage at −80°C. In this procedure, we did not use protease or RNase inhibitors.

PCR and RNA reporter preparation

PCR reactions and reporter RNA synthesis were performed as described in Trainor et al. (2021b) with minor modifications. Briefly, a sequence corresponding to a reporter gene cloned in pYes2 as described in the "Plasmids" section was amplified with DreamTaq polymerase using forward (5′-CGGATCGGACTACTAGCAGCTG-3′) and reverse (5′-TTCATTAATGCAGGGCCGCAAATT-3′) primers that anneal upstream and downstream from the T7 promoter and the CYC1 terminator elements on pYes2, respectively. PCR products were concentrated using ZYMO columns. m7G-capped mRNA was generated using 1 µg of PCR-generated DNA template and the mMESSAGE mMACHINE T7 Transcription kit, according to the manufacturer's recommendations.
The transcription reaction was carried out at 37°C for 2 h.

Ribosome isolation for RSR

One aliquot of CFE (100 µL, RNA concentration ∼5.6 µg/µL) was centrifuged at 180,000g for 2 h in a Beckman TLA55 rotor (55,000 rpm) at 4°C. The supernatant was collected, transferred to a new tube, and stored on ice (S180). The pellet was rinsed with buffer A [20 mM Hepes-KOH, pH 7.4; 100 mM KOAc; 2 mM Mg(OAc)2; and 2 mM DTT] (Wu and Sachs 2014), resuspended in 100 µL of the same buffer by agitation at 21°C for 30 min (or as indicated in figure legends), and centrifuged in a tabletop centrifuge at 21,000g for 15 min at 4°C, and the ribosome suspension was transferred into a new tube. The RNA concentration was measured. Alternatively, we centrifuged 100 µL of complete CFE through a 20% glycerol cushion (500 µL) prepared in buffer A at 180,000g for 2 h in the TLA55 rotor (55,000 rpm) at 4°C. The supernatant was carefully discarded, and the ribosomal pellet was processed as described above. To isolate ribosomes from cells, cells were collected by centrifugation in a preparative centrifuge at 2200g (we used an Eppendorf centrifuge 5810R equipped with the A-4-62 rotor set at 3300 rpm for 3 min at 4°C). Cell pellets were washed twice with buffer A supplemented with 200 µg/mL of heparin (used as an RNase inhibitor) and lysed by 10-12 cycles of 30 sec vortexing followed by a 30 sec incubation on ice in the presence of 425-600 µm glass beads (Sigma, cat# G8772). Cell lysates were cleared by centrifugations at 3200g for 10 min and at 21,000g for 15 min in a tabletop centrifuge at 4°C; the supernatant was layered onto a 20% glycerol cushion (500 µL) prepared in buffer A and centrifuged at 180,000g for 2 h in the TLA55 rotor (55,000 rpm) at 4°C. The supernatant was discarded, and the ribosomal pellet was processed as described above for CFE.
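Because the RNA concentration of each resuspended pellet is measured before assembling reactions, the volume to pipette for a fixed ribosome input (e.g., 9 µg of RNA in the 1.5 µL ribosome slot of a 15 µL reaction) is a simple division with a volume-limit check. This is an illustrative helper with an invented concentration, not part of the published protocol:

```python
# Hypothetical helper for delivering a target ribosome amount (µg RNA)
# from a ribosome suspension of known concentration. Values are invented.
def volume_for_target(target_ug, conc_ug_per_ul, max_ul):
    """Volume (µL) of ribosome suspension delivering target_ug of RNA."""
    vol = target_ug / conc_ug_per_ul
    if vol > max_ul:
        raise ValueError(
            f"Need {vol:.2f} µL but only {max_ul} µL fits the reaction; "
            "resuspend the pellet in a smaller volume."
        )
    return vol

# e.g., a pellet resuspended to 6.0 µg/µL, 1.5 µL slot in a 15 µL reaction
print(f"{volume_for_target(9.0, 6.0, max_ul=1.5):.2f} µL")
```

The check encodes the practical constraint noted above: keeping the ribosome-to-S180 ratio identical across a reaction set requires that every preparation be concentrated enough to fit its target amount into the same small volume.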
Translation reactions using RSR format

For RSR translation reactions with mRNA reporters, we applied the protocol for translation reactions described for complete CFE (see above), with the exception that instead of 7.5 µL of CFE, we used 6 µL of ribosome-free supernatant (S180) and 1.5 µL of ribosomes (concentrations are indicated in figure legends and in the text). Reactions were incubated at 21°C for 30-180 min (indicated in figure legends). For RSR translation reactions with endogenous transcripts present in CFE, we used 6 µL of ribosome-free supernatant (S180), 1.5 µL of ribosomes (concentrations are indicated in figure legends and in the text), and 7.5 µL of 2× master mix [40 mM Hepes-KOH (pH 7.6), 20 μM of essential amino acids minus methionine and cysteine, 4 mM Mg(OAc)2, 100 mM KOAc, 40 mM creatine phosphate, 0.12 U creatine kinase, 4 mM DTT, 2 mM ATP, 0.2 mM GTP, 0.8 U RiboLock, and 1 mCi EasyTag EXPRESS [35S]Met/Cys Protein Labeling mix]. Reactions were incubated at 21°C. At 5, 30, and 45 min, 4 µL aliquots were taken from the reaction tube and added to 96 µL of 1 M NaOH; tubes were incubated at 37°C for 10 min (to hydrolyze RNA), mixed with 900 µL of ice-cold 25% TCA supplemented with 2% casamino acids, and incubated on ice for 30 min to precipitate the translation products. Precipitation mixtures were applied on Whatman GF/A glass fiber filters, washed six times with 5% TCA, once with 70% ethanol, air-dried, and placed in a scintillation vial. A total of 2 mL of scintillator was added into each vial and incorporation of [35S]Met/Cys into polypeptides was determined by counting in a scintillation counter.

Cisplatin treatment of ribosomes

Aliquots of CFE were treated with various concentrations of cisplatin (as indicated in Figure 6E) at 21°C for 2 h. Ribosomes were next isolated for RSR as described above using centrifugation through the 20% glycerol cushion.
Resuspended ribosome pellets (9 µg of RNA) were added to 15 µL translation reactions containing 300 ng of the capped TAP mRNA reporter. Reactions were incubated at 21°C for 90 min and proteins were analyzed by western blotting using PAP to detect TAP. Antibodies against Rpl3 were used to control loading.

Luciferase assays and statistical analysis

We used the Renilla Luciferase Assay System from Promega (cat# E2810) and the Nano-Glo Dual-Luciferase Reporter Assay System (cat# N1610) according to the manufacturer's protocol. The luminescent signal was measured on a GLOMAX 20/20 luminometer. Statistical analysis was performed by one-way ANOVA with GraphPad PRISM 9. For RNA and protein extraction from the luciferase reaction, 100 µL of TRI REAGENT-LS reagent were added to 100 µL of the luciferase reaction. Samples were stored at −80°C prior to processing according to the manufacturer's recommendations. RNA pellets were resuspended in 12 µL of FAE (formamide, 10 mM EDTA) (Shedlovskiy et al. 2017a), and equal volumes of the dissolved RNA (5 µL) were analyzed by northern hybridizations using [32P]-labeled probes specific for 18S and 25S rRNAs. The radioactive signals corresponding to the rRNAs were measured as phosphorimaging units to obtain RLuc/18S rRNA and RLuc/25S rRNA ratios, where RLuc is the Renilla luciferase luminescence units, and 18S rRNA and 25S rRNA represent the phosphorimaging units corresponding to the full-length rRNAs in the same reaction.

Western blotting

To analyze in vitro translation reaction products, proteins were isolated from 15 µL of the translation reactions using TRI REAGENT-LS according to the manufacturer's recommendations. Protein pellets were analyzed as described in Trainor et al.
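The normalization described above (Renilla luminescence divided by the phosphorimaging signal of each full-length rRNA) is simple arithmetic; a minimal sketch with hypothetical instrument readings, not values from the paper:

```python
def normalized_luciferase(rluc_units, rrna_18s, rrna_25s):
    """Normalize Renilla luminescence to rRNA phosphorimaging signal,
    giving RLuc/18S rRNA and RLuc/25S rRNA ratios for one reaction."""
    return rluc_units / rrna_18s, rluc_units / rrna_25s

# Hypothetical luminometer and phosphorimager readings (arbitrary units)
r18, r25 = normalized_luciferase(rluc_units=4.2e5, rrna_18s=1.4e4, rrna_25s=2.1e4)
print(round(r18, 1), round(r25, 1))  # 30.0 20.0
```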
Investigating gene-environment interaction on attention in a double-hit model for Autism Spectrum Disorder

Autism Spectrum Disorder (ASD) is a neurodevelopmental behavioral disorder characterized by social, communicative, and motor deficits. There is no single etiological cause for ASD; rather, there are various genetic and environmental factors that increase the risk for ASD. It is thought that some of these factors influence the same underlying neural mechanisms, and that an interplay of both genetic and environmental factors would better explain the pathogenesis of ASD. To better appreciate the influence of gene-environment interaction on ASD-related behaviours, rats lacking a functional copy of the ASD-linked gene Cntnap2 were exposed to maternal immune activation (MIA) during pregnancy and assessed in adolescence and adulthood. We hypothesized that Cntnap2 deficiency interacts with poly I:C MIA to aggravate ASD-like symptoms in the offspring. In this double-hit model, we assessed attention, a core deficit in ASD due to prefrontal cortical dysfunction. We employed a well-established attentional paradigm known as the 5-choice serial reaction time task (5CSRTT). Cntnap2-/- rats exhibited greater perseverative responses, which is indicative of repetitive behaviors. Additionally, rats exposed to poly I:C MIA exhibited premature responses, a marker of impulsivity. The rats exposed to both the genetic and environmental challenge displayed an increase in impulsive activity; however, this response was only elicited in the presence of an auditory distractor. This implies that exacerbated symptomatology in the double-hit model may be situation-dependent and not generally expressed.
Introduction

Autism Spectrum Disorder (ASD) is a neurodevelopmental behavioral disorder affecting approximately one in 100 children worldwide [1]. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criterion for ASD focuses on deficits in social behavior and communication, as well as restricted interests and repetitive actions [2]. Intellectual disabilities and abnormal processing of sensory input are also seen in individuals with ASD [2]. Accordingly, past research has reliably shown that individuals with ASD demonstrate difficulty accurately perceiving social cues and exhibit inflexible behavior [3-5]. Inflexible behavior is commonly seen through repetitive motor actions and strict custom routines [6,7]. Presently, ASD is predominantly diagnosed through behavioral assessment [2]. Due to the number of potential risk factors, both genetic and environmental, it is valuable to investigate their roles in the emergence of ASD-like symptomatology.

The emergence of genetic sequencing technology has helped researchers elucidate the bases of neurobiological disorders that were previously categorized as idiopathic [8]. Through large-scale sequencing cohorts, researchers have discovered that the genetic basis of ASD is widely heterogeneous. ASD can be caused by single-gene (monogenic) mutations. For instance, a mutation in the X-linked FMR1 gene, commonly known as Fragile X Syndrome, accounts for approximately 2% of identified cases [9]. Contrarily, rare and common variants are not necessarily pathogenic [10,11]. The impact of such risk factors is dependent on the type of variant, the modified genes, and the compounded effect of additional genetic and environmental impacts [12,13]. For instance, a large-scale genome-wide association study discovered 102 common variants associated with ASD; however, they only account for around 3.5% of the total heritability of the disorder [14].
Beyond genetics, many sources discuss the influence of environmental factors on the etiological basis of ASD; these include parental age, maternal medication use, and post-natal family environment [15,16]. More recently, the maternal inflammatory response, triggered by a viral infection, has been found to harmfully affect fetal development [17]. While certain rare pathogens pose distinctive threats to the brain, the infections caused by several viruses result in a similar risk for ASD development. This implies that the induced maternal immune activation (MIA) is the core mechanism impacting the fetus, as opposed to the virus itself [18-20]. The MIA hypothesis has been validated by activating the immune system with a range of pathogens during pregnancy. Subsequently, the offspring are assessed for corresponding symptomatology that is attributed to human neurodevelopmental disorders [19,21]. A greater understanding of the effects of maternal immune reaction on fetal brain development is imperative, especially with the ongoing after-effects of the COVID-19 pandemic.
With the growing repertoire of genetic and environmental risk factors attributed to ASD, more studies have begun characterizing double-hit gene-environment models for neurodevelopmental disorders [22,23]. For example, a national surveying study in Sweden revealed that ASD heritability is at approximately 50%, reinforcing the proportionally impactful role of the environment [24]. Additionally, in a study centered around maternal metabolic and inflammatory conditions, children with diabetic mothers had an increased risk of ASD onset compared to children with a genetic predisposition alone [25]. Nonetheless, each human is genetically and behaviorally dissimilar, which obscures potential risk-factor interactions [26]. Animal models alone can substantiate the claim that environmental risk factors exacerbate genetic predispositions [26]. The present study aimed to characterize a double-hit gene-environment rodent model for ASD, particularly focusing on the process of attention.

The genetic factor in our present model is the loss of function of the Contactin-associated protein-like 2 (CNTNAP2) gene. CNTNAP2 is highly expressed in brain areas implicated in ASD and poses a risk through both common and rare variations [27,28]. CNTNAP2 codes for a cell adhesion molecule involved in synaptic formation, cortex organization and neuronal migratory function [29,30]. Consequences of CNTNAP2 loss-of-function mutations were uncovered by Strauss et al.
[31] in a population of Old Order Amish children. Out of the affected population, 70% exhibited ASD-associated symptoms [31]. The preclinical Sprague-Dawley rodent model with a homozygous gene knockout (Cntnap2 KO) displays endophenotypes including hyperactivity, repetitive behaviors, and reduced vocalizations [32,33]. For the environmental challenge, we used polyinosinic:polycytidylic acid (poly I:C) to trigger an antiviral-like immune response in pregnant dams. Poly I:C elicits its effects through toll-like receptor 3 (TLR3), a highly conserved innate immune receptor [34,35]. Due to the proven efficacy of the Sprague-Dawley poly I:C model, we studied the effect on attention, an ASD-related behavior, in combination with the Cntnap2 risk factor.

Visuospatial attention is an integral characteristic when assessing social-communicative deficits attributed to ASD [36]. Selective attention and orienting have a critical role in cognitive development and can even play a role in regulating emotional states in humans [37]. Prior studies have found that children with ASD have lessened attention, particularly to facial cues; however, most rodent models focus on non-salient visual cues [38,39]. Based on previous reports on attentional deficits in ASD, we hypothesized that Cntnap2 deficiency interacts with poly I:C immune activation to exacerbate ASD-related attentional changes exhibited by each model alone. We employed the 5-choice serial reaction time task (5CSRTT), a standard rodent behavioral task used to assess attention [40]. The 5-CSRTT was developed as the rodent equivalent to Leonard's choice reaction time task in the Cambridge Neuropsychological Test Automated Battery [41]. We predicted that the double-hit model would display poorer performance on the 5-CSRTT compared to either single-hit model.
Animals

The study was conducted with 19 wildtype (WT) and 22 homozygous knockout (Cntnap2 KO) male Sprague-Dawley rats. The n-value may differ in the displayed test results as certain rats were seizure prone and exhibited atypical baseline behavior. Horizon Discovery (Boyertown, PA; originated at SAGE Laboratories, Inc. with Autism Speaks; the line is presently upheld by Envigo) provided heterozygous breeders with a five base-pair deletion at exon six in the Cntnap2 gene, created by zinc-finger nuclease target site CAGCATTTCCGCACC|aatgga|GAGTTTGACTACCTG. All genotypes tested in this experiment were littermates acquired from heterozygous crossings.

Pregnant dams were exposed to either saline or poly I:C on gestation day (GD) 9.5. This timepoint corresponds to the first trimester in humans, where the risk of MIA on ASD onset in the offspring is highest [42]. On GD 9.5, pregnant females (n = 16; n poly I:C = 8, n saline = 9) underwent brief isoflurane anesthesia. The dams were injected with either 0.9% saline or 4 mg/kg poly I:C (Sigma Lot# 037M4011V) into the tail vein. From weaning (PD 21), the offspring were placed in open cages, provided with ad libitum food and water, and were put on a 12-hour light/12-hour dark cycle. Polycarbonate huts and crinkled paper supplemented the cage environment. The offspring were housed in same-sex pairs prior to and during behavioral testing. All behavioral testing was conducted in the light cycle (7:00 a.m. to 7:00 p.m.).

5-choice serial reaction time task (5-CSRTT)

At PD 100, the rats were food restricted to a target weight of 90% of their normal weight. Through operant conditioning, the rats were trained to locate and report a passing visual stimulus shown pseudo-randomly in one of five sites on a horizontal mask of apertures. The detailed protocol for pretraining and baseline acquisition is well documented in a previously published protocol by Mar and colleagues [43].
Task outline for 5-CSRTT protocol. The default house-light setting in the chamber is off for all training and testing sessions. At the beginning of every session, a pellet (50:50 food:sugar) is dropped in the food tray to motivate the rat to begin the task. To initiate the trial, the rat must nose poke inside the food tray and activate the sensor. There is a 5 s delay prior to stimulus presentation. The rat must then nose poke the illuminated stimulus panel to collect a food reward at the tray. If the correct panel is selected, a pellet is dropped with a pure tone. If an incorrect panel is selected, the house light is turned on with a white tone as punishment. After a 5 s intertrial interval, the tray illuminates to instigate the start of the subsequent trial. The stimulus duration is dependent on the training or testing stage. When the rat does not interact with the screen during stimulus presentation, the trial is considered an omission. If the rat nose pokes the screen during the delay period, the trial is labelled as a premature response. Lastly, if the rat continuously nose pokes the panel after a correct choice, it is labelled as a perseverative response.
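The trial outcomes described above amount to a small decision rule. The sketch below is a simplified illustration only: it keys on the first screen poke, collapses the stimulus window into the poke time, and its function name and arguments are hypothetical, not part of the published task code.

```python
def classify_trial(poke_time, poke_panel, stimulus_panel, delay=5.0,
                   extra_pokes_after_correct=0):
    """Classify one 5-CSRTT trial from the first screen poke.
    Times are seconds after trial initiation; poke_time None = no poke."""
    if poke_time is None:
        return "omission"               # no response to the stimulus
    if poke_time < delay:
        return "premature"              # poked before stimulus onset
    if poke_panel != stimulus_panel:
        return "incorrect"              # wrong aperture: house light + tone
    if extra_pokes_after_correct > 0:
        return "perseverative correct"  # kept poking after the correct choice
    return "correct"

print(classify_trial(3.2, 2, 4))                                  # premature
print(classify_trial(5.3, 4, 4, extra_pokes_after_correct=3))     # perseverative correct
```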
Pretraining and baseline training period. Pretraining allowed the rat to acclimate to the touchscreen chamber and screen. The rats also learned the fundamentals of the system; this includes habituation, initiating the task, and associating a correct response with a sugar-pellet reward [43]. The training period was divided across 13 baseline levels. The session length for all levels was set at 60 minutes, with a total of 60 trials per session. The intertrial interval was set at five seconds across all training stages. The latency or delay prior to the stimulus was also set at five seconds across all training stages. The stimulus duration was set at 60 seconds and was gradually decreased to 0.5 seconds at Baseline Stage 13. To move past a stage, the rat had to achieve greater than 80% accuracy and omit less than 20% of trials. The results display the findings from Baseline 13 (0.5 second stimulus duration) as it corresponds to the duration value used in the test paradigms.

This study employed a fixed training period structure. Once the rats completed pretraining, they had 40 days to complete the training phase prior to testing. If a rat completed the training in less than 40 days, the testing paradigms were introduced earlier. This structure ensured that rats showing no improvement did not overtrain at lower baseline levels indefinitely. Additionally, this structure allowed fast learners to complete the training phase without being overtrained, a limitation that can be seen in a group training structure where all rats would be trained until the very last rat completed baseline training. Rats unable to complete baseline training independently were moved to the final baseline training stage on day 40. Extreme outliers were removed prior to the initiation of the testing protocol.
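The stage-advancement criterion above (>80% accuracy, <20% omissions) reduces to a one-line check. In this sketch, accuracy follows the definition used later in the paper (correct responses over all trials); the trial counts are hypothetical.

```python
def advance_stage(n_correct, n_incorrect, n_omitted):
    """Baseline advancement rule: >80% accuracy and <20% omissions,
    with accuracy taken as correct responses over all trials."""
    total = n_correct + n_incorrect + n_omitted
    if total == 0:
        return False
    return (n_correct / total > 0.80) and (n_omitted / total < 0.20)

print(advance_stage(n_correct=50, n_incorrect=5, n_omitted=5))    # passes both criteria
print(advance_stage(n_correct=40, n_incorrect=5, n_omitted=15))   # fails: too many omissions
```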
Test paradigms. The first test was a Short-Delay variation. This test randomized the delay prior to stimulus onset at values less than the standard 5-second delay. This variation assessed global attention, providing the rat with less time to prepare for stimulus onset. The second test paradigm was a Long-Delay variation. The delays prior to stimulus onset were set at values greater than the standard 5-second delay. This test assessed inhibition and impulsivity because the rat was provided more opportunity to prematurely respond prior to stimulus onset. The third test paradigm was the Distraction variation. This test employed an auditory distractor noise of 105 dB at randomized time points between initiation and stimulus presentation. This test measured selective attention. Each of the three test paradigms was set at 100 trials to be completed within 60 minutes or less.

Measures for attention processing. The following measures were used to assess the rats' executive function: response accuracy (number of correct over all trials) as a measure of attentional selectivity, omissions (trials with no response) to measure sustained attention, and premature responses (nose poke prior to stimulus) or perseverative correct responses (continued response after correct action feedback) as measures of impulsivity and compulsion, respectively.
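The four measures just defined can be aggregated from per-trial outcome labels. A minimal sketch, assuming trials have already been labeled with the outcome categories described above; counting perseverative-correct trials as correct responses in the accuracy numerator is an assumption of this sketch, and the trial counts are invented for illustration.

```python
from collections import Counter

def session_measures(outcomes):
    """Summarize one 5-CSRTT session from per-trial outcome labels.
    Accuracy is correct responses over all trials, per the definition above."""
    n = len(outcomes)
    c = Counter(outcomes)
    correct = c["correct"] + c["perseverative correct"]
    return {
        "accuracy": correct / n,
        "omission_rate": c["omission"] / n,
        "premature": c["premature"],
        "perseverative": c["perseverative correct"],
    }

# Hypothetical 60-trial session
trials = (["correct"] * 42 + ["perseverative correct"] * 6 +
          ["incorrect"] * 4 + ["omission"] * 5 + ["premature"] * 3)
m = session_measures(trials)
print(m["accuracy"], m["premature"], m["perseverative"])
```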
Statistics

All data are mean ± standard deviations unless otherwise stated. Data were analyzed using a two-way analysis of variance (ANOVA) for the baseline task acquisition stage. The two between-subjects factors for the two-way ANOVA were genotype (WT, Cntnap2 KO) and prenatal exposure (saline, poly I:C). A three-way mixed ANOVA (two between-subjects factors and one within-subjects factor) was used for the three test paradigms: Short Delay, Long Delay, and Distraction. The three-way ANOVA was followed by the Bonferroni pairwise comparison post hoc test, utilizing the IBM SPSS statistics software. Statistical significance was placed at p < .05.

There was also no significant interaction effect between genotype and prenatal exposure on average accuracy for baseline stage 13 (F(1, 42) = 0.420, p = 0.521, partial η² = 0.011). The measure of omission was used to assess sustained attention during task acquisition. There was no main effect seen by either risk factor alone (genotype main effect: F(1, 42) = 2.489, p = 0.123, partial η² = 0.061; prenatal exposure main effect: F(1, 42) = 0.037, p = 0.849, partial η² = 0.001; Fig 1B). No significant interaction between the models was seen (F(1, 42) = 0.238, p = 0.628, partial η² = 0.006). 5CSRTT baseline training revealed a significant increase of perseveration in Cntnap2 KO rats, but no significant differences in the other measures. This indicates that neither Cntnap2 KO nor MIA has a severe impact on learning in the 5CSRTT task.

5CSRTT testing: Short-delay paradigm

The Short-Delay paradigm altered the onset of the stimulus presentation after trial initiation. Specifically, the stimulus was presented either 0.5, 1.5, 3.0, or 4.5 seconds after trial initiation pseudo-randomly; the standard delay/latency period on all baseline stages was a 5.0 second delay.
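The partial η² values reported above alongside each F statistic can be recovered from F and its degrees of freedom via the standard identity η²p = (F · df₁) / (F · df₁ + df₂). A quick check against one of the reported interaction terms:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Genotype x exposure interaction for baseline omissions: F(1, 42) = 0.238
print(round(partial_eta_squared(0.238, 1, 42), 3))  # 0.006, as reported
```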
The Long Delay 5CSRTT revealed an interaction of genotype and prenatal exposure, with significantly increased premature responses in poly I:C offspring with Cntnap2 KO, indicating an accumulating effect of a genetic deletion and MIA on this measure of impulsivity.

5CSRTT testing: Distraction paradigm

The Distraction paradigm altered the onset of a distractive auditory stimulus presentation after trial initiation. Specifically, a distracting noise of 105 dB was presented either at 0.0, 0.5,

The Distractor 5CSRTT paradigm revealed a lower rate of omissions in Cntnap2-deficient rats, while once more confirming their higher perseveration rate. Furthermore, the premature responses in prenatally exposed poly I:C animals seemed to be enhanced in Cntnap2 KO animals only, but this could only be observed at one specific distraction timepoint.

Discussion

This study explored the separate and combined effects of Cntnap2 deficiency and poly I:C maternal immune activation on relevant measures of attentional processing. The research aimed to better assess the impact of gene-environment interaction on ASD-associated impairments of cognitive function, using the 5-CSRTT. The 5-CSRTT quantifies attention, impulsivity, and cognitive flexibility in preclinical models for ASD. Several previous studies have shown the construct validity of the 5-CSRTT as a model for attention [41,44]. Due to the equalized effects of genetic and environmental risk factors, it was initially hypothesized that Cntnap2 deficiency would interact with poly I:C MIA to exacerbate ASD-related attentional alterations seen in each model individually. However, the presented results indicated only one incidence of such an interaction across all relevant measures in the 5-CSRTT.
Baseline training

In the baseline training stage, Cntnap2-/- and poly I:C MIA did not interact to exacerbate ASD-associated symptoms across the measures of accuracy, omission, premature and perseverative correct responses. Cntnap2-/- rats exhibited increased perseverative responses in comparison to their wildtype counterparts; however, no other measures were influenced by either model alone. The baseline data raised several questions, including whether both risk factors influenced typical attentional processing during training, or if task acquisition posed a sufficient cognitive challenge to differentiate between the experimental groups.

Prior 5-CSRTT studies with separate ASD models provide varying results. In a 5-CSRTT study by Anshu and colleagues [38], valproic acid was administered prenatally at GD 12.5. The valproic acid model is consistently used in the field of ASD research due to its construct and face validity [45]. Male Sprague-Dawley rats prenatally exposed to valproic acid showed poorer performance at the first and last stages of baseline acquisition [38]. Contrastingly, a neonatal white-matter injury (WMI) study did not find a significant difference in accuracy between treatment and wildtype counterparts during baseline training. WMI can lead to hindered cognitive processing and an increased risk of ASD [46]. These disagreeing results suggest that baseline acquisition is dependent on the ASD risk-factor models' ability to learn a novel task, or on different baseline protocols that can be employed in the 5-CSRTT [47]. The double-hit Cntnap2 KO and prenatally exposed rats did not exhibit any deficit in learning and performing the task at baseline.
Cntnap2 KO rats showed an increase in perseverative correct responses in the baseline training paradigm. Perseverative responses are defined as continuous pokes after the correct stimulus has been selected. The continuous response is a marker of compulsiveness, a tendency to perform repetitive adverse behaviors [48]. The rat's maladaptive tendency to continually nose poke a previously rewarded aperture can also act as a marker of cognitive inflexibility [49].

Variable short and long delay

The delay and distraction paradigms highlighted key differences between treatment groups in the test stage. Firstly, decreased accuracy in poly I:C exposed rats was seen at the shortest delay value alone. There are no direct models of ASD that display variable Short-Delay outcomes; however, prenatal exposure to poly I:C can lead to cognitive inflexibility [50]. Rats prenatally exposed to poly I:C show persistence in latent inhibition (LI), a process by which introduction to a nonreinforced stimulus affects the subsequent learning of a matching reinforced stimulus [50]. LI is also a key process for assessing cognitive inflexibility. Although the 5-CSRTT does not assess LI, novel delays may challenge the rats' preconceived understanding of the task protocol, requiring adaptive mechanisms to maintain high performance. Similarly, the study investigating white matter insult discovered poorer accuracy performance with a variable intertrial interval [46].
In the short-delay paradigm, the Cntnap2 KO rats had a lower omission rate than their wildtype counterparts; however, this behavioral difference was only apparent in non-poly I:C exposed rats. Cntnap2 KO rats prenatally exposed to poly I:C did not have significantly lower omission rates in comparison to WT rats also exposed to poly I:C. At the same time, Cntnap2 KO rats showed greater perseverative correct responses in the Short-Delay paradigm [49]. Previous 5-CSRTT studies did not employ the Cntnap2 model; nonetheless, other behavioral paradigms were used to assess repetitive behavior. A study by Wang et al. [51] found that Cntnap2 KO mice exhibited increased grooming, a robust phenotype for repetitive behaviors. A recent paper by Scott et al. [32] identified that Cntnap2 KO rats showed increased full-body rotations and self-grooming compared to the wildtype counterpart. These measures of repetitive behavior are based on instinctive and uncontrolled actions. The 5-CSRTT provides insight into repetitive behaviors that are newly developed after exposure to a novel task; nonetheless, there could be a neural mechanism that links repetitive and perseverative actions across behavioral paradigms. A 5-CSRTT study that lesioned an important region in the dopaminergic system (the core subregion of the nucleus accumbens) found comparable perseverative behaviors and lack of inhibitory control [52]. Remarkably, antagonizing dopaminergic receptor D2 decreased repetitive self-grooming and perseverative behaviors in Cntnap2 KO mice [53]. Consequently, dopaminergic signalling may play a large role in modulating repetitive behaviors seen in instinctive and learned behaviors [53].
The Long-Delay paradigm elicited an interactive effect for premature responses between genotype and MIA. To elaborate, Cntnap2-/- rats exposed to MIA had a greater number of premature responses than their saline-treated counterparts. Contrarily, wildtype rats exposed to MIA did not have greater premature tendencies compared to the saline-treated wildtypes. Premature responses are a staple measure of impulsivity in preclinical rodent models [54,55]. Impulsivity arises as a phenotype for atypical inhibitory response control [56]. Additionally, the rats may have exhibited impulsive tendencies due to a phenomenon known as delay discounting [57]. With the increased stimulus delay, the value of the reward can depreciate. Therefore, the rat prematurely prompts the screen as opposed to waiting for the stimulus. Delay discounting relies on the dorsolateral prefrontal cortex, a region involved in attention and inhibitory control [58].

Distraction effect on 5-CSRTT performance

The distraction paradigm elicited three separate parameter effects. A distractor was played pseudo-randomly at four different timepoints preceding the stimulus onset. Similarly to the short-delay paradigm, Cntnap2-/- rats had fewer omissions than their wildtype counterparts. This finding is not consistent across previous ASD 5-CSRTT studies [38,55]; however, the study that looked at WMI found a decrease in omission rate with a fixed visual distractor [46]. Despite this, the present study utilized an auditory distractor. The rise in premature and perseverative responses may act as an alternative explanation in the distraction protocol. Due to increased interaction with the screen, the rats are less likely to omit a trial entirely.
Inhibitory response control and cognitive inflexibility can explain the rise in perseverative and premature responses. Like the short-delay paradigm, the rats performed repetitive behaviors as a maladaptive response to a novel stimulus. On the other hand, the premature responses present a novel outcome in the distractor task. Different from the Long-Delay paradigm, there is no interactive effect between genotype and MIA. Rather, poly I:C MIA triggered premature responses at the earliest distractor timepoint alone. Poly I:C may potentially affect an inhibitory response mechanism that Cntnap2 deficiency did not. Rodents utilize temporal strategies to 'time' when they should attend to the oncoming stimulus [55]. It is a mediating behavior developed as a result of training. The distractor can interfere with the rat's temporal strategy, triggering a premature response [55]. The dorsolateral prefrontal cortex (DLPFC) is involved in both temporal strategies and sensory hypersensitivity to increased stimuli [59,60]. Poly I:C has been found to increase dendritic branching in the pyramidal cells of the DLPFC, potentially explaining an increase in impulsivity due to an auditory distractor [61].
Limitations

Although the findings provide insight into the attentional processing of a gene-environment model, future studies must address respective changes in neurobiological function to better understand the behavioral phenotype. One potential neurophysiological confound to our findings is motor planning and execution. The prefrontal cortex is known to have an inhibiting modulatory control over the premotor cortex, the area most associated with motor planning [62]. In fact, several studies on this cortico-cortical connection state that altered prefrontal activity results in motor impulsivity [63,64]. Specifically, response cells in the premotor cortex may fire prematurely because of poor prefrontal cortex inhibition [63]. Future 5-CSRTT studies must assess if there is a significant correlation between involuntary motor impulsivity and attentional impulsivity because of premature anticipatory neuronal firing. Moreover, the 5-CSRTT task cannot act as a standalone measure of attention. For example, the continuous performance task requires the rodent to discriminate between different stimuli based on changes in color, brightness, contrast, etc. [65]. Employing additional paradigms may provide a more holistic understanding of the impact of the gene-environment risk model.

Future directions and conclusion

In conjunction with behavioral assessment, subsequent studies must look at the neurobiological explanation behind the behavioral impairments demonstrated in this study. Regarding attentional involvement of Cntnap2 and poly I:C, a more causative link between behavioral output and mechanistic explanations can be made. Utilizing deep brain stimulation to attenuate ASD phenotypes can directly look at the causative impact of risk factors. A study by Bekovsky et al.
[66] found that the use of deep brain stimulation attenuated ASD-like sensorimotor and latent inhibition dysfunction in a poly I:C rat model. Potentially using targets such as the dorsolateral prefrontal cortex and the ventral striatum, we can better appreciate the causal link behind atypical brain development and atypical attentional processing.

The present investigation showed that both MIA and genetic knockout can affect the cognitive process of attention separately; the Cntnap2 KO model showed conserved changes in repetitive behavior, while the MIA model exacerbated impulsivity and lack of inhibitory control. Although neural alterations may converge across both models, a double-hit model does not necessarily result in additive behavioral effects. However, attention deficits can be exacerbated in the double-hit model when presented with specific external stimuli, e.g., an auditory distractor. The parameters of this touchscreen task may also negate certain environmental influences that are generally present in a more natural environment. Due to the heterogeneity of ASD phenotypes, continually characterizing double-hit models will facilitate our understanding of the etiological basis of the disorder.
Results are shown as mean ± SEM. N WT/SALINE = 10; N WT/POLY I:C = 9; N KO/SALINE = 12; N KO/POLY I:C = 10. For detailed data on different short delay intervals, please see S2 Fig in S1 File. https://doi.org/10.1371/journal.pone.0299380.g002 (Fig 2D; for detailed data per delay, please see S2 Fig in S1 File.)

Fig 3. Long-delay 5CSRTT. A) When tested under the long-delay paradigm, the rats did not exhibit model-dependent accuracy changes; B) nor changes in omission. C) There was no significant risk factor effect on the perseverative response measure in this testing paradigm. D) WT rats prenatally exposed to poly I:C did not show a significant increase in premature responses compared to their saline counterpart. E) However, Cntnap2 KO rats prenatally exposed to poly I:C showed an increase in premature responses compared to their saline counterpart. F) Poly I:C-treated Cntnap2 KO rats exhibited higher premature responses than poly I:C-treated WT rats. *p < .05. Results are shown as mean ± SEM. N WT/SALINE = 10; N WT/POLY I:C = 9; N KO/SALINE = 12; N KO/POLY I:C = 10. For detailed data on different long delay intervals, please see S3 Fig in S1 File. https://doi.org/10.1371/journal.pone.0299380.g003

Fig 4. Distractor 5CSRTT. A) When tested under the distractor paradigm, the rats did not exhibit model-dependent accuracy changes. B) Rats with Cntnap2 deficiency exhibited lower omission rates, regardless of prenatal exposure. C) Rats with Cntnap2 deficiency exhibited increased perseverative correct responses regardless of injection type. D) Rats prenatally exposed to poly I:C exhibited increased premature responses at a distractor timepoint of 0.5 seconds. *p < .05. Results are shown as mean ± SEM. N WT/SALINE = 10; N WT/POLY I:C = 9; N KO/SALINE = 12; N KO/POLY I:C = 10. For detailed data on different distractor timepoints, please see S4 Fig in S1 File. https://doi.org/10.1371/journal.pone.0299380.g004 Table in S1 File.
Asymptotic theory of sequential detection and identification in the hidden Markov models We consider a unified framework of sequential change-point detection and hypothesis testing modeled by means of hidden Markov chains. One observes a sequence of random variables whose distributions are functionals of a hidden Markov chain. The objective is to detect quickly the event that the hidden Markov chain leaves a certain set of states, and to identify accurately the class of states into which it is absorbed. We propose computationally tractable sequential detection and identification strategies and obtain sufficient conditions for the asymptotic optimality in two Bayesian formulations. Numerical examples are provided to confirm the asymptotic optimality and to examine the rate of convergence. INTRODUCTION The joint problem of sequential change-point detection and hypothesis testing is generalized in terms of hidden Markov chains. One observes a sequence of random variables whose distributions are functionals of a hidden Markov chain. The objective is to detect as quickly as possible the disorder, described by the event that the hidden Markov chain leaves a certain set of states, and to identify accurately its cause, represented by the class of states into which the Markov chain is absorbed. The problem reduces to solving the trade-off between the expected detection delay and the false alarm and misdiagnosis probabilities. A Bayesian formulation of this hidden Markov model has been proposed by Dayanik and Goulding [2009]. It greatly generalizes the classical models, encompassing change-point detection, sequential hypothesis testing as well as their joint problem as in Dayanik et al. [2008]. There are mainly two directions of research in the Bayesian formulation. One direction is to find the means to calculate an optimal solution, while the other direction is to design asymptotically optimal solutions that are easy to calculate and implement. 
In the first direction, the problem can typically be expressed in terms of optimal stopping of the posterior probability process of each alternative hypothesis. However, there are only a few examples that admit analytical solutions, and in practice one needs to rely on numerical approximations, for example, via value iteration in combination with discretization of the space of the posterior probability process. The computational burden and nontrivial computer representation of the optimal solution hinder the application of the findings of this first direction in practice. The second direction pursues a strategy that provides simple and scalable implementation, but gives only near-optimal solutions. The asymptotic optimality as a certain parameter of the problem approaches an ideal value is commonly used as a proxy for near-optimality. Asymptotically optimal strategies are in most cases derived via renewal theory. In sequential (multiple) hypothesis testing with i.i.d. observations, the log-likelihood ratio (LLR) processes become conditionally random walks.

† Bilkent University, Departments of Industrial Engineering and Mathematics, Bilkent 06800, Ankara, TURKEY. Email: sdayanik@bilkent.edu.tr.
‡ Department of Mathematics, Faculty of Engineering Science, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka 564-8680, Japan. Email: kyamazak@kansai-u.ac.jp.

By utilizing the ordinary renewal theory, one can approximate the asymptotic behaviors of the expected sample size and the misidentification costs; see, for example, Baum and Veeravalli [1994]. On the other hand, when the observed random variables are not i.i.d. or when the change-point is not geometrically distributed, asymptotic optimality is in general not guaranteed; instead, the existing literature typically shows that the r-quick convergence of Lai [1977] of a certain LLR process is a sufficient condition for asymptotic optimality. Dragalin et al. [1999] show, under the assumption of r-quick convergence, the asymptotic optimality of the multihypothesis sequential probability ratio test (MSPRT) in the non-i.i.d. case of sequential multiple hypothesis testing. Dragalin et al. [2000] further obtain higher-order approximations by taking into account the overshoots at up-crossing times of LLR processes. As for change-point detection, Tartakovsky and Veeravalli [2004a] consider the non-i.i.d. case and show the asymptotic optimality of the Shiryaev procedure under r-quick convergence. Its continuous-time version is studied by Baron and Tartakovsky [2006]. Dayanik et al. [2013] obtained asymptotically optimal strategies for the joint problem of change-point detection and sequential hypothesis testing, showing that r-quick convergence is again a sufficient condition for asymptotic optimality. The hidden Markov model is its generalization, and to the best of our knowledge, its asymptotic analysis has not been conducted elsewhere. For a comprehensive account of both analytical and asymptotic optimality in change-point detection and sequential hypothesis testing, we refer the reader to Polunchenko and Tartakovsky [2012]. This paper gives an asymptotic analysis of the hidden Markov model and derives asymptotically optimal strategies, focusing on the following two Bayesian formulations: (1) In the minimum Bayes risk formulation, one minimizes a Bayes risk which is the sum of the expected detection delay time and the false alarm and misdiagnosis probabilities. (2) In the Bayesian fixed-error-probability formulation, one minimizes the expected detection delay time subject to some small upper bounds on the false alarm and misdiagnosis probabilities. The optimal strategy of the former has been derived by Dayanik and Goulding [2009].
The latter is usually solved by means of its Lagrange relaxation, which turns out to be a minimum Bayes risk problem where the costs are the Lagrange multipliers (or shadow prices) of the constraints on the false alarm and misdiagnosis probabilities. In theory, by employing a hidden Markov chain with an arbitrary number of states, one can achieve a wide range of realistic models. Unfortunately, however, the implementation is computationally feasible only for simple cases. The problem dimension is proportional to the number of states of the Markov chain, and the computational complexity increases exponentially fast. This hinders the applications of the hidden Markov model; in practice, obtaining exact optimal strategies is still limited to simple and classical examples. We propose simple and asymptotically optimal strategies for both the minimum Bayes risk formulation and the Bayesian fixed-error-probability formulation. The asymptotic analysis is similar for both formulations and can be conducted almost simultaneously. Similarly to Dayanik et al. [2013] and to the non-i.i.d. cases of change-point detection and sequential hypothesis testing as reviewed above, we show that the r-quick convergence for an appropriate choice of the LLR processes is a sufficient condition for asymptotic optimality. We also show in certain cases that the limit can be analytically derived in terms of the Kullback-Leibler divergence, and that under some conditions higher-order convergence can be attained using nonlinear renewal theory, which was pioneered by Woodroofe [1982] and Siegmund [1985].

(Figure: The partition of the state space of the hidden Markov model. The problem is to detect the exit time θ of the unobserved Y from Y_0 and identify the index µ of the class Y_µ into which Y is eventually absorbed, based only on the observations X modulated by Y.)
Through a sequence of numerical experiments, we further confirm the convergence results for the LLR processes and the asymptotic optimality of the proposed strategies. The remainder of the paper is organized as follows. In Section 2, we define the two Bayesian formulations and review Dayanik and Goulding [2009]. In Section 3, we propose our strategies and derive sufficient conditions for asymptotic optimality in terms of the r-quick convergence of the LLR processes. In Section 4, we present examples where the limits of the LLR processes can be analytically obtained via the Kullback-Leibler divergence. Section 5 concludes the paper with numerical results.

PROBLEM FORMULATIONS

Consider a probability space (Ω, F, P) hosting a time-homogeneous Markov chain Y = (Y_n)_{n≥0} with some finite state space Y, initial state distribution η = {η(y) ∈ [0, 1], y ∈ Y}, and one-step transition matrix P = {P(y, y′) ∈ [0, 1], y, y′ ∈ Y}. Suppose that Y_1, . . . , Y_M are M closed (but not necessarily irreducible) mutually disjoint subsets of the state space Y, and let Y_0 := Y \ ⋃_{k=1}^{M} Y_k. In other words, Y_0 is transient and the Markov chain Y eventually gets absorbed into one of the M closed sets. Let us define θ := min{n ≥ 0 : Y_n ∉ Y_0} and µ := k on the event {Y_θ ∈ Y_k}, as the absorption time and the index of the closed set that absorbs Y, respectively. Here, because Y_0 is transient (i.e., θ < ∞ a.s.), µ is well defined. We also define M := {1, . . . , M} and M_0 := M ∪ {0}. The Markov chain Y can be indirectly observed through another stochastic process X = (X_n)_{n≥1} defined on the same probability space (Ω, F, P). We assume there exists a set of probability measures {P(y, dx); y ∈ Y} defined on some common measurable space (E, E) such that P{Y_n = y_n, X_n ∈ E_n, 1 ≤ n ≤ t | Y_0 = y_0} = ∏_{n=1}^{t} P(y_{n−1}, y_n) P(y_n, E_n) for every (y_n)_{0≤n≤t} ∈ Y^{t+1}, (E_n)_{1≤n≤t} ∈ E^t, t ≥ 1. For every y ∈ Y, we assume that P(y, dx) admits a density function f(y, x) with respect to some σ-finite measure m on (E, E); namely, f(y, x) m(dx) = P(y, dx).
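The generative model just described is easy to simulate. The sketch below uses a hypothetical three-state chain, one transient state Y_0 = {0} and two singleton absorbing classes Y_1 = {1}, Y_2 = {2}, with discrete observation densities f(y, ·) on a four-letter alphabet; all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: state 0 is the transient set Y_0; states 1 and 2
# are the absorbing classes Y_1 and Y_2.
P = np.array([[0.90, 0.06, 0.04],   # one-step transition matrix P(y, y')
              [0.00, 1.00, 0.00],
              [0.00, 0.00, 1.00]])
eta = np.array([1.0, 0.0, 0.0])     # initial distribution: start in Y_0

# Observation densities f(y, x) on the finite alphabet E = {0, 1, 2, 3}
f = np.array([[0.25, 0.25, 0.25, 0.25],   # while still in Y_0
              [0.40, 0.30, 0.20, 0.10],   # after absorption into Y_1
              [0.10, 0.20, 0.30, 0.40]])  # after absorption into Y_2

def simulate(T):
    """Sample (Y_n)_{0<=n<=T} and (X_n)_{1<=n<=T}; return them together
    with the absorption time theta and the absorbing index mu."""
    y = rng.choice(3, p=eta)
    ys, xs = [y], []
    for _ in range(T):
        y = rng.choice(3, p=P[y])
        ys.append(y)
        xs.append(rng.choice(4, p=f[y]))          # X_n ~ f(Y_n, .)
    theta = next((n for n, s in enumerate(ys) if s != 0), None)
    mu = ys[-1] if ys[-1] != 0 else None          # None: not yet absorbed
    return np.array(ys), np.array(xs), theta, mu
```

With P(0, 0) = 0.9, the absorption time θ here is geometric with parameter 0.1, matching the geometric special case discussed later in Section 4.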
A (sequential decision) strategy (τ, d) is a pair of an F-stopping time τ (in short, τ ∈ F) and a random variable d : Ω → M that is measurable with respect to the observation history F_τ up to the stopping time τ (namely, {d = i} ∈ F_τ for every i ∈ M). Let ∆ be the set of strategies. The objective is to obtain a strategy (τ, d) so as to minimize the expected detection delay (EDD) for some m ≥ 1 and a deterministic, nonnegative, and bounded function c : Y → [0, ∞), as well as the terminal decision losses (TDL's). The Bayes risk is a linear combination of all of these losses, for some m ≥ 1, c, and a set of strictly positive constants a = (a_yi)_{i∈M, y∈Y\Y_i}. In (2.1), while it is natural to assume c(y) = 0 for y ∈ Y_0, we allow c(y) to take any nonnegative value for y ∈ Y_0. On the other hand, in (2.2) and (2.3), we assume that any correct terminal decision (i.e., {d = i, Y_τ ∈ Y_i, τ < ∞}) is not penalized, because otherwise the terminal decision loss (2.2) cannot be bounded by small numbers and Problem 2.2 below does not make sense.

Problem 2.1 (Minimum Bayes risk formulation). Fix m ≥ 1, c, and a set of strictly positive constants a = (a_yi)_{i∈M, y∈Y\Y_i}; we want to calculate the minimum Bayes risk and find a strategy (τ*, d*) that attains it, if such a strategy exists.

Problem 2.2 (Bayesian fixed-error-probability formulation). Fix m ≥ 1, c, and a set of strictly positive constants R = (R_yi)_{i∈M, y∈Y\Y_i}; we want to calculate the minimum EDD and find a strategy (τ*, d*) ∈ ∆(R) that attains it, if such a strategy exists.

Remark 2.1. Fix a set of positive constants R. In our analysis, we will need to reformulate the problem in terms of the conditional probabilities P_i := P{ · | µ = i} and P_i^(t) := P{ · | µ = i, θ = t}; let E_i and E_i^(t) be the expectations with respect to P_i and P_i^(t), respectively. We also let ν_i := P{µ = i} be the unconditional probability that Y is absorbed by Y_i. Because Y_0 is transient, we must have ∑_{i∈M} ν_i = 1.
Without loss of generality, we can assume ν_i > 0 for any i ∈ M because otherwise we can disregard Y_i and consider the Markov chain on Y \ Y_i. In terms of those conditional probabilities, we have D^(c,m). We decompose the Bayes risk such that for every (τ, d) ∈ ∆. In particular, with a_yi = 1 for all y ∈ Y\Y_i,

ASYMPTOTICALLY OPTIMAL STRATEGIES

We now introduce two strategies. The first strategy triggers an alarm when the posterior probability of the event that Y has been absorbed by a certain closed set exceeds some threshold for the first time, and will be later proposed as an asymptotically optimal solution for Problem 2.1. The second strategy is its variant expressed in terms of the log-likelihood ratio (LLR) processes and will be proposed as an asymptotically optimal solution for Problem 2.2. Let Λ(i, j) = (Λ_n(i, j))_{n≥1} be the LLR processes. Definition 3.1 ((τ_A, d_A)-strategy for the minimum Bayes risk formulation). Fix a set of strictly positive constants A = (A_i)_{i∈M}, and define the strategy (τ_A, d_A) by Define the logarithm of the odds-ratio process Then, (3.5) can be rewritten as Then we have Notice by (3.6) that Φ^(i)_n ≤ Λ_n(i, j) for every n ≥ 1 and j ∈ M_0 \ {i}, and hence We will show that, by adjusting the values of A and B, the strategy (τ_A, d_A) is asymptotically optimal in Problem 2.1 as c ↓ 0 for fixed a, and the strategy (υ_B, d_B) is asymptotically optimal in Problem 2.2 as R ↓ 0 for fixed c. For the latter, we assume that, in taking limits, R_i := (R_yi)_{y∈Y\Y_i} satisfy (3.9) for some strictly positive constants (β_i)_{i∈M}. This limit mode will still be denoted by "R ↓ 0" for brevity. We will find functions A(c) and B(R) so that In fact, we will obtain results stronger than (3.10) and (3.11); we will show

3.1. Convergence of terminal decision losses and detection delay. As c and R decrease in Problems 2.1 and 2.2, respectively, the optimal stopping regions shrink and one should expect to wait longer.
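The posterior-threshold idea behind (τ_A, d_A) can be sketched with the standard Bayes filter for a finite hidden Markov chain. In the sketch below the alarm is raised when the posterior mass of some absorbing class Y_i first exceeds 1/(1 + A_i); this threshold parameterization is a schematic reading of Definition 3.1, not necessarily the paper's exact form, and the model inputs (P, f, eta) are assumed to be supplied by the user.

```python
import numpy as np

def posterior_step(pi, x, P, f):
    """One Bayesian filtering step for a finite hidden Markov chain:
    pi_n(y) is proportional to f(y, x_n) * sum_{y'} pi_{n-1}(y') P(y', y)."""
    post = (pi @ P) * f[:, x]
    return post / post.sum()

def tau_d(xs, P, f, eta, classes, A):
    """Posterior-threshold strategy: stop the first time the posterior mass
    of some absorbing class Y_i exceeds 1/(1 + A_i) and declare that class.
    Returns (tau, d); (len(xs), None) if no threshold is ever crossed."""
    pi = np.asarray(eta, dtype=float)
    for n, x in enumerate(xs, start=1):
        pi = posterior_step(pi, x, P, f)
        for i, Yi in classes.items():
            if pi[Yi].sum() >= 1.0 / (1.0 + A[i]):
                return n, i
    return len(xs), None
```

Shrinking A_i pushes the threshold toward 1, trading a longer delay for smaller terminal decision losses, which is exactly the trade-off analyzed below.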
In Problem 2.1, when the unit sampling cost is small, one should take advantage of it and sample more. In Problem 2.2, when the upper bounds on the TDL's are small, one expects to wait longer to collect more information in order to satisfy the constraints. Moreover, the size of the stopping regions for (τ A , d A ) and (υ B , d B ) decrease monotonically as A and B decrease. Therefore, functions A(c) and B(R) should be monotonically decreasing as c and R decrease, respectively. We explore the asymptotic behaviors of the EDD and the TDL as A ↓ 0 and B ↓ 0. Moreover, assume, while taking limits B ↓ 0, that the ratio B i /B i for every i ∈ M is bounded from below by some strictly positive number so that it is consistent with how R decreases to 0 as we assumed in (3.9). We first obtain bounds on the TDL's that are shown to converge to zero in the limit. The LLR processes can be used as Radon-Nikodym derivatives to change measures as the following lemma shows. The proof is the same as Lemma 2.3 of Dayanik et al. [2013], and hence we omit it. Lemma 3.1 (Changing Measures). Fix i ∈ M, an F-stopping time τ , and an F τ -measurable event F . We have The next proposition can be obtained by setting F := {d = i} ∈ F τ in Lemma 3.1. Proposition 3.2 (Bounds on the TDL). We can obtain the following bounds on the TDL's. Using the bounds in Proposition 3.2 and Remark 2.1, we can obtain feasible strategies by choosing the values of A and B accordingly. Proposition 3.3 (Feasible Strategies). Fix a set of strictly positive constants We now analyze the asymptotic behavior of the detection delay. Proposition 3.4 below allows us to use τ Its proof is the same as that of Proposition 3.6 of Dayanik et al. [2013]. The posterior probability process ( Π (i) n ) i∈M 0 converges a.s. by Dayanik and Goulding [2009]. 
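Lemma 3.1 is also useful computationally: writing an error probability under P_j as a P_i-expectation weighted by e^{−Λ_τ(i,j)} gives an importance-sampling estimator, so rare misdiagnosis events need not be simulated directly. A minimal i.i.d. sketch with two hypothetical discrete densities and a one-sided LLR boundary b (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
f1 = np.array([0.40, 0.30, 0.20, 0.10])   # density under hypothesis 1
f2 = np.array([0.10, 0.20, 0.30, 0.40])   # density under hypothesis 2
h = np.log(f1 / f2)                        # LLR increments h_{12}(x)
b = 5.0                                    # decision boundary for the LLR walk

def misdiagnosis_estimate(reps=4000, max_n=100000):
    """Estimate P_2{decide 1} = E_1[ 1{decide 1} * exp(-Lambda_tau) ]
    (likelihood-ratio identity), simulating only under hypothesis 1."""
    total = 0.0
    for _ in range(reps):
        lam = 0.0
        for _ in range(max_n):
            lam += h[rng.choice(4, p=f1)]
            if lam >= b:                   # boundary crossed: decide 1
                total += np.exp(-lam)      # change-of-measure weight
                break
    return total / reps

est = misdiagnosis_estimate()
```

Every weight is at most e^{−b}; the gap below e^{−b} comes from the overshoot over the boundary, which the nonlinear-renewal analysis of Section 4.3 quantifies.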
Moreover, because the posterior probability of the correct hypothesis should tend to increase in the long run, on the event {µ = i}, i ∈ M, it is expected that Π (i) n converges to 1 and that Π (j) n converges to 0 for every j ∈ M 0 \ {i} with probability one. This suggests the a.s.-convergence of Λ n (i, j) to infinity given µ = i for every j ∈ M 0 \ {i}. For the rest of this section, we further assume that the average increment converges to some strictly positive value. Assumption 3.2. For every i ∈ M, we assume that This is indeed satisfied in the i.i.d. case (Dayanik et al. [2013]). In Section 4, we will show that this is also satisfied in certain more general settings and that the limit can be expressed in terms of the Kullback-Leibler divergence. Let us fix any i ∈ M. We show that, for small values of A and B, the stopping times τ n /n ≈ l(i) for sufficiently large n as the next proposition implies. Proposition 3.5. For every For the proof of Proposition 3.5 above, (ii) follows immediately by Assumption 3.2 and (i) follows from Lemma 3.2 below after replacing Y (j) n , P, and (µ j ) j∈M 0 \{i} in the lemma with Λ n (i, j)/n, P i , and (l(i, j)) j∈M 0 \{i} , respectively, for every fixed i ∈ M. Lemma 3.2 is a straightforward extension of Lemma 5.2 of Baum and Veeravalli [1994] and is omitted. The following lemma can be derived from Proposition 3.5. The proof is the same as that of Lemma 3.9 of Dayanik et al. [2013]. Lemma 3.3. For every i ∈ M and any j(i) ∈ arg min j∈M 0 \{i} l(i, j), we have P i -a.s. Remark 3.2. We shall assume that 0 < B ij < 1 or −∞ < log B ij < 0 for all i ∈ M and j ∈ M 0 \ {i} as we are interested in the limits of certain quantities as B ↓ 0. This implies where the last equality follows from the first two equalities. For every i ∈ M, conditionally on {Y 0 ∈ Y i }, the Markov chain Y always admits a stationary distribution; namely, there exists a unique nonnegative w i (y), for every y ∈ Y i , such that see, e.g., Tijms [2003]. 
Then This and the a.s. finiteness of θ together with Lemma 3.3 prove the next lemma. Lemma 3.4. For every i ∈ M and any j(i) ∈ arg min j∈M 0 \{i} l(i, j), we have P i -a.s. Because we want to minimize the m th moment of the detection delay time for any m ≥ 1, we will strengthen the convergence results of Lemma 3.3. We require Condition 3.1 below for some r ≥ m. Condition 3.1 (Uniform Integrability). For given r ≥ 1, we assume that Because c(·) is bounded, this also implies the following. Lemma 3.5. For every i ∈ M, we have the followings. Hence, Condition 3.1 for some r ≥ m is sufficient for the L m -convergences. Lemma 3.6. For every i ∈ M and m ≥ 1, we have the following. (i) If Condition 3.1 (i) holds with some r ≥ m, then we have (3.16) Alternatively to Condition 3.1, we can use the r-quick convergence. The r-quick convergence of suitable stochastic processes is known to be sufficient for the asymptotic optimalities of certain sequential rules based on non-i.i.d. observations in CPD and SMHT problems and also in the diagnosis problem of Dayanik et al. [2013]. We first derive a lower bound in Lemma 3.7 below on the expected detection delay under the optimal strategy. The lower bound on the expected detection delay under the optimal strategy can be obtained similarly to CPD and SMHT; see Baum and Veeravalli [1994], Dragalin et al. [1999], Dragalin et al. [2000], Lai [2000], Tartakovsky and Veeravalli [2004b] and Baron and Tartakovsky [2006]. This lower bound and Lemma 3.6 above can be combined to obtain asymptotic optimality for both problems. Lemma 3.7. For every i ∈ M and j(i), we have We now study how to set A in terms of c in order to achieve asymptotic optimality in Problem 2.1. We see from Proposition 3.2 and Lemma 3.6 that the TDL's decrease faster than the EDD and are negligible when A and B are small. 
Indeed, we have, in view of the definition of the Bayes risk in (2.4), by Proposition 3.2 and Lemma 3.6, for This motivates us to choose the value of A i such that it minimizes For example, A i (c i ) = c i /(σ i l(i)) when m = 1. It can be easily verified that for every m ≥ 1 we have Its proof is similar to that of Proposition 3.18 of Dayanik et al. [2013]. It should be remarked here that the asymptotic optimality results hold for any σ i > 0. However, for higher-order approximation, it is ideal to choose its value such that In Section 4.3, we achieve this value using nonlinear renewal theory. We now show that the strategy (υ B , d B ) is asymptotically optimal for Problem 2.2. It follows from Proposition This together with Lemma 3.7 shows the asymptotic optimality. CONVERGENCE RESULTS OF LLR PROCESSES In this section, we consider two particular cases where Assumption 3.2 holds with l(i, j) expressed in terms of the Kullback-Leibler divergence defined below. We assume that X θ , X θ+1 , . . . are identically distributed on {µ = i} given θ, for every i ∈ M. For the purpose of determining the limit l(i, j), because each class is closed, we can assume without loss of generality that Y i consists of a single state, say, The conditional probability of that Y is absorbed by (4.2) We assume the following throughout this section. Assumption 4.1. For every i ∈ M, we assume that exists. Here, ̺ (i) = ∞ holds for example when P i {θ < M } = 1 for some M < ∞. In a special case where the change time is geometric with parameter p > 0 as in Dayanik et al. [2013], this is satisfied with ̺ (i) = | log(1 − p)|. We denote the Kullback-Leibler divergence of f i (·) from f j (·) by which always exists and is nonnegative. We assume f i (·) and f j (·) as in (4.1) for any i = j are distinguishable; namely, we assume the following. Assumption 4.2. We assume exists for every i ∈ M and j ∈ M 0 \ {i}, we further assume the following. exists by Assumption 4.3. 
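For discrete densities, the divergence q(i, j) and the geometric correction ϱ = |log(1 − p)| are straightforward to compute. The sketch below uses illustrative densities; the composition l(1, 0) = q(1, 0) + ϱ shown here follows the geometric special case mentioned above and is only a schematic of the general limit.

```python
import numpy as np

def kl(fi, fj):
    """Kullback-Leibler divergence q(i, j) = sum_x f_i(x) log(f_i(x)/f_j(x))
    for densities on a common finite alphabet."""
    fi, fj = np.asarray(fi, dtype=float), np.asarray(fj, dtype=float)
    mask = fi > 0
    return float(np.sum(fi[mask] * np.log(fi[mask] / fj[mask])))

# Illustrative densities on a 4-letter alphabet
f0 = [0.25, 0.25, 0.25, 0.25]
f1 = [0.40, 0.30, 0.20, 0.10]
f2 = [0.10, 0.20, 0.30, 0.40]

p = 0.05                         # geometric absorption rate
rho = abs(np.log(1.0 - p))       # the term |log(1 - p)| above
l_10 = kl(f1, f0) + rho          # schematic limit l(1, 0), geometric case
l_12 = kl(f1, f2)                # schematic limit l(1, 2)
```
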
Finally, we assume the following. We shall prove the following under Assumptions 4.1-4.4. (4) We assume in Section 4.3 for higher-order approximations that, for every i ∈ M, there is a unique j(i) ∈ M 0 \ {i} such that l(i) = l(i, j(i)) = min j∈M 0 \{i} l(i, j). Contrary to the case θ is geometric as in Dayanik et al. [2013], the uniqueness of j(i) does not exclude the case j(i) = 0. In particular, for the case j(i) = 0, the uniqueness implies that ̺ (k) is uniquely minimized by k = i. On the other hand, if j(i) ∈ Γ i , then l(i) < l(i, 0), q(i, j(i)) < min j∈M (q(i, 0) + ̺ (j) ), and Γ i = ∅. In order to show Proposition 4.1, we first simplify the LLR process as in (3.3). Define, for each j ∈ M, (4.8) Lemma 4.1. Fix i ∈ M. For any n ≥ 1, By this lemma, each LLR process admits a decomposition (4.9) Here notice that ̺ (j) < ∞ for j ∈ M \ (Γ i ∪ {i}) by Assumption 4.4. We explore the convergence for ( n l=1 h ij (X l ))/n and ǫ n (i, j)/n separately. For i ∈ M and j ∈ M 0 \ {i}, because θ is an a.s. finite random variable, a direct application of the strong law of large number (SLLN) leads to We now show that ǫ n (i, j) in (4.9) converges almost surely to zero. n converges a.s. as n ↑ ∞ to an a.s. finite random variable L By the characterization of ǫ n (i, j) in (4.9) and Lemma 4.2 (i)-(iii), This also holds when j = 0 because Indeed, the left-hand side of (4.13) equals . Because A j (n) → ̺ (j) by Assumption 4.1 and by Lemma 3.2, we have (4.13). This together with (4.10) shows Proposition 4.1. The a.s. convergence can be extended to the L r (P i )-convergence for r ≥ 1 as well, under additional integrability conditions. Firstly, as in Lemma 4.3 of Dayanik et al. [2013], for every i ∈ M, j ∈ M 0 \ {i} and r ≥ 1, we have Here, (4.14) holds if the following condition holds. On the other hand, by Lemma 4.2, ǫ n (i, j)/n → 0 as n ↑ ∞ in L r (P i ) under Condition 4.2 below. 
Notice in Lemma 4.2 (vi) that in order for L (i) n to converge in L r (P i ) to zero, it is sufficient to have Condition 4.2. Given i ∈ M, j ∈ M \ {i} and r ≥ 1, we suppose that (4.11) and (4.15) hold, and, if j ∈ M \ Γ i , (4.12) holds for the given r. In summary, we have the following L r -convergence results. Proposition 4.2. For every i ∈ M and j ∈ M 0 \ {i}, we have Λ n (i, j)/n → l(i, j) as n ↑ ∞ in L r (P i ) for some r ≥ 1 if Conditions 4.1 and 4.2 hold for the given r. Example 2. As a variant of Example 1, we consider the case X is not necessarily identically distributed in and Y (i) 0 is absorbed with probability one by Y i = {i} for each i ∈ M. This implies that The conditional probability of θ = t given {µ = i} as in (4.2) can be written Assumption 4.5. For every This ensures that q(i, j) > 0 and q (0) (i, j) > 0 where we use (4.4) and define We assume the following to ensure that E log Assumption 4.6. For every i, j ∈ M, we assume that q (0) (i, j) < ∞. We shall show the following under Assumptions 4.1, 4.4, 4.5, and 4.6. Similarly to Example 1 of Section 4.1, we simplify the LLR process as follows. Define we later show that Λ n (i, 0)/n ∼ min j∈M Λ (0) n (i, j)/n as n → ∞ under P i (see (4.20) below). Lemma 4.3. For i, j ∈ M, we have and for i ∈ M and j ∈ M \ {i} As in Example 1, we decompose each LLR process for every i ∈ M such that Λ n (i, j) = n l=1 h ij (X l ) + ǫ n (i, j), j ∈ M \ {i}, By the SLLN and Assumption 4.1, for every i ∈ M, we have P i -a.s. as n ↑ ∞ (4.17) We now show that ǫ n (i, j) converges almost surely to zero as n → ∞. Similarly to Lemma 4.2, the following holds. n converges a.s. as n ↑ ∞ to an a.s. finite random variable L n converges a.s. as n ↑ ∞ to an a.s. finite random variable L (j) if (4.18) holds and (4.19) By this lemma, for every i ∈ M, we have ǫ n (i, j)/n → 0 for j ∈ M \ {i}, and ǫ s. Hence by Lemma 3.2, (4.20) holds. We now pursue the convergence in the L r -sense. 
In view of (4.21), we have Λ n (i, 0)/n ≤ Λ (0) n (i, j)/n for any j ∈ M and Therefore, for the proof of the uniform integrability of Λ n (i, 0)/n, it is sufficient to show that of Λ (0) n (i, j)/n for every j ∈ M. As in Example 1, for every i ∈ M and r ≥ 1, we have (1/n) n l=1 h ij (X l ) which are satisfied under Condition 4.3 below. Condition 4.3. For given which is satisfied if ̺ (j) < ∞ and the following holds. On the other hand, by Lemma 4.2, ǫ n (i, j)/n → 0 as n ↑ ∞ in L r (P i ) under Condition 4.5 below for j ∈ M \ {i}, and, for j = 0, ǫ n (i, j)/n → 0 as n ↑ ∞ in L r (P i ) under Condition 4.6 below for j ∈ M. Notice as in Lemma 4.2 (vi) that in order for L (i) n to converge in L r under P i to zero, it is sufficient to have Condition 4.5. Given i ∈ M, j ∈ M \ {i} and r ≥ 1, we suppose that (4.23) holds, (4.18) holds, and (2) if j ∈ M \ Γ i , (4.19) holds for the given r. In summary, we have the following L r -convergence results. Proposition 4.4. (1) For every i ∈ M and j ∈ M \ {i}, we have Λ n (i, j)/n → l(i, j) as n ↑ ∞ in L r (P i ) for some r ≥ 1 if Conditions 4.3 and 4.5 hold for the given r, i (·, ·)) i∈M , and here we investigate if there exists some σ such that (3.19) holds. This can be obtained by a direct application of the theorems in Dayanik et al. [2013]. where it can be shown that H Fix i ∈ M. By Lemma 3.1 and because τ A = τ (i) (4.24) converges in distribution under P i to some random variable, say, W i . Then, as in Lemma 5.1 of Dayanik et al. [2013], H i (A i ) converge in distribution as A i ↓ 0 under P i to W i −log a j(i)i (note a j(i)i = a j(i)i = a j(i)i by the assumption that j(i) ∈ Γ i ). Now because x → e −x is continuous and bounded on Suppose the overshoot Lemma 4.5. Fix i ∈ M. If j(i) ∈ Γ i is unique and the overshoot W i (A i ) in (4.24) converges in distribution as Now we obtain the limiting distribution of (4.24). Similarly to Dayanik et al. 
[2013], we have a decomposition Φ By Lemmas 4.2 and 4.4 and because the last term of the right-hand side converges to zero P i − a.s., the remaining term ξ n (i, j(i)) converges to a finite random variable, and hence is slowly changing (cf. Definitions 5.2 and 5.3 of Dayanik et al. [2013]). This allows us to apply nonlinear renewal theory. Define a stopping time, T i := inf n ≥ 1 : n l=1 h ij(i) (X l ) > 0 , and random variable W i whose distribution is given by Therefore, a higher-order approximation for Problem 2.1 can be achieved by setting in (3.18), σ i := a j(i)i E i e −W i . 4.3.2. For the case j(i) = 0. Now suppose j(i) = 0 and is unique. As in Remarks 4.1 (4) and 4.2 (4), ̺ (j) and q (0) (i, j) + ̺ (j) in Examples 1 and 2, respectively, are minimized when j = i and is unique. Here we assume that a yi = a zi =: a (4.26) Similarly to the above, we have a decomposition: for every n ≥ 1, in Example 1, n (i, j)) , in Example 2. It remains to show that ǫ n (i, 0) (resp. ǫ (0) n (i, i) ) is slowly-changing for Example 1 (resp. Example 2). In view of Assumption 4.1 and Lemmas 4.2 and 4.4, it holds on condition that the following holds. Notice in Example 1 where the last term converges to zero by Assumption 4.1 and is hence slowly-changing. Assumption 4.8. For both Examples 1 and 2, we assume ζ (i) continuous in probability, i.e., for any ε > 0, there exists δ > 0 such that ii (X l ) > 0 ) for Example 1 (resp. Example 2), and the distribution of random variable W i is given by Following the same arguments as in the case j(i) ∈ Γ i , we have the following. Proposition 4.6. Fix i ∈ M and suppose j(i) = 0 is unique. Moreover, suppose (4.26) and Assumption 4.8 Therefore, a higher-order approximation for Problem 2.1 can be achieved by setting in (3.18), NUMERICAL EXAMPLES In this section, we verify the effectiveness of the asymptotically optimal strategies through a series of numerical experiments. 
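The correction factor σ_i = a_{j(i)i} E_i[e^{−W_i}] just described can be estimated by simulating first-passage overshoots of the driving random walk ∑_l h_{ij(i)}(X_l). A rough Monte Carlo sketch with hypothetical discrete densities (densities, boundary, and sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = np.array([0.40, 0.30, 0.20, 0.10])
f2 = np.array([0.10, 0.20, 0.30, 0.40])
h = np.log(f1 / f2)              # random-walk increments h_{12}(x)

def overshoot(b, max_n=100000):
    """Overshoot S_T - b of the walk S_n = sum_{l<=n} h(X_l), X_l ~ f1,
    at its first passage over a large boundary b."""
    s = 0.0
    for _ in range(max_n):
        s += h[rng.choice(4, p=f1)]
        if s > b:
            return s - b
    return None                  # should not happen: the drift is positive

samples = np.array([overshoot(25.0) for _ in range(2000)], dtype=float)
corr = float(np.mean(np.exp(-samples)))   # Monte Carlo estimate of E[exp(-W)]
```

Since the drift E_1 h(X) ≈ 0.456 is positive here, every path crosses the boundary quickly; b only needs to be large enough for the overshoot distribution to have approximately reached its renewal-theoretic limit.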
Because the optimality results are fundamentally relying on the existence of the limits l(i, j) as in Assumption 3.2, we first verify their existence numerically and show that they can be obtained efficiently via simulation. We then evaluate the performance of the asymptotically optimal strategies and also the rate of convergence. Verification of Assumption 3.2. We consider both the case X is i.i.d. in each of the closed sets as studied in Section 4 and also the non-i.i.d. case where each closed set may contain multiple states. In order to verify the convergence results in Section 4, we consider Example 2 of Subsection 4. In Figure 2, we plot sample paths of Λ n (1, ·)/n under P 1 and Λ n (2, ·)/n under P 2 along with the theoretical limit l(i, j). In order to verify their almost sure convergence, we show in Table 1 the statistics on the position at time n = 500, 1000, 1500 based on 1000 samples for each. We indeed see that the mean value approaches the theoretical limit and the standard deviation diminishes as n increases, verifying the almost sure limit of the LLR processes. under P 1 and (b) Λ n (2, 0)/n (solid) and Λ n (2, 1)/n (dotted) under P 2 . The theoretical limit values l(·, ·) are also given. We plot in Figure 3 sample paths of the LLR processes Λ n (1, ·)/n under P 1 and Λ n (2, ·)/n under P 2 and also show in Table 2 the statistics on their positions at n = 500, 1000, 1500 based on 1000 sample paths. We observe that these processes indeed converge to deterministic limits almost surely. In fact due to the simple structure of the transient set Y 0 , the convergence seems to be faster than what are observed in Figure 2 and Table 1. It is also noted that the convergence holds regardless of the cyclic/acyclic structure of the closed sets. under P 1 and (b) Λ n (2, 0)/n (solid) and Λ n (2, 1)/n (dotted) under P 2 . 5.2. Numerical results on asymptotic optimality. 
We now evaluate the asymptotically optimal strategy against the optimal Bayes risk, focusing on Problem 2.1 with m = 1. Dayanik and Goulding [2009] showed that the problem can be reduced to an optimal stopping problem for the posterior probability process Π, and in theory the value function can be approximated via value iteration combined with discretization. In practice, however, the state space grows exponentially in the number of states |Y|, and the computation is feasible only when |Y| is small (typically at most three or four). Moreover, we need to deal with small detection delay costs c, and hence the resulting stopping regions tend to be very small in practical applications. For this reason, the approximation is also severely affected by discretization errors. Here, in order to provide a reliable approximation to the optimal Bayes risk, we consider the following simple examples. Case 1 has been considered in Dayanik and Goulding [2009], where θ is geometric with parameter .05 under P 1 and .15 under P 2 . In Case 2, it is a sum of two geometric random variables under P. For X, we assume for both cases that it takes values in E = {1, 2, 3, 4} with probabilities P{X 1 = k|Y 1 = y} = f (y, k). We set the detection delay function c = [0, 0, c̄, c̄] and the terminal decision loss function a yi = 1 for y ∉ Y i and zero otherwise. The limits l(i, j) can be computed analytically by Proposition 4.1, and the asymptotically optimal strategy can be constructed analytically. Here we set the value σ i = a j(i)i = 1 and hence A i (c) = c i /l(i) for every i ∈ M. In order to compute the optimal Bayes risk, we first discretize the state space of Π (the (|Y| − 1)-simplex) by a 70^(|Y|−1) mesh and then obtain the stopping regions by solving the optimality equation provided in Dayanik and Goulding [2009] via value iteration. The optimal Bayes risk is then approximated via simulation based on 10,000 paths.
The risk under the asymptotically optimal strategy is approximated based on 100,000 paths. Table 3 shows the results: the approximated Bayes risk (with 95% confidence interval) for both strategies and also the ratio between the two. It can be seen that the ratio indeed converges to 1. In fact, the results show that the convergence is fast and that the strategy approximates the optimal Bayes risk precisely even for a moderate value of c̄. The proposed strategy can be derived analytically, and its corresponding Bayes risk can be computed almost instantaneously via simulation.

ACKNOWLEDGMENTS

Proof. We have Moreover, we have As in the proof of Lemma A.1 of Dayanik et al. [2013], Combining the above and taking the infimum over ∆(R), Therefore the lemma holds because (τ, d) ∈ ∆(R) implies that R (1)

Lemma A.2. Fix 0 < δ < 1, i ∈ M and j(i). We have lim inf Now in Lemma A.1, set j = j(i) and This vanishes as R i ↓ 0 because 0 < 1 − k √ δ < 1 and R i ↓ 0 ⇒ L ↑ ∞ and 0 < γ < c i . Indeed, by Lemma A.2 of Dayanik et al. [2013], for any k > 1, P i ( sup n≤θ+L Λ n (i, j(i)) > kLl(i) ) → 0 as L ↑ ∞, and because Σ n−1 m=1 c(Y m )/n converges P i -a.s. to c i , P i ( min n≥L

Proof of Lemma 3.7. Fix a set of positive constants R, 0 < δ < 1 and (τ, d) ∈ ∆. We have by Markov's inequality By taking the infimum and then limits on both sides, which is greater than or equal to δ by Lemma A.2. Therefore, the claim holds because 0 < δ < 1 is arbitrary.
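Returning to the Monte Carlo evaluation of Section 5.2, a self-contained sketch under a simplified i.i.d. model (an illustrative assumption, not the paper's hidden-Markov setup): a one-sided rule stops when the LLR first exceeds |log c|, and the resulting expected delay is compared with the first-order approximation |log c|/l(i, j).

```python
import numpy as np

# Assumed i.i.d. observation pmfs (illustrative, not from the paper).
rng = np.random.default_rng(2)
f1 = np.array([0.4, 0.3, 0.2, 0.1])
f0 = np.array([0.1, 0.2, 0.3, 0.4])
g = np.log(f1 / f0)                 # per-sample LLR increments under P_1
l_limit = float(np.sum(f1 * g))     # l(1, 0): a.s. slope of the LLR

def stopping_time(c):
    """First n at which the cumulative LLR exceeds |log c| (path under P_1)."""
    thresh, s, n = -np.log(c), 0.0, 0
    while s < thresh:
        s += g[rng.choice(4, p=f1)]
        n += 1
    return n

c = 1e-3
taus = np.array([stopping_time(c) for _ in range(2000)])
mean, half = taus.mean(), 1.96 * taus.std(ddof=1) / np.sqrt(len(taus))
print(f"E[tau] ~ {mean:.1f} +/- {half:.1f}; first-order approx {-np.log(c)/l_limit:.1f}")
```

The simulated mean delay exceeds the first-order approximation only by the expected overshoot contribution, which is what the higher-order correction σ i of Section 4.3 accounts for.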
Hyperbaric Oxygen Therapy Impact for the Function of Kidney in Metabolic Syndrome (SM) Patients

Hyperbaric Oxygen Therapy (HBOT) is the therapy of inhaling 100% pure oxygen in a hyperbaric chamber of more than 1 absolute atmosphere. Currently, the use of hyperbaric oxygen therapy is increasingly widespread, not only for decompression sickness and diving problems. But it has been used for clinical therapy, cosmetics, and geriatric care. The American Food and Drug Administration (FDA) has also confirmed various clinical indications, especially those related to metabolic syndromes such as Diabetes Mellitus (DM) and Diabetic Foot Ulcers (DFU). Then how is the use of Hyperbaric Oxygen Therapy for kidney disease? Through this literature review, it is hoped that information will be obtained regarding the effect of Hyperbaric Oxygen Therapy on kidney function, both benefits and side effects, especially in kidney disease due to metabolic syndrome (MetS).
INTRODUCTION

Kidney disease is a non-communicable disease with a high incidence. Kidney disease has serious impacts, up to and including death, because the kidneys play an important role in the body's metabolism and homeostatic stability. There are many risk factors for kidney disease, one of which is metabolic syndrome. Metabolic syndrome is a collection or combination of various risk factors related to cardiovascular disease. According to data collected by the Indonesia Renal Registry (IRR) in 2014, the number of deaths of hemodialysis patients in Indonesia was 2,221 people, with cardiovascular disease as the highest cause of death (59%).

Hyperbaric Oxygen Therapy is one of the oldest treatments in the medical world. This therapy was first initiated around 1662 by Dr. Henshaw of England. However, the development of this therapy did not experience significant progress. Over hundreds of years, HBOT has had its ups and downs in terms of support and scientific evidence. Along with the development of science and research, Hyperbaric Oxygen Therapy can now be used for clinical therapy according to indications that have been approved by the FDA and UHMS.

Then, what about the effect of Hyperbaric Oxygen Therapy on kidney disease? Are the kidneys included in the indications for which Hyperbaric Oxygen Therapy may be given?

METHODOLOGY

Qualitative research is descriptive and tends to rely on analysis. Process and meaning (the subject's perspective) are emphasized in qualitative research. The theoretical basis is used as a guide so that the research focus is in accordance with the facts in the field. Apart from that, this theoretical basis is also useful for providing a general overview of the research setting and as material for discussing research results. There is a fundamental difference between the role of the theoretical basis in quantitative and qualitative research. In quantitative research, the study departs from theory to data and ends in acceptance or rejection of the theory used, whereas in qualitative research the researcher starts from the data, utilizes existing theory as explanatory material, and ends with a "theory".
Metabolic Syndrome and Kidney Disease

Before discussing Hyperbaric Oxygen Therapy, we first introduce metabolic syndrome and kidney disease. There are several definitions of metabolic syndrome. In this literature review, the authors used three definitions, namely those from the World Health Organization (WHO), NCEP ATP-III, and the International Diabetes Federation (IDF). These three definitions can be seen in Table 1 below.

[Table 1. Criteria for the Definition of Metabolic Syndrome]

Among the components of the metabolic syndrome listed in Table 1 above, insulin resistance, visceral obesity, high triglyceride levels, and hypertension influence each other in increasing Reactive Oxygen Species (ROS). Increased ROS in adipose cells and the vascular circulation causes oxidative stress. Oxidative stress is considered one of the causes of diabetic endothelial dysfunction and angiopathy. Hyperglycemia induces oxidative stress through three pathways, namely an increase in the polyol pathway, an increase in glucose auto-oxidation, and an increase in protein glycation.

Oxidative stress also affects the physiology of the vascular system, causing a decrease in the production of Nitric Oxide (NO) by endothelial cells. Decreased NO production causes endothelial dysfunction, thereby affecting the diameter of the endothelium. Narrowing of blood vessels can cause hypertension. Hypertension in turn exacerbates mechanical damage to the endothelium, so that endothelial inflammation occurs continuously (chronic inflammation). These conditions lead to the formation of atherosclerotic plaques and interfere with the work of the cardiovascular system.

Chronic hypertension results in glomerular capillary injury. A prolonged increase in glomerular capillary pressure causes glomerulosclerosis, which in turn induces hypoxia and chronic kidney damage.
The human body has two kidneys, located on the right and left. Macroscopically, the kidneys are bean-shaped, only about 7–12 cm long and 1.5–2.5 cm thick. Normal kidney weight is around 120–170 grams.

The kidneys are vital organs that play an important role in maintaining homeostasis (environmental stability in the body), filtering the blood (filtration), and removing waste products from the blood into urine (excretion). In addition to regulatory and excretory functions, the kidneys also secrete renin, which has an important role in regulating blood pressure; they also take part in the formation of vitamin D, regulate calcium, and synthesize erythropoietin to stimulate red blood cell production.

The kidneys can carry out these functions only when healthy. If there is a disturbance or disease of the kidneys, this organ declines and may even lose its capability. As a result, harmful metabolic waste products and electrolyte fluids accumulate in the body, which can have a systemic impact throughout the body.

To detect a decline in kidney function early, kidney function can be assessed with blood and urine tests, including:
1) Blood tests looking at the levels of creatinine, urea, and the glomerular filtration rate (GFR)
2) Urine examination looking at albumin or protein levels

The best kidney function test is the measurement of the glomerular filtration rate (GFR). GFR cannot be measured directly but is calculated from a formula based on the measured creatinine value and the sex and age of the patient, so it is called the estimated GFR (eGFR).
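The text notes that eGFR is calculated from serum creatinine, sex, and age but does not name a specific formula. As an illustration, the widely used CKD-EPI 2009 equation (shown here without the race coefficient) combined with the KDIGO 2012 GFR categories can be sketched as:

```python
# Illustrative sketch only: the article names no formula; CKD-EPI 2009
# (race coefficient omitted) and KDIGO 2012 G-staging are used as examples.
def egfr_ckdepi_2009(scr_mg_dl, age, female):
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    gfr = (141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
           * 0.993 ** age)
    return gfr * (1.018 if female else 1.0)

def kdigo_stage(gfr):
    """Map an eGFR value to its KDIGO 2012 GFR category."""
    for stage, lo in [("G1", 90), ("G2", 60), ("G3a", 45), ("G3b", 30), ("G4", 15)]:
        if gfr >= lo:
            return stage
    return "G5"

g = egfr_ckdepi_2009(0.8, age=60, female=True)
print(round(g, 1), kdigo_stage(g))   # ~80 mL/min/1.73 m^2, category G2
```

In practice laboratories report eGFR automatically alongside the creatinine result, using this or a related equation.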
According to Chronic Kidney Disease Improving Global Outcomes (CKD KDIGO) in 2012, the classification of kidney disease by GFR is divided into: G1 (GFR ≥ 90), G2 (60–89), G3a (45–59), G3b (30–44), G4 (15–29), and G5 (< 15 mL/min/1.73 m²).

Hyperbaric Oxygen Therapy (HBOT)

Hyperbaric Oxygen Therapy (HBOT) is a therapy that provides oxygen at a concentration of up to 100% and a pressure of more than 1 atmosphere absolute (ATA), carried out in a high-pressure chamber. This therapy can act as the main therapy or as complementary therapy.

The history of hyperbaric therapy started with Dr. Henshaw of England, who built a hyperbaric chamber in 1662 to treat several types of ailments. Then in 1921, Dr. J. Cunningham began to advance the basic theory of using hyperbaric oxygen to treat hypoxic states. However, his efforts failed because he did not have strong scientific evidence.

In the 1930s, studies on the use of hyperbaric oxygen began to be carried out in a more focused and in-depth manner. Around the 1950s, Dr. Boerema succeeded in presenting the results of his research on hyperbaric oxygen dissolved physically in blood fluids, showing that it can sustain life in a state without hemoglobin, the so-called life without blood. The results of his research on the treatment of gas gangrene with hyperbaric oxygen made him known as the father of hyperbaric therapy (RUBT). Since then, hyperbaric oxygen therapy has developed rapidly and continues to this day.

Meanwhile, in Indonesia, Hyperbaric Oxygen Therapy began to be used in 1960 by Lakesla in collaboration with the Naval Hospital Dr. Ramelan, Surabaya. At first this therapy was devoted to Navy military health and diving. Over time, however, the therapy developed, and HBOT service centers have spread to various cities in Indonesia.
The working principle of HBOT utilizes four laws of diving physics: (1) Boyle's law: the greater the pressure, the smaller and denser the air volume; (2) Dalton's law: if the total pressure increases, the partial pressures also increase; (3) Henry's law: the higher the partial pressure, the more easily the gas dissolves in a liquid; and (4) Charles's law: at constant pressure, if the volume of a gas increases, the temperature also increases.

The mechanisms of action of HBOT include: (1) reducing the volume of gas bubbles and accelerating their resolution; (2) supplying maximum O2 to ischemic and hypoxic areas (hyperoxia); (3) increasing the formation of new capillaries (angiogenesis/neovascularization); (4) suppressing the growth of germs (antimicrobial effect); (5) increasing fibroblast formation; (6) increasing leukocyte phagocytosis; and (7) supporting fitness, cosmetic, and geriatric purposes.

As a clinical therapy, HBOT also has appropriate doses adjusted to the indications, contraindications, and patient needs. The therapeutic dose of HBOT is generally 2.4 ATA with 100% pure oxygen according to dive-table guidelines; in Indonesia, the Kindwall table compiled by Prof. Guritno is used.

The Role of Hyperbaric Oxygen Therapy in Kidney Function

The benefits of Hyperbaric Oxygen Therapy for kidney function can be seen from its mechanisms of action. The diving physics laws of Boyle, Dalton, and Henry account for the increased diffusion of oxygen into plasma. Under normal air pressure, about 97% of oxygen binds to hemoglobin and about 3% dissolves in blood plasma. In hyperbaric conditions, the oxygen dissolved in blood plasma can increase many times over, depending on the hyperbaric pressure applied. Oxygen dissolved in plasma can pass through various atherosclerotic blockages, so that ischemic conditions can be resolved.
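The plasma-oxygen argument above can be made quantitative with standard physiology constants (textbook values, not figures taken from this article): Henry's law makes plasma-dissolved O2 proportional to the alveolar O2 partial pressure, which Dalton's law scales with ambient pressure.

```python
# Back-of-envelope sketch using standard physiology constants (assumed, not
# from the article): dissolved O2 = 0.003 mL O2/dL blood per mmHg of PO2.
SOLUBILITY = 0.003          # mL O2 / dL blood / mmHg (Henry's law constant)
PH2O, PACO2, RQ = 47.0, 40.0, 0.8

def dissolved_o2(fio2, ata):
    """Plasma-dissolved O2 (mL/dL) from the simplified alveolar gas equation."""
    pb = ata * 760.0                           # ambient pressure, mmHg (Dalton)
    pao2 = fio2 * (pb - PH2O) - PACO2 / RQ     # alveolar O2 partial pressure
    return SOLUBILITY * pao2

air_1ata = dissolved_o2(0.21, 1.0)   # ~0.3 mL/dL: the "3%" quoted in the text
hbot_24  = dissolved_o2(1.00, 2.4)   # ~5 mL/dL at a typical 2.4 ATA HBOT dose
print(round(air_1ata, 2), round(hbot_24, 2))
```

Since resting tissues extract roughly 5 mL O2 per dL of blood, at 2.4 ATA the dissolved fraction alone approaches resting demand, which is the basis of the "life without blood" observation cited above.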
In hypoxic conditions, cells that lack oxygen carry out anaerobic metabolism (fermentation). Anaerobic metabolism produces very little ATP: each glucose molecule yields only 2 ATP. If a tissue is hypoxic for a long time, mitochondrial damage, an energy crisis, and cell death follow.

Hyperbaric Oxygen Therapy can help supply oxygen more quickly at the cellular level (internal respiration), so that metabolism can again proceed aerobically. Each glucose molecule processed aerobically produces 38 ATP.

Oxygen plays an important role in wound healing. It takes part in the synthesis and maturation of collagen, which serves as the basic matrix of proliferation; lack of oxygen interferes with collagen synthesis. In addition, hyperbaric oxygen therapy helps stimulate angiogenesis by increasing various growth factor components, especially vascular endothelial growth factor (VEGF). That is why, under hypoxic conditions, the wound healing process takes relatively longer and may even produce imperfect remodeling products in the form of scar tissue or fibrosis.

Hyperbaric Oxygen Therapy increases Reactive Oxygen Species (ROS). ROS stimulate the formation of antioxidants, such as glutathione and melatonin, in an effort to balance oxidative stress. Controlled ROS are beneficial; uncontrolled ROS, on the other hand, act as free radicals that damage cells and tissues.

HBOT also reduces swelling or edema through hyperoxic vasoconstriction. Vasoconstriction due to HBOT does not cause hypoxia, because the increase in plasma diffusion and microvascular flow keeps oxygen distribution adequate.
However, one thing must be understood: the increase and improvement in microvascular blood flow increase capillary density, so that the ischemic area undergoes reperfusion. At the beginning of therapy it is therefore very important to emphasize the principles of safety, comfort, and adaptability.

Hyperbaric oxygen therapy is also useful as an antimicrobial. Hyperoxia effectively kills anaerobic bacteria through oxidation of membrane proteins and lipids, DNA damage, and inhibition of bacterial metabolic functions. In addition, HBOT increases the action of antibiotics such as fluoroquinolones, amphotericin B, and aminoglycosides, which use oxygen for transport across cell membranes.

In a study conducted by Rubinstein et al. in 2008 on rats, preconditioning with HBOT in renal ischemia inhibited the decrease in GFR and increased vasodilation, so treatment with HBOT may be beneficial in ischemic acute renal failure.

In a study by Martin Sedlacek et al. (2021) of thirty-two diabetic patients whose serum markers were followed over 60 days of treatment, no evidence of adverse kidney effects was found. Patients exposed to mechanisms of renal injury are more at risk if their kidney function is abnormal at baseline. In addition, a decrease in proteinuria was found. These data are consistent with animal data, as in the study by R. J. Ramalho et al. (2012), which showed decreases in BUN, creatinine, and proteinuria values in experimental rats.
In a study by Purnama and Tjahaya (2015) with a sample of 12 patients (average age 55 years) suffering from diabetic foot ulcers of Wagner grades 1–5, assessment of the BUN and serum creatinine profiles showed that hyperbaric oxygen therapy did not affect kidney function. Likewise, in the study by Kevin T. Terry (2015), statistical testing gave p = 0.097, so it can be concluded that there was no significant difference between eGFR before and after HBOT in patients with diabetic foot wounds.

In a study by Harison E. Laurent et al. (2018), 35 diabetic patients with diabetic foot ulcers received 30 HBOT sessions (2.4 ATA, 90 minutes per session, 4–5 days per week). After a month of daily HBOT treatment, significant changes occurred in type 2 diabetes mellitus patients compared with healthy non-diabetic controls, indicating that these changes were responsive to HBOT.

As of July 26, 2021, the United States Food and Drug Administration (FDA) permits the use of hyperbaric chambers, with support from the Undersea and Hyperbaric Medical Society (UHMS). In these guidelines, HBOT can be used to treat conditions related to the metabolic syndrome such as diabetes, diabetic foot wounds, risk of tissue death, severe and extensive burns, and gas gangrene. However, it is still not approved for treatment of the kidney. This does not mean it cannot be used, but further research is still required.

CONCLUSION AND RECOMMENDATION

Hyperbaric Oxygen Therapy has benefits in improving the patient's general condition and the metabolic syndrome, thereby helping to prevent kidney disease complications. However, HBOT has not produced a significant change in kidney function, as indicated by the absence of a significant change in the eGFR value. The effectiveness of HBOT for kidney disease still requires further research with a wider scope and longer follow-up.
Farnesylation of Pex19p Is Required for Its Structural Integrity and Function in Peroxisome Biogenesis*

The conserved CaaX box peroxin Pex19p is known to be modified by farnesylation. The possible involvement of this lipid modification in peroxisome biogenesis, the degree to which Pex19p is farnesylated, and its molecular function are unknown or controversial. We resolve these issues by first showing that the complete pool of Pex19p is processed by farnesyltransferase in vivo and that this modification is independent of peroxisome induction or the Pex19p membrane anchor Pex3p. Furthermore, genomic mutations of PEX19 prove that farnesylation is essential for proper matrix protein import into peroxisomes, which is supposed to be caused indirectly by a defect in peroxisomal membrane protein (PMP) targeting or stability. This assumption is corroborated by the observation that mutants defective in Pex19p farnesylation are characterized by a significantly reduced steady-state concentration of prominent PMPs (Pex11p, Ant1p) but also of essential components of the peroxisomal import machinery, especially the RING peroxins, which were almost depleted from the importomer. In vivo and in vitro, PMP recognition is only efficient when Pex19p is farnesylated with affinities differing by a factor of 10 between the non-modified and wild-type forms of Pex19p. Farnesylation is likely to induce a conformational change in Pex19p. Thus, isoprenylation of Pex19p contributes to substrate membrane protein recognition for the topogenesis of PMPs, and our results highlight the importance of lipid modifications in protein-protein interactions.
A large number of eukaryotic intracellular proteins are posttranslationally modified by the covalent attachment of either 15- or 20-carbon isoprenoids known as farnesyl or geranylgeranyl, respectively. This process, referred to as protein prenylation, affects lipases, kinases, inositol and protein-tyrosine phosphatases, lamins, and most of the small GTPases (1-3). Protein prenylation was shown to enable reversible association of modified proteins with lipid bilayers and to modulate protein-protein interactions (4-6).
The farnesyl group is attached to the cysteine of the C-terminal motif known as the CaaX box, where "a" indicates aliphatic amino acids and X is usually serine, methionine, glutamine, alanine, or threonine (3). Farnesyltransferase (FTase) consists of two subunits, the α-subunit and the β-subunit (Ram2p and Ram1p in yeast). The α-subunit is shared with geranylgeranyl transferase I (GGTase I), whereas the β-subunit is unique to FTase (7).

Different and not mutually exclusive models have been proposed for Pex19p function. First, Pex19p might be an import receptor for PMPs that recognizes its substrates in the cytosol and delivers them to the peroxisomal membrane (15,17,18). This function would be analogous to that of the peroxisomal import receptors Pex5p and Pex7p, which recognize and deliver matrix proteins with PTS1 (peroxisomal targeting signal type 1) and PTS2 to peroxisomes (19). Second, Pex19p might act as a PMP chaperone that prevents newly synthesized PMPs from aggregation and degradation in the cytosol (17,20). Third, Pex19p might act as a PMP membrane insertion factor (14,16). Fourth, Pex19p might be required as an association/dissociation factor of membrane protein complexes (21) and has been reported to be required for the targeting of Pex3p from the ER to the peroxisomal membrane (22). Finally, Pex19p function is dependent on Pex3p, which serves as a docking factor at the peroxisomal membrane (12,22-24). All models agree on the importance of PMP recognition for Pex19p function (25).

Pex19p shows only a moderate degree of sequence conservation, with less than 20% amino acid identity between yeast and human Pex19p. Its CaaX box, however, has been retained throughout evolution (see Fig. 1). Information on the status and the requirement of Pex19p farnesylation has so far been available only through often conflicting side observations.
Mammalian PEX19 was described to be partially farnesylated in CHO-K1 cells (11), but other studies with human fibroblasts challenged the relevance of Pex19p farnesylation (15,26). It was speculated that in Saccharomyces cerevisiae, farnesylation is required for an essential aspect of Pex19p function (12). This notion was recently contradicted (27). Work on other yeasts similarly suggested that farnesylation would be dispensable for Pex19p function (13,28,29).

In this study, we determined the in vivo farnesylation status of Pex19p and its dependence on peroxisome induction and on Pex3p. We discovered that Pex19p is fully modified by FTase and investigated whether Pex19p farnesylation is required for PMP recognition and stability. By peptide blots, two-hybrid analysis, and fluorescence polarization titration, we showed that farnesylation increases the affinity for PMPs by a factor of about 10. Last, we provide evidence that the interaction between farnesylated Pex19p and PMPs is achieved through a farnesylation-induced structural change in Pex19p rather than through direct farnesyl-PMP interaction. Our results exemplify the biological relevance of isoprenylation-dependent protein-protein interactions.

EXPERIMENTAL PROCEDURES

Oligonucleotides, Plasmids, and Strains-Oligonucleotides and plasmids are listed in supplemental Tables 1 and 2. Plasmids pRAM1, pPC86-PEX19, pPC97-ANT1, pPC97-PEX3, and pPex10-GFP were cloned by introduction of PCR products generated from genomic DNA into the respective vectors, as stated in supplemental Table 2. pPC86-PEX19 C347R was derived from pPC86-PEX19 using primers RE1425/1426 and the QuikChange II kit (Stratagene). S. cerevisiae wild-type (BY4742) and the isogenic knock-out strains Δpex19, Δpex3, and Δram1 were obtained from the EUROSCARF strain collection (Frankfurt, Germany). The two-hybrid strain PCY2 is described in Ref. 30.
The pseudo wild-type, pex19 ΔC4, and pex19 C347R strains were generated by integration of PCR amplificates from pUG6 (31), generated with primer pairs RE1526/1529, RE1527/1529, and RE1528/1529, into BY4742. Strains in which the genomic copies of genes express proteins fused to Protein A have been described previously (32). Standard techniques were used for yeast growth and manipulation (33).

Cell Extracts, Subcellular Fractionation, and Immunoprecipitation of Native Complexes-Cell extracts were prepared by glass bead lysis in 50 mM HEPES, pH 7.4, 50 mM NaCl, and protease inhibitors as described (34). Preparation of postnuclear supernatants and gradient fractionation was carried out as described (35). Immunoprecipitation of Protein A-tagged Pex2p was performed as described in Ref. 32.

Image Acquisition-Fluorescence microscopic images were recorded on an Axioplan2 microscope (Zeiss, Jena, Germany) equipped with an αPlan-FLUAR ×100/1.45 numerical aperture oil objective and an AxioCam MRm camera (Zeiss) at room temperature. Samples were fixed with 0.5% agarose. If necessary, contrast was linearly adjusted by the "best fit" function of the acquisition software, Axiovision version 4.2 (Zeiss).

Protein Expression-GST-Pex19p and GST-Ras1p were expressed in Escherichia coli and purified as described for GST-Pex19p (18). S. cerevisiae FTase was expressed and purified as described for mammalian FTase (36). For expression of GST-Pex19p in S. cerevisiae, strains were grown in SD-ura to mid-log phase and induced with 0.5 mM copper sulfate for 2 h. Cells were harvested and resuspended in phosphate-buffered saline containing 1 mM dithiothreitol, Roche Applied Science Complete protease inhibitor mixture, and 1 mM phenylmethylsulfonyl fluoride. Cells were broken with glass beads, and the lysate was clarified by centrifugation at 15,000 rpm (SS-34) for 50 min.
Glutathione-Sepharose 4B (GE Healthcare) was incubated with the supernatant and washed with phosphate-buffered saline, and the bound protein was eluted with phosphate-buffered saline containing 10 mM glutathione.

In Vitro Farnesylation-Pex19p was farnesylated on a column by 10 µM purified Ram1p/Ram2p in buffer F (50 mM HEPES, pH 7.4, 5 mM MgCl2, 50 mM NaCl, 5 mM dithiothreitol) with 20 µM farnesyl pyrophosphate (Sigma) for 30 min at room temperature, washed in buffer F, and eluted with 10 mM reduced glutathione. For limited proteolysis, fluorescence titration, and CD spectroscopy, Pex19p was washed on a column with buffer F and then with 10 mM potassium phosphate buffer (pH 7.4) and cleaved by thrombin (24 units/mg Pex19p) at 4 °C for 16 h.

Anion Exchange Chromatography and Gel Filtration-Anion exchange chromatography was carried out using a ResourceQ column (GE Healthcare) with buffer F as running buffer. Bound protein was eluted by a linear salt gradient to 500 mM. Gel filtration was carried out on a Superdex 200 16/60 pg column (GE Healthcare) using buffer F as running buffer.

Limited Proteolysis-100 µg of purified Pex19p was incubated with 20 ng of trypsin at 30 °C for 45 min. Samples were taken at different time points. Proteolysis was stopped by adding 5× SDS sample buffer and incubating at 95 °C for 5 min. Samples were analyzed by SDS-PAGE and Coomassie Blue staining.

Peptide Blot Assays-PMP blots were synthesized in parallel by the SPOT technique (37). The Pex19p in vitro binding assay with peptide arrays was carried out essentially as described (18). Purified GST-Pex19p or farnesylated GST-Pex19p was added to the peptide-containing membranes at 100 µg/ml. Monoclonal anti-GST antibodies (Sigma) were used to detect bound Pex19p. Uniformity of spotting was verified by incubating the Pex19p-probed membrane with Pex19p FARN, which yielded a comparable picture (not shown).
Fluorescence Polarization Titration-The Pex13p peptide GIFAIMKFLKKILYR was synthesized and labeled with fluorescein isothiocyanate by Biosyntan (Berlin, Germany). Titrations of the fluorescein isothiocyanate-Pex13p peptide with Pex19p and Pex19p FARN were performed in a Fluoromax SPEX II fluorometer equipped with a 1971 autopolarizer (L configuration; Horiba Jobin Yvon, Munich, Germany) at 20 °C. Increasing amounts of Pex19p were added to 107 nM peptide in 1.2 ml of 20 mM HEPES, pH 7.4, 150 mM NaCl, 5 mM MgCl2. The solution was carefully mixed, and the fluorescence polarization signal (excitation 488 nm, emission 517 nm) was recorded for at least 5 min after each addition. The polarization signal was determined as p = ((VV/VH)/(HV/HH) − 1)/((VV/VH)/(HV/HH) + 1) from the four combinations of the inlet and outlet polarizers (where V represents the vertical and H the horizontal position of the polarization plane; first character, inlet polarizer; second, outlet polarizer). Concentrations of free and bound Pex19p or Pex19p FARN were calculated from the starting concentrations of Pex19p and peptide applied and the amplitude of the titration curve. For both titration experiments, data were fitted using GraFit version 3.0 software (Erithacus) according to an A + B = AB binding model.

Circular Dichroism and Secondary Structure Prediction-CD spectra were recorded using a Jasco J-710 spectropolarimeter (Jasco, Grossumstadt, Germany). Far-UV spectra were recorded from 190 to 250 nm (10-fold oversampling) at 20 °C with proteins at a concentration of 0.2 mg/ml in 10 mM potassium phosphate buffer (pH 7.4) in cylindrical quartz cuvettes (Hellma, Müllheim, Germany) with 0.1 cm path length. Secondary structure predictions were calculated using the algorithms CDSSTR, SELCON3, and CONTINLL as implemented in CDPro (38).

Miscellaneous-Preparation of yeast whole cell extracts and immunoblotting were performed according to standard procedures.
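The polarization formula and the A + B = AB binding model described above can be illustrated numerically. The following Python sketch (a hypothetical illustration, not the GraFit analysis actually used in the study) computes the G-factor-corrected polarization from the four polarizer channel combinations and fits a dissociation constant to a synthetic titration using the exact quadratic solution of the two-component binding equilibrium; the 107 nM peptide concentration matches the text, while the titration points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def polarization(vv, vh, hv, hh):
    # p = ((VV/VH)/(HV/HH) - 1) / ((VV/VH)/(HV/HH) + 1),
    # where HV/HH is the G factor correcting for detector bias.
    ratio = (vv / vh) / (hv / hh)
    return (ratio - 1.0) / (ratio + 1.0)

def fraction_bound(p_total, l_total, kd):
    # Exact solution of A + B = AB: fraction of labeled peptide (L)
    # bound at total protein P, total peptide L, and dissociation
    # constant K_D (all in the same concentration units).
    b = p_total + l_total + kd
    ab = (b - np.sqrt(b * b - 4.0 * p_total * l_total)) / 2.0
    return ab / l_total

# Synthetic titration: 107 nM peptide (as in the text),
# assumed K_D of 7.6 nM (the farnesylated-Pex19p value).
l_tot = 107.0
kd_true = 7.6
p_series = np.linspace(0.0, 1000.0, 40)   # nM Pex19p added
signal = fraction_bound(p_series, l_tot, kd_true)

# Recover K_D from the titration curve.
popt, _ = curve_fit(lambda p, kd: fraction_bound(p, l_tot, kd),
                    p_series, signal, p0=[50.0])
```

Because the peptide concentration (107 nM) is well above the fitted K_D, the quadratic (ligand-depletion) form of the binding isotherm is needed; the simple hyperbolic approximation would bias the fit.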
Immunoreactive complexes were visualized using anti-rabbit or anti-mouse IgG-coupled horseradish peroxidase in combination with the ECL™ system from Amersham Biosciences. The antibodies used were obtained from commercial sources, such as the monoclonal anti-GST (Sigma) and anti-yeast 3-phosphoglycerate kinase, Pgk1p (Molecular Probes, Inc., Eugene, OR), or have been described previously (namely Pex19p (12) (45)). The antibodies against Pxa1p and the anti-green fluorescent protein (GFP) antibodies were kind gifts from M. Schneider and W. H. Kunau, respectively (Bochum, Germany). The yeast two-hybrid analysis was performed essentially as described (46). β-Galactosidase activities were assayed according to the manufacturer's instructions and expressed as μmol of chlorophenol red-β-D-galactopyranoside hydrolyzed/min/cell (Clontech, Palo Alto, CA).

RESULTS

All Pex19p Is Processed by Farnesyltransferase-To determine the level of Pex19p farnesylation in vivo, we analyzed cell lysates of S. cerevisiae wild-type cells by immunoblotting with antibodies directed against Pex19p. It is well known that farnesylation increases the electrophoretic mobility of Pex19p (11). In wild-type cells, Pex19p appeared as a double band, with the slower migrating form being less abundant (Fig. 1A). However, in a knock-out strain of the FTase β-subunit Ram1p, both bands shifted to a higher apparent molecular weight region (Fig. 1A). Complementation of the Δram1 knock-out by reintroduction of Ram1p on a plasmid restored the wild-type mobility of the majority of Pex19p (Fig. 1B). Importantly, as shown in Fig. 1A, the non-farnesylated form of Pex19p of a Δram1 mutant (arrowhead) cannot be detected in extracts from wild-type yeast (arrow). These results indicate that (i) Pex19p farnesylation is dependent on Ram1p and that (ii) the complete pool of Pex19p is modified by Ram1p in wild-type cells.
Our results further show that the slower migrating form of Pex19p in wild-type cells does not represent non-farnesylated Pex19p but that the molecular weight shift is most likely due to another, so far unknown modification. Phosphorylation can presumably be excluded, since the double band persisted when extracts were treated with calf intestine phosphatase (not shown). Farnesylated and non-farnesylated Pex19p can be distinguished in wild-type cells when the farnesylation machinery is challenged by overexpression of Pex19p (Fig. 1C). Farnesylation remained unaltered when peroxisome biogenesis was induced by oleate (Fig. 1D). In both glucose- and oleate-grown cells, about 3% of the Pex19p was associated with cellular organelles of a 20,000 × g sedimentation fraction (Fig. 1D), confirming Pex19p as a largely cytosolic protein that is temporarily or loosely associated with membranes. Also, the absence of Pex3p did not interfere with processing of Pex19p (Fig. 1D), indicating that Pex3p is not the recruitment factor for Pex19p farnesylation or subsequent farnesyl-dependent processing steps at the ER. Thus, farnesylation of Pex19p is complete and stable and has been maintained throughout evolution from yeast to humans, as indicated by the nearly complete conservation of the farnesylation site (Fig. 1E).

FIGURE 2. A, in all cases, the kanMX4 marker was used to select for integration into the genome. CKQQ, the terminal four amino acids of wild-type S. cerevisiae Pex19p (CaaX box). For generation of the pseudo-wild type, the kanMX4 marker was introduced after the STOP codon of wild-type PEX19. In the pex19 C347R mutant, the cysteine of the farnesylation site was genomically replaced by arginine. In the pex19 ΔC4 strain, the farnesylation site CKQQ was removed by inserting a STOP codon followed by kanMX4 after PEX19 base pair 1038, corresponding to amino acid 346. B, appearance of Pex19p modification in genomic pex19 farnesylation mutants.
The indicated strains were grown on glucose and oleate medium and analyzed by immunoblot with the antibodies indicated. Pgk1p was used as loading control. C, growth assay on oleate liquid medium. Strains were precultured in synthetic medium (SD) with 0.3% glucose, washed, and inoculated at 0.05 A600 units/ml in 0.1% oleate and 2% ethanol medium. At the indicated time points, 1-ml samples were taken, sedimented by centrifugation, and washed, and A600 was determined.

Efficient Peroxisome Biogenesis Requires Pex19p Farnesylation-To investigate the requirement for Pex19p farnesylation in vivo and to overcome potential overexpression artifacts associated with complementing plasmids, we constructed a series of genomic mutants using the kanMX4 selection marker (31). We introduced a single point mutation in the Pex19p farnesylation site (C347R), removed the whole CaaX box by deleting the last four amino acids of Pex19p (ΔC4), or integrated the selection marker right after the PEX19 gene to obtain a pseudo-wild type as a control for possible effects of marker integration on the stability of the transcripts (Fig. 2A). We confirmed the apparent higher molecular weight of Pex19p in the genomic mutants, indicating that Pex19p was not farnesylated in these strains (Fig. 2B). Notably, all mutants were expressed at wild-type levels, thereby excluding PEX19 overexpression effects that might circumvent the requirement for the farnesyl moiety (12). The pseudo-wild type behaved like the wild-type strain in all of our assays. When yeast is grown on oleate as the only carbon source, peroxisomes become essential for growth, because they are the only site of fatty acid β-oxidation (47). To test whether Δram1 or strains expressing the Pex19p mutant versions are capable of utilizing oleic acid as the sole carbon source, we performed growth tests in liquid culture. As shown in Fig.
2C, the growth of the two genomic pex19 farnesylation mutants, C347R and ΔC4, as well as of the Δram1 mutant was significantly reduced compared with the wild-type growth rate, which indicates that these mutants failed to metabolize oleate. On ethanol, all mutants behaved like wild type (Fig. 2C), which excludes pleiotropic effects and more likely indicates a clear peroxisomal defect due to the lack of Pex19p farnesylation.

Defects in Peroxisomal Matrix Protein Import-To investigate the reason for the growth defect on oleate, we analyzed the distribution of GFP-SKL, as a peroxisomal matrix marker protein, by direct fluorescence microscopy (Fig. 3A). As expected, in wild-type cells, GFP-SKL exhibited a punctate staining pattern, which is typical for a peroxisomal localization and indicates functional import of PTS1 proteins. In contrast, GFP-SKL was mislocalized to the cytosol in Δpex19 cells (Fig. 3A), where no peroxisomal structures are detectable (48). In the farnesylation mutants as well as in the Δram1 mutant, different species of cells were observed. Some mutant cells showed a wild type-like punctate pattern (Fig. 3A, top), whereas in others, GFP-SKL was totally misdirected to the cytosol (Fig. 3A, bottom). A third species of farnesylation mutant cells exhibited an intermediate phenotype of punctate peroxisomal structures with a cytosolic background staining, indicative of a partial import defect (Fig. 3A, middle). For further investigation, a sedimentation analysis was performed, and the distribution of catalase (Cta1p) and 3-oxoacyl-CoA thiolase (Fox3p) was analyzed by immunoblotting (Fig. 3B). In wild-type cells, the major amount of both proteins is localized to the organellar pellet fractions. Both peroxisomal marker proteins were cytosolic in samples derived from Δpex19 cells as a consequence of the peroxisomal biogenesis defect of this mutant strain (Fig. 3B). Interestingly, the farnesylation mutant pex19 C347R exhibited a significant cytosolic mislocalization of both marker enzymes, which was more pronounced for Cta1p. Thus, the mutant strain is characterized by a partial mislocalization of peroxisomal matrix proteins. The Δram1 mutant exhibited a similar phenotype concerning the distribution of the marker enzymes, with the difference that the overall concentration of the proteins was decreased. We assume that the decreased concentrations of marker enzymes arise from secondary effects. In the Δram1 mutant strain, the general farnesylation defect affects not only Pex19p but also the function of other usually farnesylated proteins. This could affect regulatory processes, which may lead to the reduced protein concentrations. The partial mislocalization of peroxisomal marker proteins in the Δram1 strain was corroborated by enzyme activity measurements of Cta1p (Fig. 3C). Taken together, these results show that the import of PTS1 and PTS2 proteins is partially disturbed in the Δram1 and the Pex19p mutant strains, which is in agreement with a requirement of Pex19p farnesylation for proper peroxisome biogenesis.

FIGURE 3. Matrix protein import is disturbed in farnesylation mutants. A, subcellular localization of the peroxisomal marker GFP-PTS1. Cells expressing peroxisomal targeting signal 1 (SKL) fused to GFP were grown on synthetic medium (SD) with 0.1% oleate for 3 days and analyzed by direct fluorescence microscopy. Bar, 5 μm. B, separation of cytosolic and organellar fractions by differential centrifugation. Cell-free postnuclear supernatants of the indicated strains were fractionated by centrifugation into a soluble supernatant and a particulate pellet fraction. Equal amounts of the obtained total (T), supernatant (S), and pellet (P) fractions were subjected to immunoblot analysis with antibodies raised against peroxisomal catalase (Cta1p) and oxo-acyl-CoA thiolase (Fox3p) or to catalase activity measurements (C). The total activity of Δpex19 was set as 100%.
Defects in PMP Stability-The proposed function of Pex19p is that of a soluble import receptor and/or a chaperone for newly synthesized PMPs (15, 17, 18, 20). Therefore, Pex19p is only indirectly involved in matrix protein import. To investigate the reason for the observed partial matrix protein import defect in the Δram1 and the Pex19p mutant strains, we analyzed the cellular distribution of PMPs by density gradient centrifugation. Immunoblot analysis revealed that in the farnesylation mutants, peroxisomal structures are detectable at a density similar to that of wild-type peroxisomes (data not shown), indicating that the targeting of PMPs to peroxisomes, and thus peroxisome biogenesis, is not generally disturbed in the farnesylation mutants. Next, we analyzed the stability of PMPs, which is known to be reduced in a Δpex19 deletion strain (48), and asked whether Pex19p farnesylation would affect the steady-state concentration of PMPs (Fig. 4A). As expected, the Δpex19 strain exhibited a significantly reduced steady-state level of all PMPs tested (Fig. 4A, lane 2). The Δram1 and the Pex19p farnesylation mutant strains showed no significant difference from the wild type concerning the presence of the components of the docking complex, Pex13p and Pex14p, as well as Pex15p and Pex3p, the class II PMP. However, the amounts of the most abundant peroxisomal membrane protein and proliferation factor, Pex11p, of the peroxisomal ABC transporter Pxa1p, and, importantly, also of the RING finger peroxins Pex2p and Pex10p were as reduced in the farnesylation mutants as in the Δpex19 deletion strain. The detection of Pex2p and Pex10p was achieved by genomic integration of Protein A at the C terminus of both proteins. To corroborate the presumed instability of the proteins, we monitored the amount of a GFP fusion of Pex10p, which was expressed under the control of the MET25 promoter in the mutant cells (Fig. 4B).
The cellular concentration of the fusion protein was reduced in the farnesylation mutants, and the addition of methionine to the culture medium repressed the expression of the fusion protein, demonstrating the control by the MET25 promoter. These results indicate that the observed altered steady-state concentration of the PMPs in the farnesylation mutants is not caused by differential transcriptional regulation but is instead likely due to a reduced stability or increased turnover. The RING finger peroxins Pex2p and Pex10p are components of a larger assembly, the importomer of the peroxisomal protein import machinery (32). Since the steady-state concentration of the RING finger peroxins was drastically reduced in the farnesylation mutants, we investigated whether this is also reflected in the protein composition of the importomer. The importomer was isolated by affinity chromatography of Pex2p, genomically tagged with Protein A (Fig. 4C). The bait protein, Pex2p-Protein A, was isolated in reduced amounts in the farnesylation mutants as well as in the Δpex19 strain because of its lower steady-state concentration (Fig. 4, A and C). The amounts of the two other RING finger peroxins, Pex10p and Pex12p, were drastically reduced in the eluates of the farnesylation mutant strains compared with wild type, whereas the amounts of the docking components Pex13p and Pex14p were less affected (Fig. 4C). Since the RING peroxins are required for peroxisomal matrix protein import, their virtual absence from the importomer can explain the observed mislocalization of matrix proteins to the cytosol and the growth defect on oleic acid medium of the farnesylation mutants.

Expression and in Vitro Farnesylation of Yeast Pex19p-To study the function of the farnesyl group of Pex19p in vitro, we expressed S. cerevisiae Pex19p in E. coli and purified the recombinant protein (Fig. 5A, lane 6). The apparent molecular mass on SDS-PAGE is about 45 kDa, with a predicted molecular mass of 39.7 kDa (Fig. 5A).
To distinguish between effects of the high charge-to-hydrophobicity ratio (the predicted pI being 4.1) and effects of protein shape, we estimated by gel filtration the molecular mass of the native protein to be 46.8 kDa (Fig. 5C). This result suggested that Pex19p is not a protein of globular shape, in agreement with a structural analysis of human Pex19p that identified an unstructured N-terminal domain (20) as well as with our analysis using the GLOBE algorithm (B. Rost; available on the World Wide Web), which predicts a non-globular shape for Pex19p. Next, the purified protein was farnesylated by recombinant yeast FTase (Fig. 5A). For comparison, Ras1p was also expressed, purified, and farnesylated in vitro. For both proteins, farnesylation increased the electrophoretic mobility, and the lipid modification was dependent on the presence of FTase and farnesyl pyrophosphate (Fig. 5A, lanes 1 and 5). The farnesylation was confirmed by electron ionization mass spectrometry (Fig. 5B). The observed molecular mass shift of 208 Da clearly shows the addition of the farnesyl moiety. Since in this case the farnesylation reaction was not complete, the non-farnesylated form was detected in both spectra. The additional peak at higher molecular weight is thought to be due to a contamination present in both samples. The farnesylated Pex19p was also analyzed by gel filtration and anion exchange chromatography (Fig. 5C). From gel filtration, the molecular mass of farnesylated Pex19p was calculated to be 58.8 kDa. This is more than can be expected from the addition of a 0.20-kDa farnesyl group. Moreover, upon separation by anion exchange chromatography, the farnesylated Pex19p exhibited a shift in the elution profile, which might be due to a change in the surface charge of the protein. To further investigate the effect of farnesylation on the conformation of Pex19p, we performed a limited proteolysis of both forms of Pex19p with trypsin (Fig. 5D).
Both forms exhibited a different fragmentation pattern, which cannot be explained solely by the increased mobility caused by the farnesylation. Taken together, these data indicate that Pex19p might undergo a structural change upon farnesylation.

Evidence for a Farnesyl-induced Increase in α-Helical Content-In addition, we directly compared Pex19p and Pex19p FARN by CD analysis (Fig. 5E). Qualitative inspection of the spectra indicated a decrease in residual ellipticity at wavelengths above 205 nm and an increase at lower wavelengths, indicative of an increase in α-structured domains upon farnesylation. For quantitative analysis, we employed the algorithms CDSSTR, SELCON3, and CONTIN/LL of the CDPro suite (38) with various basis data sets. The largest basis data set, SMP53, included 40 soluble proteins and 13 membrane proteins (49, 50). The CONTIN/LL algorithm (38, 51) indicated an increase in helical content by 10% and a decrease in unstructured domains by 7.3% upon introduction of the farnesyl moiety (Fig. 5E). All changes were qualitatively robust against a change of algorithm as well as against a change of basis data sets (Table 1). Secondary structure predictions calculated from the primary sequence by PHD (52) or SOPM (53) predicted an α-helical content of 43 or 54%, close to our averaged experimentally derived value of 52% (Table 1). From the secondary structure predictions, we concluded that Pex19p has an α-helical content in the range of 50% and that this content increases upon farnesylation, possibly due to structuring of the otherwise disordered N terminus (20).

Farnesylation of Pex19p Strongly Enhances Binding to Peroxisomal Targeting Motifs in PMPs-Prenylation of proteins is known to affect membrane association as well as protein-protein interaction (5). We tested whether farnesylation would affect the binding of Pex19p to its binding sites in PMPs, which have been demonstrated to be part of the signal sequence responsible for peroxisomal membrane targeting (18).
To that end, farnesylated and non-farnesylated GST-Pex19p were added to identical cellulose membranes, which contained an array of synthetic 15-mer peptides scanning the Pex19p binding regions with the core binding sequence at the central positions (18). Sites were chosen according to the prediction of our algorithm and included a selection of both novel and already characterized regions. Immunological detection of bound Pex19p revealed that lipid-modified Pex19p bound to the same peptides as did Pex19p but apparently with higher affinity (Fig. 6A). The previously identified sites in Pex13p, Pex11p, and Pex25p (18) bound significantly more Pex19p FARN. A second predicted site in Pex11p (amino acids 49-74) was efficiently recognized only by the modified form of Pex19p. Several other binding sites predicted for the PMPs Pex12p, Pex17p, Pex27p, and Ant1p were also verified by the in vitro Pex19p binding assay. Importantly, all identified sites bound Pex19p FARN more avidly than the unmodified protein. As a control, the N-terminal 31 amino acids of Pex13p failed to interact with either version of Pex19p, demonstrating that the farnesyl moiety did not cause nonspecific binding of Pex19p to random peptides. These results were complemented by two-hybrid analysis. We tested interactions of two-hybrid constructs expressing wild-type PEX19 or a CaaX box mutant (Pex19p C347R) together with Ant1p, Pex11p, or the central domain of Pex13p (amino acids 173-258). In all cases, the Pex19p-PMP interaction was strongly reduced if Pex19p could not be farnesylated (Fig. 6B). The different affinities cannot be attributed to varying expression levels of wild-type and mutated Pex19p, as shown in the inset of Fig. 6B. Also for Pex3p, which is a class II PMP (17, 54) and interacts with a distinct domain of Pex19p (23, 55), Pex19p farnesylation was required for efficient interaction (Fig. 6C).
To quantify the farnesylation dependence of the Pex19p-cargo interaction, we analyzed the interaction of recombinant Pex19p and Pex19p FARN with a fluorescently labeled Pex13p peptide by fluorescence polarization titration (Fig. 6D). The dissociation constants for Pex19p binding to its target peptide in Pex13p were fitted with an A + B = AB model and yielded a K_D of 64 nM for non-farnesylated Pex19p. Farnesylation reduced the K_D to 7.6 nM (Fig. 6D). In summary, peptide blots, two-hybrid assays, and fluorescence polarization titration indicate an about 10-fold reduced PMP affinity of non-farnesylated Pex19p.

DISCUSSION

An Evolutionarily Conserved Farnesylation Site-We show here that in S. cerevisiae, all cellular Pex19p is processed by FTase, irrespective of peroxisome induction. The Pex19p farnesylation site is conserved in all species (Fig. 1E), with PEX19 from trypanosomes being the only exception so far (56). This deviation is in concert with a strong conservation of the PEX19 farnesylation site in eukaryotes, since trypanosomes mark one of the earliest branching points in the eukaryotic lineage (57). Considering the low overall sequence conservation of Pex19p, the preservation of the farnesylation site throughout kingdoms (Fig. 1E) indicates that the lipid modification is an essential component of the Pex19p protein. Farnesylation sites have originally been described as "CaaX" (Ca1a2X) boxes, with "aa" being small aliphatic amino acids and X being any amino acid (3). Pex19p from S. cerevisiae has an unusual farnesylation motif, with lysine and glutamine in the a1 and a2 positions not being "small aliphatic amino acids." Crystallographic inspection of mammalian FTase has revealed that there are virtually no restrictions on the a1 site, whereas Gln in the a2 site is "forbidden" (58).
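The roughly 10-fold K_D difference measured above (64 nM versus 7.6 nM) translates into a substantial difference in cargo occupancy at sub-saturating Pex19p concentrations. A minimal back-of-the-envelope sketch, using the simple protein-excess binding isotherm and a purely illustrative Pex19p concentration of 20 nM (not a value from the study):

```python
def occupancy(pex19_nM, kd_nM):
    # Fraction of a PMP binding site occupied when free Pex19p
    # approximately equals total Pex19p (protein-excess limit):
    # f = [P] / ([P] + K_D)
    return pex19_nM / (pex19_nM + kd_nM)

# K_D values from the fluorescence polarization titration;
# 20 nM Pex19p is an arbitrary illustrative concentration.
f_farn = occupancy(20.0, 7.6)    # farnesylated Pex19p
f_unmod = occupancy(20.0, 64.0)  # non-farnesylated Pex19p
```

Under these assumptions, roughly 72% of sites would be occupied by farnesylated Pex19p but only about 24% by the unmodified protein, illustrating why the affinity loss can degrade PMP capture without abolishing it.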
Although the human Pex19p CaaX motif (CLIM) is a "classical" one, this apparent conflict cannot be resolved by reference to differences between human and yeast FTase, because there are at least two human CKQQ proteins, one of which is described as farnesylated (59-61). The conservation of the CKQQ motif in the Pex19p proteins of other yeasts (Fig. 1E) makes us confident that the unusual CKQQ motif is also modified in these yeasts. However, Pex19p from Y. lipolytica (28) and Pichia pastoris (13) were reported not to be farnesylated. This notion was based on point mutations in the CaaX motif that did not cause a mobility shift of Pex19p. It is possible that farnesylation was simply not detectable in these experiments. The Pex19p double band observed by immunoblot analysis (Fig. 1A) had been anticipated to represent farnesylated and non-farnesylated forms of Pex19p; the concurrent mobility shift of both bands in a Δram1 mutant now excludes this possibility. The second band could point to a second posttranslational modification.

Farnesylation before Membrane Recruitment-A bimodal localization is suggested for Pex19p, with a few percent of the Pex19p protein at the peroxisome and the bulk amount in the cytosol (12, 24). Recent data suggest that Pex19p is first recruited to the ER (22). FTase is probably cytosolic (62), but the methyltransferase, which catalyzes methylation of the farnesylated cysteine, has been localized to the ER in yeast and mammals (63). Yeast CaaX proteases, which cut off the last three amino acids of the CaaX box sequence, are also localized to the ER (64). We could not find differences in farnesylation levels under conditions that affect peroxisome biogenesis: growth on oleate or the absence of Pex3p (Fig. 1). Thus, we have to place Pex19p farnesylation epistatically upstream of recruitment to Pex3p. It will be interesting to learn whether Pex19p recruitment (22) is associated with subsequent processing steps at the ER.
A Role for Pex19p Farnesylation in Peroxisome Biogenesis-Having shown that Pex19p is fully farnesylated in vivo, we demonstrate that the endogenous concentration of Pex19p does not suffice to sustain normal peroxisome biogenesis when farnesylation is prevented. The Δram1 mutant as well as the Pex19p CaaX box mutants were unable to grow on oleic acid medium (Fig. 2C), which is typical for mutants affected in peroxisome function (65). This was not due to a general growth defect, since the mutants grew normally on other non-fermentable carbon sources like ethanol (Fig. 2C). In contrast to pex19Δ cells, the mutants still contained peroxisomes, indicating that the non-farnesylated Pex19p still maintained part of its function in peroxisome biogenesis. The PEX19-defective human cell line PBD399 represents complementation group 14 (complementation group J in Japan) and expresses a truncated version of Pex19p with a 44-amino acid deletion at the C terminus (11, 15, 26). This underscores the importance of the farnesylation motif, although it is unclear whether the truncated Pex19p is stably expressed in this patient cell line. On the other hand, introduction of plasmid-borne Pex19p with a disabled CaaX box largely complements the biogenesis defect of a Δpex19 strain (12, 13, 15, 26-29). This discrepancy might be explained by overexpression effects in plasmid-based expression, which increases the local Pex19p concentration at its place of action and thereby compensates for the 10-fold lower affinity for its substrates. However, the yeast farnesylation mutants exhibited a partial import defect for peroxisomal matrix proteins (Fig. 3), which might impair peroxisome function and thus explain the growth defect on oleic acid medium. The import defect is assumed to be caused indirectly by a defect in PMP targeting or stability.
This assumption is corroborated by the observation that mutants defective in Pex19p farnesylation are characterized by a significantly reduced steady-state concentration of prominent PMPs (Pex11p, Ant1p) but also of essential components of the peroxisomal import machinery, especially the RING peroxins (Fig. 4), which were almost depleted from the importomer. A selective instability of the RING peroxin Pex2p has also been observed in Yarrowia lipolytica pex19 mutant cells (28). Several other peroxins, such as Pex13p and Pex14p, were rather stable in the Pex19p farnesylation mutant cells (Fig. 4). These proteins might either be intrinsically more stable prior to insertion or be targeted to the peroxisomal membrane more efficiently (i.e., even when the affinity for Pex19p is reduced), thereby circumventing the absolute requirement for farnesylated Pex19p. Similarly, the selective absence of the RING peroxin Pex2p in a Y. lipolytica pex19Δ strain might indicate that targeting of this PMP is most critical, whereas others can target to peroxisomes even without Pex19p, although very inefficiently in this organism (28). Clearly, more work is required to substantiate such a hypothesis.

A Role for Pex19p Farnesylation in Substrate Recognition-We provide three lines of evidence that Pex19p farnesylation is crucial for efficient binding of PMPs. First, we detected in peptide blots sampling PMP binding sites that recombinant and completely farnesylated Pex19p (Pex19p FARN) bound significantly more strongly than the non-farnesylated protein to its binding sites (Fig. 6A). Second, in two-hybrid assays, abrogation of Pex19p farnesylation strongly reduced PMP interaction (Fig. 6, B and C) (12). Third, quantification of the interaction of Pex19p and Pex19p FARN with a Pex13p peptide by fluorescence polarization titration yielded K_D values that differed by a factor of about 10 (Fig. 6D).
Peptide scans and two-hybrid analysis are entirely different ways of analyzing protein-protein interactions. In peptide scans, PMP peptides are synthesized on a solid surface and probed with recombinant, in vitro farnesylated Pex19p, which allows only a limited degree of PMP secondary structure. In the two-hybrid assay, on the other hand, both proteins are expressed in the same cell and are expected to assume their native conformation. Finally, fluorescence polarization accurately measures the protein-peptide interaction in solution. The K_D of 7.6 nM for Pex19p FARN suggests that PMPs are tightly bound by Pex19p. In vivo evidence exists that human Pex19p is similarly dependent on farnesylation for efficient PMP recognition. Deletion of the CaaX box drastically reduced the affinity for several PMPs, including Pex13p, in yeast two-hybrid assays (16). Consistently, the farnesylation-dependent Pex19p-PMP interactions could not be detected in a bacterial two-hybrid system, in which Pex19p cannot be farnesylated (66). However, the interaction of human Pex19p with Pex3p was not affected in a farnesylation mutant (16, 66), which might indicate that the interaction of Pex19p with Pex3p, which is a class II PMP and functions as a docking factor for the peroxisomal targeting of Pex19p (23), is somewhat different from the typical PMP recognition. This assumption is also corroborated by the domain structure of Pex19p, with an N-terminal domain interacting with Pex3p and a C-terminal region recognizing class I PMPs (17, 55). Evidence is also provided for the assumption that the farnesyl moiety does not directly contribute to Pex19p binding to its target proteins. First, farnesylated Pex19p bound to the same sites in PMPs as did non-farnesylated Pex19p (Fig. 6A); if the lipid tail directly interacted with the PMP binding sites, we would expect only Pex19p FARN to recognize these sites.
Second, Pex19p FARN showed increased affinity to all PMPs tested, making an interaction based on an exclusive farnesyl-PMP recognition unlikely. Furthermore, Pex19p farnesylation also enhanced the interaction with Pex3p (Fig. 6C), which is likely to bind Pex19p by a different mode (17, 55). Pex3p is described to represent proteins whose targeting is not dependent on Pex19p (class II PMPs) (17, 54). Finally, our data point to a conformational change upon farnesylation (Fig. 5, C and D). Secondary structure analyses indicated an increase in α-helical content (Fig. 5E). We assume that this structural change allows Pex19p to recognize PMPs more efficiently. In conclusion, we have shown in this work that farnesylation of the endogenous, fully farnesylated Pex19p is required for efficient PMP interaction, which in turn is essential for proper PMP topogenesis. Our results can be reconciled with all currently proposed roles for Pex19p function (import receptor, chaperone, insertion factor, association/dissociation factor; see Introduction), since these models agree on the physiological relevance of the Pex19p-PMP interaction (25). Our data do not preclude involvement of the farnesyl moiety of Pex19p also in membrane association, which, however, still needs to be analyzed.
The effectiveness of theory-based smoking cessation interventions in patients with chronic obstructive pulmonary disease: a meta-analysis Background Smoking cessation can effectively reduce the risk of death, alleviate respiratory symptoms, and decrease the frequency of acute exacerbations in patients with chronic obstructive pulmonary disease (COPD). Effective smoking cessation strategies are crucial for the prevention and treatment of COPD. Currently, clinical interventions based on theoretical frameworks are being increasingly used to help patients quit smoking and have shown promising results. However, theory-guided smoking cessation interventions have not been systematically evaluated or meta-analyzed for their effectiveness in COPD patients. To improve smoking cessation rates, this study sought to examine the effects of theory-based smoking cessation interventions on COPD patients. Methods We adhered to the PRISMA guidelines for our systematic review and meta-analysis. The Cochrane Library, Web of Science, PubMed, Embase, Wanfang, CNKI, VIP Information Services Platform, and China Biomedical Literature Service System were searched from the establishment of each database to April 20, 2023. Study quality was assessed using the Cochrane Collaboration's risk of bias assessment tool. RevMan 5.4 software was used for the meta-analysis. Heterogeneity was assessed with the I2 test, random- or fixed-effect models were selected accordingly, and sensitivity analyses were performed by excluding individual studies. Results A total of 11 RCTs involving 3,830 patients were included in the meta-analysis. Results showed that theory-based smoking cessation interventions improved smoking cessation rates, quality of life, and lung function in COPD patients compared to conventional nursing. However, these interventions did not significantly affect the level of nicotine dependence in patients.
Conclusion Theory-based smoking cessation intervention, as a non-pharmacologically assisted smoking cessation strategy, has a positive impact on motivating COPD patients to quit smoking and improving their lung function and quality of life. Trial registration PROSPERO registration number: CRD42023434357. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-16441-w. Introduction Chronic Obstructive Pulmonary Disease (COPD) is a heterogeneous lung disease characterized by persistent respiratory symptoms and airflow limitation caused by airway and/or alveolar abnormalities, as defined by the 2023 Global Initiative for Chronic Obstructive Lung Disease (GOLD) [1]. In China, the overall prevalence of COPD is 8.6%, with a rate of 13.7% in the population over 40 years old [2]. Smoking is a major risk factor for COPD, with smokers having 10.92 times the risk of developing COPD compared to non-smokers [3]. Additionally, smoking COPD patients have more respiratory symptoms than non-smokers and higher mortality rates [4]. Smoking cessation is considered the most effective and cost-effective strategy for preventing and treating COPD [5]. For COPD smokers, it is important to adopt effective methods to control their smoking behavior [6]. However, smoking cessation is challenging, and conventional approaches may not be effective for all patients. Although conventional smoking cessation methods such as telephone hotlines [7], medication [8], and comprehensive interventions [9] have been shown to improve patients' smoking cessation rates and lung function to some extent, patients' smoking cessation behavior is highly influenced by their health knowledge and behavior change.
Therefore, some scholars have attempted to use theory-guided interventions to improve COPD patients' smoking cessation rates, achieving good results. Currently, the theories related to the management of smoking cessation in COPD include "timing theory" [10], the "theory of planned behavior" [11], "the 5A nursing model" [12], and "cognitive-behavioral theory" [13]. The timing theory was proposed by the Canadian scholars Cameron et al. [10]. According to this theory, targeted intervention should be implemented according to the disease stage of patients, emphasizing the importance of understanding the different stages of the disease, focusing on the patients themselves, increasing their confidence in treating the disease, improving their current negative behaviors and emotions, and ultimately achieving a positive health outcome [14,15]. The theory of planned behavior was proposed by Ajzen [11], who held that individual behavior is mainly influenced by behavioral intentions, which comprise attitudes, subjective norms, and perceived behavioral control. Attitude refers to the positive or negative evaluation and experience of a behavior; subjective norms refer to the social pressure felt when adopting a behavior; and perceived behavioral control refers to self-efficacy and control over behavior [16,17]. The 5A nursing model [12] includes five components: assess, advise, agree, assist, and arrange. The aim is to improve patients' self-efficacy and self-management skills [18,19]. Cognitive-behavioral theory is an integration of cognitive theory and behavioral theory that uses methods to change negative cognitions, beliefs, and behaviors [13]. Cognitive-behavioral interventions involve selecting theories related to cognition and/or behavior, considering individual, behavioral, and environmental factors, and designing intervention plans based on the individual's understanding of behavior change and available resources. This approach promotes the formation of healthy behaviors and corrects
negative ones [20]. Theory-based smoking cessation interventions are designed to provide patients with the knowledge, skills, and support necessary to quit smoking successfully [21]. By understanding these theories, healthcare providers can design interventions that are tailored to the individual patient's needs and increase the likelihood of successful smoking cessation [22]. Currently, there has yet to be a systematic evaluation or meta-analysis of the effectiveness of theory-based smoking cessation interventions in COPD patients. Therefore, this study aims to synthesize randomized controlled trials of theory-based smoking cessation interventions in COPD patients and evaluate their effectiveness and impact on patients through meta-analysis, providing evidence-based support for their clinical application. Aims Our aim was to evaluate the effectiveness of theory-based smoking cessation interventions in patients with COPD. Design We followed the Cochrane Collaboration's Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [23]. The review protocol is registered on the PROSPERO database (Registration No: CRD42023434357). Literature search Two researchers searched for RCT studies published in the Cochrane Library, Web of Science, PubMed, Embase, Wanfang Knowledge Service Platform, CNKI, VIP Resource Integration Service Platform, and China Biomedical Literature Database. The search terms included chronic obstructive pulmonary disease*/chronic obstructive lung disease*/COPD, smoking/smoking cessation/smoking intervention, theory/model/theoretical. We conducted the search by combining subject terms and free words, and expanded our search by tracing the references of included studies in a snowball manner. The search covered the period from the establishment of each database up until April 20, 2023.
Study selection The inclusion and exclusion criteria were formulated according to the Population, Intervention, Comparison, Outcome, Study design (PICOS) framework. Inclusion criteria: (i) the study participants met the diagnostic criteria for COPD of the Chinese Medical Association Respiratory Disease Society (2021 revised edition) [24] and also met the relevant criteria for tobacco dependence in the Chinese Clinical Smoking Cessation Guidelines (2015 edition) [25]; (ii) the intervention was a theory-based smoking cessation method; (iii) the outcome indicators included at least one of smoking cessation rate, nicotine dependence level, lung function, quality of life, clinical composite symptom score, and number of clinical symptom exacerbations; (iv) the study type was a randomized controlled trial. Exclusion criteria: (i) duplicate publications; (ii) no relevant outcome indicators; (iii) literature with incomplete data or outcome data that could not be transformed and used; (iv) literature of low quality (based on a Cochrane Collaboration risk of bias assessment quality grade of C).
Quality assessment The Cochrane Collaboration's risk of bias assessment tool (RoB 2.0) [26] was used to evaluate the methodological quality of the included studies. It covers seven items: (i) random sequence generation, (ii) allocation concealment, (iii) blinding of participants and personnel, (iv) blinding of outcome assessment, (v) incomplete outcome data (loss to follow-up or withdrawal), (vi) selective reporting, and (vii) other biases. Each item was rated as high risk, low risk, or unclear. If all of the above criteria are fully met, the study quality level is A, indicating a low possibility of bias. If some of the criteria are met, the quality level is B, indicating a moderate possibility of bias. If none of the criteria are met, the quality level is C, indicating a high possibility of bias. In the event of disagreement between the two researchers, a third researcher was consulted to reach a consensus. Data extraction Two researchers independently screened articles, extracted data, and cross-checked them. The data were extracted according to the designed extraction strategy, which included: (i) basic information of the included studies, including title, first author, publication year, abstract, and source of the literature; (ii) study characteristics, including sample size, age of the experimental and control groups, and intervention measures; (iii) outcome indicators, including observation indicators, measurement tools or assessment criteria, measurement values, and research conclusions.
Data synthesis and analysis RevMan 5.4 software was used for the meta-analysis. Heterogeneity was tested with the I2 statistic. If P > 0.1 and I2 < 50%, heterogeneity was considered acceptable and the fixed-effect model was selected; if P ≤ 0.1 and I2 ≥ 50%, there was heterogeneity among the studies and the random-effect model was selected. A sensitivity analysis was conducted to identify sources of heterogeneity. The effect size for count data was expressed as an odds ratio (OR) with a 95% confidence interval (CI), while continuous data were expressed as a mean difference (MD) or standardized mean difference (SMD) with a 95% CI. Literature search outcomes We retrieved 431 relevant articles from the databases and obtained one additional article by reading the references of related studies. EndNote software was used to remove 207 duplicate records. A further 156 articles were excluded after reading titles and abstracts, as they were non-randomized controlled trials, had inconsistent research subjects, or were poorly related. After full-text screening, another 58 papers were excluded because of duplicated data, mismatched outcome indicators, data that could not be converted for use, or low quality. Ultimately, we included 11 articles [27][28][29][30][31][32][33][34][35][36][37] in our analysis, consisting of 9 Chinese-language articles [27][28][29][30][31][32][33][34][35] and 2 English-language articles [36,37]. A total of 3830 patients were included, 1989 in the experimental group and 1841 in the control group. The literature screening process and results are shown in Fig. 1.
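The pooling and heterogeneity rules described above can be sketched in a few lines of Python using the inverse-variance method on the log-odds-ratio scale, with Cochran's Q giving the I2 statistic; the 2x2 study counts below are hypothetical, not the trial data from this review:

```python
import math

# Minimal sketch of inverse-variance meta-analysis of odds ratios.
# Study data are hypothetical: (events_treat, n_treat, events_ctrl, n_ctrl).
studies = [
    (30, 100, 12, 100),
    (25, 120, 10, 115),
    (40, 150, 15, 140),
]

def log_or_and_var(a, n1, c, n2):
    """Log odds ratio and its variance from a 2x2 table."""
    b, d = n1 - a, n2 - c
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

effects = [log_or_and_var(*s) for s in studies]

# Fixed-effect pooled estimate: weight each study by 1/variance.
w = [1 / v for _, v in effects]
pooled_lor = sum(wi * e for wi, (e, _) in zip(w, effects)) / sum(w)

# Cochran's Q and the I^2 heterogeneity statistic.
Q = sum(wi * (e - pooled_lor) ** 2 for wi, (e, _) in zip(w, effects))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# 95% CI back-transformed to the OR scale.
se = math.sqrt(1 / sum(w))
or_, lo, hi = (math.exp(x) for x in
               (pooled_lor, pooled_lor - 1.96 * se, pooled_lor + 1.96 * se))
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {I2:.0f}%")
```

With I2 below 50% the fixed-effect weights above would be kept, per the rule stated in the text; a random-effects model would additionally inflate each study's variance by a between-study component.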
The basic characteristics of studies 11 RCTs published between 2013 and 2023 were included in the meta-analysis. The studies were based on three different theories: seven on the timing theory [27][28][29][30][31][32][33], two on the 5A nursing model [34,35], and two on cognitive-behavioral theory [36,37]. One study on the theory of planned behavior [38] was not included in the meta-analysis because it was not an RCT. The basic characteristics of the literature are shown in Table 1. Quality assessment Two researchers evaluated and graded the 11 included studies according to the RCT bias risk assessment tool [26] provided by the Cochrane Collaboration. The results are shown in Table 2 and Fig. 2. All studies were graded B in quality. Ten studies [27][28][29][30][31][32][34][35][36][37] described the generation of randomized sequences: seven studies [27-30, 32, 35, 37] used random number tables for grouping, one study [31] used odd-even numbering, one study [34] grouped according to patient preference, and one study [36] mentioned randomization but did not specify the method used. None of the 11 studies had any dropouts or missing data reports, and the experimental and control groups were comparable in terms of baseline levels before the intervention (P > 0.05). This suggests that the methodological quality of the included literature is fair, the risk of bias is low, and the credibility of the evidence is high.
Meta-analysis results and sensitivity analysis Smoking cessation rates Ten studies [27-29, 33, 35-37] were evaluated for smoking cessation rates. Four studies [27,28,30,32] reported smoking cessation rates at one month after the intervention, and nine studies [27,[29][30][31][32][33][35][36][37] reported smoking cessation rates at six months after the intervention. Fewer studies reported smoking cessation rates at three and twelve months after the intervention, so those time points were not included in the meta-analysis. The heterogeneity test gave I2 = 48% and P = 0.03, and the heterogeneity was considered acceptable. A fixed-effects model was used for the analysis, which showed that smoking cessation interventions at the different intervention times were more effective in increasing smoking cessation rates than the control condition [OR = 4.04, 95% CI (3.23, 5.06), P < 0.001, Fig. 3].

Fig. 3 Forest plot of smoking cessation rate

Table 2 Risk of bias summary. Quality grade: B is medium quality

A random-effects model was used for the analysis of lung function, which showed that the effect of theory-based smoking cessation interventions on lung function was better in the experimental group than in the control group [MD = 0.51, 95% CI (0.28, 0.74), P < 0.001, Fig. 5]. Sensitivity analysis was performed by excluding individual studies, and the results still showed significant heterogeneity, indicating that the heterogeneity was stable. A random-effects model was likewise used for quality of life, which showed that the effect of theory-based smoking cessation interventions on quality of life was better in the experimental group than in the control group [MD = -4.87, 95% CI (-6.34, -3.40), P < 0.001, Fig. 6]. Sensitivity analysis was again performed by excluding individual studies, and the results still showed significant heterogeneity, indicating that the heterogeneity was stable.
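The leave-one-out sensitivity analysis used above can be sketched as re-pooling the effect with each study removed and checking that the pooled estimate stays stable; the log odds ratios and variances below are hypothetical, not taken from the review:

```python
import math

# Leave-one-out sensitivity analysis sketch.
# Hypothetical per-study (log odds ratio, variance) pairs.
effects = [(1.40, 0.10), (1.35, 0.08), (1.50, 0.12), (1.10, 0.09), (1.45, 0.11)]

def pool(es):
    """Inverse-variance pooled log odds ratio."""
    weights = [1 / v for _, v in es]
    return sum(w * e for w, (e, _) in zip(weights, es)) / sum(weights)

overall = pool(effects)
# Re-pool with each study dropped in turn.
loos = [pool(effects[:i] + effects[i + 1:]) for i in range(len(effects))]

for i, lor in enumerate(loos, start=1):
    print(f"without study {i}: pooled OR = {math.exp(lor):.2f}")
print(f"all studies:      pooled OR = {math.exp(overall):.2f}")
```

If no single omission moves the pooled estimate materially or flips its significance, the result is considered robust, which is the sense in which the text reports the heterogeneity as "stable."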
Clinical symptom score Two studies [28,34] reported clinical symptom scores; these were not suitable for meta-analysis because of the paucity of literature. Both studies [28,34] showed that the clinical composite symptom scores were significantly lower in the experimental group than in the control group (P < 0.05). Frequency of clinical symptom exacerbation Two studies [33,34] reported the frequency of clinical symptom exacerbation, which was likewise not suitable for meta-analysis due to the small number of studies. Both studies [33,34] showed that the frequency of clinical symptom aggravation in the experimental group was significantly lower than that in the control group (P < 0.05). Discussion This study conducted a meta-analysis of data from 11 randomized controlled trials to assess the effectiveness of smoking cessation interventions in patients with COPD. The meta-analysis demonstrated that smoking cessation interventions based on the timing theory [10], the 5A nursing model [12], and cognitive-behavioral theory [13] significantly improved smoking cessation rates, lung function, and quality of life in COPD patients. However, these interventions did not significantly affect nicotine dependence levels.
The timing theory proposes that smoking cessation strategies should be targeted based on the disease stage of COPD patients. This approach emphasizes understanding the different stages of the disease, improving negative behaviors, and increasing patients' confidence to quit smoking [27][28][29][30][31][32][33]. The 5A nursing model involves individualized assessment, setting goals, and providing help and regular follow-up to change COPD patients' cognition of the disease and the harm of smoking so that they can establish correct health beliefs [34,35]. Cognitive-behavioral theory emphasizes the importance of addressing patients' smoking-related thoughts and behaviors for successful smoking cessation [36,37]. Healthcare providers can develop interventions by targeting the specific needs of patients at each stage of the disease, identifying the underlying causes of their smoking behavior, and selecting an appropriate rationale. The goal is to help COPD patients develop effective strategies to quit smoking and manage their disease symptoms. This study provides valuable insights into the effectiveness of theory-based smoking cessation interventions for COPD patients.
Theory-based smoking cessation interventions can improve the smoking cessation rate of COPD patients The findings of this study suggest that theory-based smoking cessation interventions can improve smoking cessation rates in patients with COPD. Given the strong association between COPD and smoking, it is crucial to address smoking cessation as a key component of COPD management [41]. Previous studies mainly used smoking cessation drugs to relieve withdrawal symptoms or used auxiliary methods to improve the success rate of smokers who wanted to quit, but not all patients were willing to accept, or needed, smoking cessation drugs to quit successfully [41][42][43][44]. The positive impact of theory-based smoking cessation interventions on smoking cessation rates can be attributed to their emphasis on understanding patients' individual needs, motivations, and barriers to quitting smoking, as well as providing tailored support and strategies to overcome these challenges. By addressing the psychological aspects of smoking behavior and incorporating behavioral change theories, these interventions can help patients develop the necessary skills and confidence to successfully quit smoking. The use of theory-based interventions is particularly promising because it allows for a more systematic and evidence-based approach to smoking cessation. It is also more conducive to patients' forming a strong desire to quit smoking and taking action, bringing about more effective and sustainable smoking cessation effects. The sensitivity analysis showed that the heterogeneity among the studies included in the meta-analysis was acceptable, indicating that the evidence is relatively reliable.
The effect of theory-based smoking cessation interventions on nicotine dependence levels is uncertain Nicotine dependence, also known as tobacco dependence, is a chronic disease [45]. A considerable number of COPD patients know the harm of smoking and intend to quit, but because they are addicted to smoking it is difficult for them to do so; their degree of tobacco dependence has not improved, and they still have a high risk of relapse after discharge [46]. The lack of a significant effect on nicotine dependence levels may be due to several factors, including the relatively short duration of the interventions and follow-up periods in the included studies, as well as potential differences in the measurement and reporting of nicotine dependence levels across studies. For patients, in addition to professional and scientific help throughout the smoking cessation process, better results can be achieved by combining pharmacological control and encouraging family members to provide adequate emotional support throughout the process. It is recommended that future studies be guided by theory and combined with pharmacological control to investigate this effect. Theory-based smoking cessation interventions improve lung function and quality of life in COPD patients Lung function is the gold standard for diagnosing and evaluating the severity of COPD, as it objectively reflects the degree of airflow restriction or obstruction in patients [47]. Because of the intake of large amounts of nicotine, tar, and some radioactive substances, smoking seriously damages the lung health of COPD patients, not only causing inflammatory changes but also threatening the function of the body's respiratory system [48]. As the duration of smoking increases, patients' lung function declines further, triggering a series of lung diseases and reducing their quality of life [49,50], so it is urgent to control their smoking behavior.
The improvement in lung function observed in this meta-analysis is consistent with previous research showing that smoking cessation can lead to significant improvements in lung function and reduce the risk of COPD exacerbations. By helping patients quit smoking, theory-based interventions may contribute to slowing the progression of COPD and improving patients' overall respiratory health. The observed improvement in quality of life is also an important finding, as COPD is known to have a significant impact on patients' physical, emotional, and social well-being. By addressing both the physical and psychological aspects of smoking behavior, theory-based interventions may help improve patients' overall well-being and quality of life. Limitations Several limitations of this study remain: (i) due to language limitations, only publicly available Chinese- and English-language literature was searched, which may have resulted in incomplete literature collection; (ii) the included studies did not mention allocation concealment or blinding methods, resulting in medium-quality studies, which may affect the reliability of the results to some extent; it is hoped that subsequent research will improve the rigor of allocation concealment and blinding to achieve higher quality; (iii) currently, most studies report only the short-term effects of theory-based smoking cessation interventions on COPD patients. Conclusion The findings of this study demonstrated that implementing theory-based smoking cessation interventions in conventional healthcare can have a positive effect on the smoking cessation rate, lung function, and quality of life of COPD patients. It is recommended that these interventions be widely implemented in clinical practice. Further investigation is required to confirm these findings because of limitations in the standardization and homogeneity of the included studies.
Fig. 1 Flow chart of literature screening
Table 1 Basic characteristics of the included literature. T: test group, C: control group; ① quit rate; ② nicotine dependence; ③ lung function; ④ quality of life; ⑤ clinical symptom score; ⑥ frequency of clinical symptom exacerbation
The Failure of the Bank of the Commonwealth: An Early Example of Interest Rate Risk This Economic Commentary describes the collapse and subsequent bailout of the Detroit-headquartered Bank of the Commonwealth in 1972. Commonwealth failed because it invested heavily in long-duration, fixed-rate municipal securities in the mid-1960s in a bet that interest rates would decline. Instead, with the beginning of the Great Inflation of 1965–1980, rates rose. Liquidity problems then ensued, and the bank approached failure. Unable to find an acquirer because of Michigan's banking restrictions, regulators instead bailed out the bank because of fears of contagion. This article also compares the collapse of Commonwealth with the spring 2023 failures of Silicon Valley Bank and First Republic. In particular, I discuss structural changes in banking that impacted the speed of the runs and the pools of potential acquirers. Introduction In 1970, the Bank of the Commonwealth was in severe trouble. The bank, headquartered in Detroit, Michigan, had grown fast since 1964, roughly tripling in size to $1.5 billion in assets. It had invested in long-duration municipal securities under the belief that interest rates would decline, as they had in previous cycles, and with the aim of earning a large capital gain. Instead, long-term interest rates increased, and the market value of the securities dropped. Furthermore, the recession of 1969-1970 reduced the bank's income. As the market became aware of its weakening condition, wholesale funding dried up. Sales of its securities to meet liquidity needs would have forced Commonwealth to recognize the losses, thus making it insolvent. Faced with Commonwealth's impending failure, unable to find an acquiring bank because of state branching restrictions, and unwilling to allow a bank of Commonwealth's size to fail, the Federal Deposit Insurance Corporation (FDIC) bailed out the bank in 1972.
If this pattern looks familiar, it should. Other than the speed of its failure and the details of how it was resolved, Commonwealth's path looks similar to that of Silicon Valley Bank (SVB). Both tripled in size in just a few years; both invested in long-duration, fixed-rate securities; and both had unstable funding bases. SVB was dependent on uninsured deposits, while Commonwealth was dependent on wholesale sources of funding and price-sensitive out-of-market time deposits.1 Each failed during a period of increased inflation and interest rates that followed a long period of low inflation and low rates. Finally, in both cases, uninsured depositors were protected. In Commonwealth's case, it was through the use of the discount window to keep the bank operating until the FDIC recapitalized it; in SVB's, it was through explicit guarantees of uninsured depositors by federal regulators. Edward S. Prescott is a senior economic and policy advisor at the Federal Reserve Bank of Cleveland. The author thanks Grant Rosenberger for excellent research assistance and Paola Boel for helpful comments. The views authors express in Economic Commentary are theirs and not necessarily those of the Federal Reserve Bank of Cleveland or the Board of Governors of the Federal Reserve System or its staff. The first goal of this Economic Commentary is to provide a more complete picture of Commonwealth's rise and fall than exists in the literature. The main analyses of Commonwealth in the economic literature are the description by Irvine Sprague (1986), who was on the FDIC's board of directors when Commonwealth was bailed out, and the analysis by Nurisso and Prescott (2017, 2020) of the origins in the 1970s of too-big-to-fail policies in the United States. The second goal is to compare the 1972 failure of Commonwealth to the spring 2023 failures of SVB and, to a lesser extent, First Republic.
2,3 As during the 2021-2023 period, the late 1960s were marked by increased inflation and tight monetary policy, so comparison of bank failures across these two periods is of particular interest. The Bank of the Commonwealth The Bank of the Commonwealth was a bank that operated in Detroit, Michigan. In the early 1960s, it was a minor bank and conservatively run. In 1964, it was acquired by Donald H. Parsons, who was at the center of a network of partnerships that acquired banks, mainly in Michigan. He used the partnership structure to get around Michigan laws that limited bank branching to within 25 miles of a bank's headquarters and that forbade bank holding companies (Sprague, 1986).4 While the partnerships mainly operated banks, the partners were also involved in commercial real estate and other projects under the umbrella of COMAC, a company that Parsons and some of his partners created in 1967. COMAC operated like a management consulting company that provided management services mainly to the various banks controlled by Parsons, but also to real estate projects controlled by people in the bank partnerships and to a few outside firms (Gies, 1975). While COMAC was also a partnership, Parsons was chair, and, according to Gies (1975), COMAC embodied Parsons' management goals. At his peak, Parsons directed a network of 19 banks, including two overseas banks, through the various partnerships that owned the banks and COMAC (Gies, 1975).
Parsons' partnerships would finance the acquisition of these banks by taking a note from a large bank, investing the funds into an acquired bank, and then repaying the note with income from the same acquired bank. Once Parsons' partnership acquired a bank, Parsons' strategy was to grow it fast and increase both its return and its risk. On the asset side, his banks reduced their shares of Treasuries and cash and increased their shares of tax-exempt municipal securities, including lower-rated securities, and loans (Rose, 1968). On the liability side, Parsons' banks funded their growth by using time and savings deposit incentive programs and wholesale borrowing, mainly in the fed funds market, but in some cases also from the Eurodollar market (Gies, 1975). The largest and most important bank in this network was the Bank of the Commonwealth. In 1964, prior to Parsons' gaining control, Commonwealth had $540 million in assets. After Parsons acquired it, Commonwealth grew rapidly, reaching $1.5 billion in assets in 1970.5 Parsons also changed the mix of assets held by the bank. Holdings of Treasury securities dropped from 40 percent to around 5 percent, loans increased from around 40 percent of assets to around 70 percent, and municipal securities increased from 7 percent to 23 percent. In contrast, for the commercial banking sector as a whole, Treasury holdings dropped only from 18 percent to 10 percent of domestic assets, and municipal holdings grew only slightly, increasing from 10 percent to 11 percent of domestic assets.6 For 1960 through Commonwealth's bailout in 1972, Figure 1 shows the growth in Commonwealth's assets, and Figure 2 shows the change in the composition of these assets.
Source: Call Reports. Note: Call Reports were quarterly from 1960:Q1 through 1963:Q3, after which they are semiannual until 1973:Q1. All three vertical lines reflect the last Call Report filed before the event. For example, the formal bailout of Commonwealth was done early in 1972, so the "Bailout" line is the December 31, 1971, Call Report.

The biggest risk in Commonwealth's portfolio was the duration of its municipal securities portfolio. The interest rates on most municipal securities and loans were fixed, so the value of these obligations would drop if rates increased (Figure 3).

A second factor behind Commonwealth's strategy was the treatment of capital losses and capital gains by the federal tax code. While the federal tax code taxed corporate capital gains at a lower rate than corporate income, roughly 25 percent versus 50 percent in the 1960s, capital losses were treated differently for commercial banks. Net realized capital losses could be expensed against income, which was taxed at the high corporate income tax rate of roughly 50 percent.7 And, while income from municipal obligations was tax free, if interest rates went up, a bank could still sell the securities, incur the capital losses to reduce current taxes, and reinvest the proceeds in similar tax-exempt securities at their new lower price to receive roughly the same tax-free income as before. As long as the bank had positive income, the capital losses would be expensed and reduce taxable income. Commonwealth took advantage of this asymmetry by purchasing securities in order to exploit cyclical fluctuations in interest rates (Rose, 1968). Of course, this strategy does not work if the bank has losses, and this situation became a problem for Commonwealth.
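The tax asymmetry described above can be made concrete with some rough arithmetic; the dollar figures below are hypothetical, and only the approximately 50 percent corporate income tax rate comes from the text:

```python
# Rough arithmetic for the muni tax swap described above.
# Only the ~50% corporate income tax rate is from the text; the
# bond values are hypothetical.
TAX_RATE = 0.50

book_value   = 1_000_000   # what the bank paid for the municipal bonds
market_value =   900_000   # market value after rates rose

# Step 1: sell the underwater bonds and realize the capital loss.
capital_loss = book_value - market_value

# Step 2: expense the loss against ordinary income taxed at ~50%.
tax_saving = TAX_RATE * capital_loss

# Step 3: reinvest the proceeds in similar tax-exempt bonds at the new,
# lower price, keeping roughly the same tax-free coupon income as before.
net_cost_of_swap = capital_loss - tax_saving
print(f"realized loss: ${capital_loss:,}  tax saving: ${tax_saving:,.0f}")
print(f"after-tax cost of the swap: ${net_cost_of_swap:,.0f}")
```

The swap converts an unrealized market loss into an immediate tax deduction at the high ordinary-income rate while leaving the tax-free income stream roughly unchanged, which is why, as the text notes, the strategy works only while the bank has positive income to deduct against.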
According to Irvine Sprague, who was on the FDIC board of directors in 1972 when the FDIC bailed out Commonwealth,

Commonwealth's pattern, which was repeated at other banks, was to sell off the safe, steady, and staggered federal securities in the bank's portfolio and load up on low-grade, long-term municipal securities that bore higher interest rates. After almost a decade at low relatively stable levels, interest rates had been drifting upward in the late 1960s, and COMAC was trying to lock them in. COMAC believed the rise was cyclical and that rates were ready to fall. If that happened, all those high-yielding municipals in the 5 percent tax-free range would surge in value and the client banks could sell them at a princely profit. . . . As early as January 1968, representatives of the Chicago Fed met with Commonwealth's board and expressed concern to the directors about the course bank management was following. Parsons, as Commonwealth chairman, strongly defended their policy, saying it would produce vast capital gains for the bank. He said the projections of Dr. Gies showed that a downturn in interest rates would occur between July 1, 1968, and July 1, 1969. (Sprague, 1986, pg. 59)

[Figure 3 source: Commonwealth annual reports. Note: Commonwealth did not report the maturity in its 1970 Annual Report, and a copy of the 1971 report is unavailable.]

Unfortunately for Commonwealth, Gies' forecast was wrong. Between July 1, 1968, and July 1, 1969, the yield on 20-year Treasury securities at a constant maturity actually increased from 5.35 percent to 6.29 percent.9
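The scale of the bet can be seen with the standard modified-duration approximation, dP/P ≈ -D·Δy. As a rough sketch (using the reported average maturities as a crude stand-in for duration, and the roughly one-percentage-point rise in long Treasury yields between mid-1968 and mid-1969):

```python
# Back-of-the-envelope sketch: how a parallel yield rise hits a long portfolio.
# The maturities are the figures reported in the text; treating maturity as
# duration overstates the loss somewhat, so this is only an order of magnitude.


def approx_price_change(duration_years: float, yield_change: float) -> float:
    """Approximate fractional price change for a parallel yield shift."""
    return -duration_years * yield_change


dy = 0.0094  # 5.35% -> 6.29% on 20-year Treasuries
for label, d in [("1965 portfolio (~12.9 yrs)", 12.9),
                 ("1968 portfolio (~23.3 yrs)", 23.3)]:
    print(f"{label}: {approx_price_change(d, dy):+.1%}")
```

Even on this crude arithmetic, nearly doubling the portfolio's maturity roughly doubles the mark-to-market hit from the same rate move, which is what Figure 5's unrealized losses reflect.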
Similarly, short-term rates increased over this period as well. What had happened was that because inflation continued to increase in this period, the Federal Open Market Committee tightened monetary policy in an attempt to reduce inflation (see Figure 4). Figure 5 shows the unrealized losses on Commonwealth's securities portfolio. At the time, all securities were accounted for at book value, so the losses would only get recognized through earnings if a bank sold its underwater securities. Unfortunately, Commonwealth was faced with selling the securities because of loan losses during the 1970 recession and liquidity problems.

Part of Parsons' strategy for Commonwealth and the other banks he controlled was to fund their growth with a mix of high-interest certificates of deposit and wholesale borrowing from other banks in the fed funds market. Figure 6 shows the liability mix of Commonwealth and the growth in borrowing in the fed funds market. What the figure does not show, however, is that much of the time deposit funding came from interest-rate-sensitive depositors who did not live in Detroit (Rose, 1968). Furthermore, Regulation Q interest rate caps were not raised in 1969 (Cook, 1978; Gilbert, 1986), a situation which made it harder for Commonwealth to raise funds with price-sensitive time deposits.12
These factors made it difficult for Commonwealth to raise funds. As a result, Commonwealth applied to open a branch in the Bahamas to raise funds in the Eurodollar market, but the Federal Reserve Board denied the application on March 31, 1970. Furthermore, in its denial, the Board noted that "the general character of management and the bank's financial history and condition, including the liquidity and capital positions, mitigate against approval" (Sprague, 1986, pg. 62). As Sprague (1986) reports, it was very unusual for the Federal Reserve Board to issue such a scathing statement about the character of an applicant. After the release of this statement, Commonwealth's demand deposits declined by around 18 percent from January 1, 1970, to April 30, 1970 (Gies, 1975). As Figure 6 illustrates, Commonwealth initially replaced these lost deposits with borrowing in the fed funds market, but that source of funding declined in the second half of 1970 and then was replaced by discount window loans. Commonwealth's ongoing problems meant that it could no longer continue.

Resolving Commonwealth

As Commonwealth's decline continued, in July of 1970 the Federal Reserve used the threat of removing access to the Fed's discount window to force Parsons and several of his partners to resign as directors and officers of the bank and to agree to a cease and desist order in which, among other things, Commonwealth had to cut ties with COMAC, stop paying dividends, and reduce its size (Sprague, 1986).
With Commonwealth being kept alive via Fed lending, the next step was to determine how to deal with the bank. For regulators, the preferred way of handling a failing bank is to find a strong bank to acquire it. Unfortunately, there was no such bank available in Commonwealth's case. Michigan's banking laws did not let out-of-state banks operate in Michigan, so only a Michigan bank could acquire it. Furthermore, Michigan's banking laws also prevented banks from opening a branch more than 25 miles from their headquarters, so only a Detroit-based bank could legally acquire Commonwealth. According to Sprague (1986), the three largest banks in Detroit had a 77 percent share of deposits, and letting one of them acquire Commonwealth's 10 percent share would have increased the concentration within the Detroit banking market to an unacceptable level.

In the absence of an acquisition, this meant that Commonwealth would fail. When a bank fails, it is not put into bankruptcy but, instead, into FDIC resolution. The two primary ways the FDIC resolves a failed bank are to provide assistance to an acquiring bank (a purchase and acquisition) or to liquidate it. The criteria set by law for how to resolve a failing bank have changed over time, but, historically, a purchase and acquisition is the most commonly used method. Liquidation typically has been used for only the smallest banks (Horvitz, 1986).
Given the concentration in the Detroit market referred to above, the FDIC ruled out a purchase and acquisition, but it was not willing to liquidate the bank. At the time, regulators believed that a $1.2 billion bank (Commonwealth's size in 1972) was too big to fail (Nurisso and Prescott, 2017, 2020). Instead, the FDIC invoked the rarely used essentiality clause of the Federal Deposit Insurance Act of 1950, which let the FDIC assist a bank in order to keep it operating if the FDIC deemed it "essential" to the local community. The FDIC's assistance to Commonwealth was only the second use of this power.13 The FDIC forced Commonwealth to sell much of its municipal securities portfolio and recognize the losses, and lent the bank $60 million to replenish its capital. Commonwealth then limped along until it was acquired by Comerica in 1984.

Comparison with SVB and First Republic

Silicon Valley Bank got into trouble for the same reason Bank of the Commonwealth did. It bought large quantities of long-duration, fixed-rate securities in a period of low interest rates and low inflation, and then both rates and inflation increased.

A second similarity was that both banks had unstable funding bases. For SVB, the unstable funding came from its uninsured deposits, which were 94 percent of its domestic deposits at the end of 2022.14 A third similarity was that SVB's uninsured depositors ended up being protected by financial regulators, just like Commonwealth's, albeit by different means. Since the Great Depression, uninsured depositors of a failing large bank have rarely lost their funds (Horvitz, 1986; Stern and Feldman, 2009), and despite the many changes to banking law since Commonwealth's failure, the outcome for uninsured depositors at these two banks was the same.
One difference between the two banks was the degree and speed of the withdrawals and how that affected the failing bank's resolution. Historically, most banks have enough insured and stable deposits that a combination of discount window lending and other sources of lending can keep a bank operating until a solution is found. This was the case with Commonwealth and for the other too-big-to-fail bailouts of the 1970s and 1980s (Nurisso and Prescott, 2017, 2020). As discussed above, Commonwealth did have several sources of unstable funding, but it also had some stable funding. As indicated in Figure 6, the bank retained a sizeable amount of its demand, time, and savings deposits after 1970. While the Call Report provides no information on the extent to which its deposits were FDIC insured, presumably a sizable fraction of them were insured. As a result, Commonwealth's run played out over time and gave regulators more time to resolve it.

In contrast, most of SVB's deposits were uninsured and, as is well documented, many were held by a small number of depositors who were highly connected to each other and could quickly initiate withdrawals (Board of Governors, 2023). As a result, the speed and size of the run on SVB was so fast and so large that there was not time to borrow from the discount window, let alone to find a buyer. Instead, regulators shut it down and used the systemic risk exception contained in the Federal Deposit Insurance Corporation Improvement Act of 1991 to protect uninsured depositors.15
Instead, the resolution of Commonwealth looks more like that of First Republic, at least in the sense that there was more time to find an acquiring bank. First Republic was a bank based in San Francisco, California, that specialized in catering to a wealthy clientele. Like Commonwealth, First Republic invested in long-duration, fixed-rate assets, though its investments were primarily in residential mortgages. Forty-six percent of its assets were in first-lien, 1-4-family residential mortgages, and most of these mortgages had a long duration. About 47 percent of these mortgages had a rate that was fixed for five to 15 years, and another 31 percent had a rate that was fixed for at least 15 years. First Republic looked much like a savings and loan in the 1970s, but unlike those institutions, it had a large fraction of uninsured depositors: roughly two-thirds of its deposits were uninsured.16

When First Republic experienced a run starting in March 2023, several large banks lent to it to keep it operating, and this lending gave the FDIC time to find an acquirer. On May 1, 2023, the FDIC resolved First Republic by arranging a purchase and acquisition in which JP Morgan bought all of First Republic's deposits and most of its assets, and the FDIC contributed funds to the purchase.17
The assisted purchase by JP Morgan raises another similarity with Commonwealth. As discussed earlier, concentration concerns along with Michigan banking law limited the pool of acquirers for Commonwealth. In the case of First Republic, the pool of acquirers was slightly limited because the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 prevents a bank from acquiring another bank if the acquisition would give the acquiring bank more than a 10 percent market share of deposits nationwide. Indeed, the eventual acquirer, JP Morgan, already exceeded this threshold before the acquisition. There is an exception to the law, however, if the acquired bank is failing, and the exception was used in this case (Eisen and Ackerman, 2023).

A difference between Commonwealth and both SVB and First Republic is the source of interest rate risk. Unlike Commonwealth's, both SVB's and First Republic's interest rate risk exposure came primarily from residential mortgages.18 In SVB's case, the risk was from holding mortgage-backed securities, while for First Republic it was from directly holding residential mortgages. In the late 1960s, however, the commercial banking sector did not hold many residential mortgages. Instead, most were held by the thrift industry, that is, savings and loan associations, mutual savings banks, and savings banks, rather than by commercial banks. Unlike today, the thrift industry in the 1960s was sizable, holding half of the assets of the commercial banking industry. Thrifts were required by law and regulation to mainly hold residential mortgages, many of which were fixed rate and of a long duration at the time. Indeed, the thrift industry held around 57 percent of residential mortgages during the late 1960s, while commercial banks only held about 15 percent.19
Unrealized losses on the thrifts' residential mortgages, along with regulatory forbearance and then deregulation in the early 1980s, led to the Savings and Loan Crisis, which played out over two decades. For histories of that important event, see Kane (1989) and White (1991).

Conclusion

As in the recent episode of inflation, the inflation of the late 1960s was preceded by a long period of low interest rates and low inflation. In both periods, some commercial banks extended the duration of their assets, betting that rates would decrease or at least not increase. Instead, rates increased, and several of these banks failed. The details of how Commonwealth, SVB, and First Republic were resolved differ as a result of changes in banking law, bank structure, and payments technology. Nevertheless, in all three cases the response of regulators was similar, and uninsured depositors were protected.

10. At the time, banks used book accounting for securities, and the Call Reports contain no information on the market value or duration of securities. However, some banks reported the market value of their securities in their annual reports, and Commonwealth was one such bank.

11. Prior to 1970, Commonwealth Annual Reports did not report market value separately for municipal, Treasury, and other securities. However, by 1967 municipal securities were about 80 percent of the total book value of its security portfolio, and this percentage does not drop significantly until 1972, when, as part of the FDIC's bailout, Commonwealth liquidated much of its municipal security portfolio.

12. The Federal Reserve's Regulation Q originated in the Banking Act of 1933, which gave the Federal Reserve authority to put ceilings on commercial bank time and savings deposits. Regulation Q ended in 1986. For the history of this regulation, see Gilbert (1986).
13. The "Essentiality Doctrine" refers to Section 13(c) of the Federal Deposit Insurance Act, which allowed the FDIC to provide assistance to keep a bank open if the FDIC finds that the failing bank was "essential to provide adequate banking service in its community." For more on the history of this doctrine, see Horvitz (1986) or Nurisso and Prescott (2017, 2020). This doctrine was repealed in the Federal Deposit Insurance Corporation Improvement Act of 1991.

15. This act created the systemic risk exception that allows the FDIC to waive least cost resolution if certain statutory requirements are met (Congressional Research Service, 2023). It was also used for Signature Bank.

16. Author's calculations from 12/31/2022 Call Report.

17. See Federal Deposit Insurance Corporation (2023) for more details. The lending by other banks to keep First Republic afloat also has earlier precedents. In 1983, when Seafirst got into trouble from energy lending, Seafirst was kept afloat by loans from a consortium of banks until an acquirer could be found (Brimmer, 1984; Nurisso and Prescott, 2017, 2020).

18. Commonwealth did hold residential mortgages, so it had some interest rate risk from this source as well. At the end of 1970, residential mortgages were 17 percent of assets (author's calculations from 12/31/1970 Call Report). Unfortunately, neither the Call Report nor Commonwealth's annual reports provide any information on the duration of these loans.

19. Author's calculations from the Flow of Funds accounts (now called the Financial Accounts). The Flow of Funds defines "Savings Institutions," what is referred to in this Economic Commentary as the "thrift sector," as savings and loans, mutual savings banks, and federal savings banks.

Figure 1. Total Assets Held by Commonwealth Over Time
Figure 3. Average Maturity of Commonwealth's Municipal Securities Portfolio

Figure 6. Composition of Commonwealth's Liabilities Over Time (demand deposits; time and savings deposits)

Figure 4. Inflation and Short- and Long-Term Interest Rates

Figure 5. Unrealized Losses on Commonwealth's Securities Portfolio

Sources: Inflation, Bureau of Labor Statistics; interest rates, Board of Governors of the Federal Reserve System H.15; all series retrieved from FRED. Notes: Inflation is the year-over-year percentage rate of the CPI. The short-term interest rate is the federal funds effective rate. The long-term interest rate is the end-of-month market yield on Treasury securities at a 10-year constant maturity, quoted on an investment basis.
Endnotes

1. The securities on which SVB took most of its losses were mortgage-backed securities. For details on SVB's history and failure, see Board of Governors of the Federal Reserve System (2023).

2. For information on the failure of First Republic, see Federal Deposit Insurance Corporation (2023).

3. The other large domestic bank to fail in the spring of 2023 was Signature Bank. Signature failed because it was associated with the crypto industry and had a large fraction of uninsured depositors, who ran it when Silvergate and SVB, both of which had some connections to the crypto and tech industry, were run (New York State Department of Financial Services, 2023).

4. Until the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, most US states imposed numerous limits on interstate and intrastate banking. For an overview of these restrictions, see Mengle (1990). While the main way to get around these restrictions was through corporate ownership of multiple banks by a bank holding company, an alternative structure was for an individual or a small group of individuals to own multiple banks. The latter is usually referred to as "chain banking" (Hall, 1965). Data on the extent of chain banking is limited, but Parsons' network was the only chain of banks controlled by partnerships in Michigan at the time (Golembe, 1969).

5. By modern standards, Commonwealth was not a large bank. If scaled by the growth in total commercial banking assets, it would be only a $53.3 billion asset bank as of December 2022. Still, at the time it was the forty-seventh largest commercial bank in the United States.
2024-03-24T15:15:55.984Z
2024-03-25T00:00:00.000
{ "year": 2024, "sha1": "3ecda9324348cab5114753e1f273044a0fbb0afd", "oa_license": "CCBYNC", "oa_url": "https://www.clevelandfed.org/-/media/project/clevelandfedtenant/clevelandfedsite/publications/economic-commentary/2024/ec-202406-failure-of-bank-of-the-commonwealth/ec202406.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e568e687e10a403a33c42bbea3a6873ff9a71d14", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [] }
257434360
pes2o/s2orc
v3-fos-license
Treatment rates and healthcare costs of patients with fragility fracture by site of care: a real-world data analysis

Summary

In a characterization of treatment rates and healthcare costs among patients with an osteoporotic-related fragility fracture overall and by site of care, costs were high and treatment rates were low.

Purpose

Osteoporotic fractures can be debilitating, even fatal, among older adults. The cost of osteoporosis and related fractures is projected to increase to more than $25 billion by 2025. The objective of this analysis is to characterize disease-related treatment rates and healthcare costs of patients with an osteoporotic fragility fracture overall and by site of fracture diagnosis.

Methods

In this retrospective analysis, individuals with fragility fractures were identified in the Merative MarketScan® Commercial and Medicare Databases among women 50 years of age or older and diagnosed with fragility fracture between 1/1/2013 and 6/30/2018 (earliest fracture diagnosis = index). Cohorts were categorized by the clinical site of care where the diagnosis of fragility fracture was made and were continuously followed for 12 months prior to and following index. Sites of care were inpatient admission, outpatient office, outpatient hospital, emergency room hospital, and urgent care.

Results

Of the 108,965 eligible patients with fragility fracture (mean age 68.8), most were diagnosed during an inpatient admission or outpatient office visit (42.7%, 31.9%). The mean annual healthcare costs among patients with fragility fracture were $44,311 (± $67,427) and were highest for those diagnosed in an inpatient setting ($71,561 ± $84,072). Compared with other sites of care at fracture diagnosis, patients diagnosed during an inpatient admission also had the highest proportion of subsequent fractures (33.2%), osteoporosis diagnosis (27.7%), and osteoporosis therapy (17.2%) during follow-up.
Conclusion

The site of care for diagnosis of fragility fracture affects treatment rates and healthcare costs. Further studies are needed to determine how attitude or knowledge about osteoporosis treatment or healthcare experiences differ at various clinical sites of care in the medical management of osteoporosis.

Background

Osteoporosis is a skeletal disease characterized by the loss of bone mass and the deterioration of bone microarchitecture, wherein bone strength is compromised and affected patients are predisposed to an elevated risk of fracture [1]. These fractures, also known as fragility fractures, typically occur in wrists, hips, and vertebrae, can often be debilitating, put patients at an increased risk for a subsequent fracture, and can even be fatal among older adults [2]. Globally, women over the age of 50 have a 9.8 to 22.8% lifetime risk of fragility fractures, and fractures will occur among 1 in 3 [3]. The Women's Health Initiative Observational study projected the number of fractures as similar to or higher than breast cancer, stroke, and cardiovascular disease events combined among women aged 50-79 in the USA [4]. The Bone Health and Osteoporosis Foundation (BHOF; formerly the National Osteoporosis Foundation) estimates 3 million fractures and $25.3 billion in direct healthcare costs per year by 2025 [5].

Due to undertreatment and disease mismanagement, osteoporosis and related fractures present a substantial cost burden to the healthcare system. Osteoporotic fracture is a top driver of hospitalization-related costs among US women, more costly than breast cancer, myocardial infarction, and stroke [6]. One study estimated the national cost of osteoporosis and related fractures to be $22 billion [7], and that cost is expected to escalate to more than $95 billion by 2040 [8].
Fracture prevention and earlier osteoporosis diagnosis are essential to initiation of adequate treatment; however, osteoporosis remains underdiagnosed among fragility fracture patients [9]. Frequency of osteoporosis diagnosis varies by site of care, and we hypothesize that diagnosis patterns similarly differ across provider specialty type [10]. Undertreatment is due in part to underdiagnosis among these patients [11]. Bisphosphonates have been widely used to treat bone diseases since the 1970s and are well established as the first-line treatment for osteoporosis. However, poor adherence is common with oral bisphosphonates. Non-persistent patients remain at elevated risk for fracture [12]. Low persistence is due in part to complex dosing instructions and fear of side effects [6, 12, 13].

Osteoporosis is treated by a range of clinicians in a variety of settings [14]. Although there is a high degree of consistency and agreement regarding osteoporosis treatment guidelines, recommendations, and practice among clinicians, there are also significant differences. For example, the American College of Physicians recommends against bone density monitoring during the 5-year pharmacologic treatment period for osteoporosis in women, whereas the American Association of Clinical Endocrinologists recommends bone density monitoring every 1-2 years [15, 16]. Currently, there is a lack of research describing the relationship between the site of care where a patient is diagnosed with a fragility fracture and healthcare resource utilization, healthcare costs, osteoporosis diagnosis, osteoporosis treatment patterns, and subsequent fragility fracture rates in the following year.

Objective

To characterize baseline demographic characteristics and clinical conditions and the 12-month patient journey following a fragility fracture. Treatment rates and healthcare costs of individuals with fragility fracture were reported by the site of care where they were diagnosed.
Study design and data source

This observational cohort study was conducted using de-identified data from the Merative MarketScan® Commercial Claims Database and the Medicare Supplemental and Coordination of Benefits Database. The commercial claims database contains the inpatient, outpatient, and prescription drug experience of employees and their dependents, covered under a variety of fee-for-service and managed care health plans, including approximately 89 million lives from 2012 to 2018. The Medicare database contains the healthcare experience of retirees with Medicare supplemental insurance paid for by employers, including 5.5 million lives between 2012 and 2018. These databases provided detailed cost, use, and outcomes data for healthcare services performed in both inpatient and outpatient settings. Data were extracted using International Classification of Diseases, 9th and 10th Revision, Clinical Modification (ICD-9-CM and ICD-10-CM) codes, Current Procedural Terminology (CPT) 4th edition codes, Healthcare Common Procedure Coding System (HCPCS) codes, and National Drug Codes (NDCs). These de-identified data were fully compliant with US patient confidentiality requirements set forth in the Health Insurance Portability and Accountability Act of 1996.
Patient selection and site of care cohort assignment

Women aged 50 years and older with a fragility fracture (index date = date of diagnosis of first fracture) were identified in the Commercial and Medicare Databases during January 1, 2013 through June 30, 2018. Fragility fracture, osteoporosis, and other clinical conditions were identified by ICD-9-CM/ICD-10-CM diagnosis or CPT procedure coding. To determine eligibility, patients had at least 12 months of continuous enrollment and pharmacy benefits prior to the index date (baseline period) and at least 12 months of continuous enrollment and pharmacy benefits following the index date (follow-up period). Patients with Paget's disease of the bone, osteitis deformans, osteogenesis imperfecta, hypercalcemia, cancer, or conditions categorized in ICD-9-CM/ICD-10-CM as "other osteopathy" during the baseline were excluded.

Individuals with fragility fracture were categorized into cohorts based on the site of care at diagnosis. Sites of care of interest were identified a priori by the coauthors who treat and study osteoporosis and fragility fracture: inpatient, outpatient office, outpatient hospital, emergency room (ER), federally qualified health center (FQHC), rehabilitation facility, nursing facility, urgent care, patient home, rural health clinic, and assisted living facility. Detailed data were not reported for cohorts with fewer than 30 individuals.

Individuals were also categorized into cohorts based on the index physician specialty, that is, the specialty of the physician who made the diagnosis of fragility fracture. Specialties of interest on the first fragility fracture diagnosis were family medicine, internal medicine, obstetrics/gynecology (OB-GYN), orthopedics, geriatrics, rheumatology, endocrinology, and emergency medicine.
Patient characteristics

Patient demographic characteristics included age, region, and health plan, measured on the index date. Clinical characteristics, including the Deyo-Charlson Comorbidity Index (DCI) [17], were reported during the 12-month baseline period.

Outcomes

All-cause and disease-related healthcare utilization and costs were measured during the 12-month follow-up period. The index date was included in the follow-up period; therefore, healthcare utilization and costs of the index event are captured in the post-index averages of services and treatment. Disease-related healthcare utilization and costs corresponded to medical claims with a diagnosis code for osteoporosis or osteopenia, a diagnosis or procedure code for fragility fracture (defined by the previously described algorithm [18]), medical claims with administration (HCPCS codes) for osteoporosis therapies, or outpatient pharmacy claims (NDC codes) for osteoporosis therapies. This study reports all-cause and disease-related healthcare utilization and costs for inpatient, ER, and outpatient healthcare settings, as well as pharmacy utilization and costs.

Healthcare costs are reported in 2018 constant US dollars, adjusted using the Medical Care component of the Consumer Price Index. Healthcare costs were measured using the financial fields on administrative claims in the MarketScan Databases.
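The constant-dollar adjustment described above is a simple ratio of index levels. As a minimal sketch (the index values below are made-up placeholders, not actual Medical Care CPI figures, and this is not the authors' code):

```python
# Inflate nominal claim costs from their service year to 2018 constant
# dollars using a price index. Hypothetical index levels for illustration.

MEDICAL_CPI = {2013: 100.0, 2018: 115.0}  # placeholder index levels


def to_2018_dollars(cost: float, service_year: int) -> float:
    """Rescale a nominal cost from its service year to 2018 dollars."""
    return cost * MEDICAL_CPI[2018] / MEDICAL_CPI[service_year]


# A $1,000 claim incurred in 2013, expressed in 2018 dollars:
print(to_2018_dollars(1_000.0, 2013))  # 1150.0 under these placeholder levels
```

Costs from different calendar years thus become directly comparable before averaging across the cohort.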
Proportions of patients with any osteoporosis treatment during the follow-up period were reported. Osteoporosis treatments, covering multiple classes of anti-resorptive and bone-forming agents, included denosumab (RANKL inhibitor), alendronate (bisphosphonate), ibandronate (bisphosphonate), risedronate (bisphosphonate), zoledronate (bisphosphonate), raloxifene (selective estrogen receptor modulator), and teriparatide (parathyroid hormone analog) and were measured in the 12-month follow-up period. The time to subsequent fracture was measured as the number of days from the index date to the earliest fracture diagnosis during the follow-up period. Anatomical site of fracture was defined by diagnosis of a fragility fracture during the 12-month follow-up period, and type of fracture (hip, vertebral, and non-hip non-vertebral) was also reported. Repeat fractures, defined as those of the identical fracture type that occurred more than 90 days following the index fracture, were also reported.

Bone density scans were measured in the baseline and follow-up periods. Bone density scans included procedure codes that describe dual-energy X-ray absorptiometry (DEXA), bone density studies on one or more sites, ultrasound bone density measurement and interpretation, and single energy X-ray absorptiometry (SEXA) bone density studies.

Statistical analysis

Mean and standard deviation (SD) were reported for continuous variables, while frequencies and percentages were reported for categorical variables. All data analyses were conducted using WPS version 4.1 (World Programming, UK).

Study population

Of the 108,965 eligible patients with fragility fracture, most were diagnosed during an inpatient admission, outpatient office visit, or outpatient hospital visit (42.7%, 31.9%, 24.0%; Fig.
1). All other sites of care identified less than 2% of the fragility fracture group. The largest cohort of patients with fragility fracture was aged between 50 and 64 (48.8%), with an average age of 68.8 years (Table 1). Patients with fragility fracture diagnosed during an inpatient admission were older on average (75.0 years) compared with the overall group (68.8 years); meanwhile, most patients indexed in all other settings were between ages 50 and 64. Most patients had an Exclusive Provider Organization (EPO) health insurance plan (49.9%). The largest proportion of patients originated from the South (35.4%). Ten percent of the fragility fracture cohort had a diagnosis of osteoporosis during baseline (Fig. 1). Average baseline all-cause healthcare costs were $18,146 (SD $45,537; Table 1). The mean Deyo-Charlson comorbidity index score was 0.9 (SD 1.4), and the most common comorbidities included hypertension (52.9%), dyslipidemia (40.0%), and respiratory diseases (36.6%; Table 1).

Fragility fracture rates, sites of fragility fracture during follow-up, and bone density scan utilization

During the follow-up period, the rate of subsequent fragility fracture was high (26.6%; Table 2). The most common type of fracture during the follow-up period was non-hip non-vertebral (20.2%; Table 2), with a particularly high number of fractures among patients who indexed in inpatient, ER, and urgent care settings (19.7%, 68.3%, and 35.3%, respectively; Table 3). The wrist/radius-ulna site was the most common site among the non-hip non-vertebral fractures. Patients diagnosed during an inpatient admission (N = 46,507) were more likely to have a subsequent hip (13.3%) or vertebral fracture (4.5%), compared with those diagnosed at any other site (Table 3).

Diagnosis and treatment of osteoporosis

To understand the patient journey after diagnosis of fragility fracture, physician specialty at time of diagnosis and the physician specialty for subsequent care are reported in Table 2.
Approximately 20% of all individuals with fragility fracture (N = 108,965) were diagnosed with osteoporosis during the follow-up period, and the rate was notably higher among those diagnosed in an inpatient setting (27.7%) compared with other sites of care (9.5-15.7%) (Table 4). Patients whose index fragility fracture was diagnosed in an inpatient setting also had the highest proportion of osteoporosis therapy during follow-up (17.2% vs 8.6-12.9% in outpatient settings). Among the subset of all fragility fracture patients treated with osteoporosis therapy during follow-up (N = 15,342), most were treated with oral bisphosphonates (alendronate 45.6%, ibandronate 11.2%, risedronate 7.6%) despite being a high-risk group for a subsequent fracture (Table 4). Among patients with osteoporosis treatment, the proportion of days covered (during the 12-month follow-up period) was 51.9%, and the mean time from index date to therapy initiation was 109 ± 0.2 days (generally consistent across settings, with the exception of urgent care, where time to treatment was only 48 ± 0.2 days; Table 4). Among individuals who utilized denosumab for osteoporosis treatment (N = 2,564), the mean time to treatment initiation was longer, at 157 ± 94.7 days. The lowest treatment rates occurred among patients in the urgent care, outpatient office, and ER hospital cohorts (8.6%, 10.9%, and 10.9%, respectively; Table 4).
Of the 108,965 individuals with fragility fracture, most (37.3%) were diagnosed by an orthopedist on the index date, followed by a family practice physician (16.9%; Table 2). Similarly, subsequent care from the index physician specialty was most common among the orthopedists (65.3%) and family practice physicians (48.0%). When subsequent care was obtained from a different physician specialty than the index provider, orthopedists were the most common specialists (18.9%). Among index physician specialties, patients whose index fragility fracture diagnosis was made by rheumatologists and geriatricians had the highest osteoporosis treatment rates (31.8% and 23.4%, respectively).

Healthcare utilization and costs

The mean annual healthcare costs among patients with fragility fracture were $44,311 (± $67,427). Annual healthcare costs were highest for those diagnosed in an inpatient setting ($71,561 ± $84,072; Table 5). Among patients with at least one inpatient admission, hospitalization costs were lowest for patients with fragility fracture diagnosed in an ER ($26,003 ± $29,304) and highest for those diagnosed at urgent care ($147,725 ± $323,368) (Fig. 2). Outpatient costs (office visits) were generally lowest for those diagnosed at urgent care and highest for those diagnosed in an inpatient setting (Fig.
3). Mean healthcare costs were lowest for patients whose index fracture diagnosis was made by an orthopedist ($30,538 ± $53,202). Mean inpatient costs were highest for those diagnosed by an internal medicine physician ($40,489 ± $64,441). Mean outpatient pharmacy costs were lowest for patients whose fragility fracture diagnosis was made by a geriatrician ($3251 ± $4517) and highest for those diagnosed by a family medicine physician ($4108 ± $10,502). The mean annual disease-related healthcare costs among fragility fracture patients were $9784 (± $16,086), or 22.1% of overall healthcare costs (Table 5). Patients diagnosed in outpatient hospitals or ERs had a higher proportion of disease-related healthcare costs (45.8% and 45.0%, respectively) compared with inpatient admissions, outpatient office visits, and urgent care (17.0%, 20.5%, and 15.3%, respectively).

Mean annual disease-related costs were highest for those diagnosed by a geriatrician ($16,078 ± $27,510) or endocrinologist ($16,397 ± $23,915) and lowest for those diagnosed by an orthopedist ($7690 ± $12,866). Higher costs among those diagnosed by a geriatrician or endocrinologist were driven by the larger proportion of patients with a disease-related inpatient admission (12.8% and 15.9%, respectively) and a larger proportion of patients with a disease-related ER visit (36.2% and 40.9%, respectively).
Discussion

This claims-based analysis of postmenopausal women with fragility fracture provides insight into the demographic characteristics, clinical conditions, treatment patterns, and healthcare costs and utilization during the year following fragility fracture, overall and stratified by site of care of fracture diagnosis. It was found that 26.6% of the patients had a subsequent fragility fracture, while the rates of osteoporosis diagnosis and treatment were notably low (19.6% with diagnosis and 14.1% with treatment). The inpatient setting was the most common site of care of fragility fracture diagnosis (42.7%), and this cohort was older, sicker (e.g., higher DCI score, higher baseline costs), more likely to have a subsequent fragility fracture, more likely to have a severe hip or vertebral type of subsequent fragility fracture, and had higher rates of osteoporosis diagnosis and treatment compared with patients diagnosed with fragility fractures in outpatient sites of care. Women diagnosed with fragility fracture in the inpatient setting incurred the highest healthcare costs ($71,561 ± $84,072) during follow-up, which may be attributable to their higher rate of subsequent fractures (26.6%). The lower prevalence of subsequent fractures in the follow-up period might be due to the younger age of patients diagnosed with fragility fracture in the outpatient settings [19]. Hip fractures are among the costliest fracture sites and are frequently followed by surgery and lengthy rehabilitation [20, 21]. Results from this study support the need for earlier osteoporosis screening (leading to earlier diagnosis and treatment) to potentially prevent initial and subsequent fractures (particularly those requiring hospitalization) that lead to increased burden to patients and costs to society.
The high rate of subsequent fractures among older women, in general, is supported by several studies of Medicare and commercial populations [22-24]. In a claims analysis among female Medicare beneficiaries 65 years of age and older, 10-31% had a subsequent fracture within 1-5 years [22]. Consistent with our analysis, the majority of these subsequent fractures were non-hip/non-vertebral, which emphasizes the need for physical therapy aimed at preventing the falls that lead to subsequent NHNV fractures. Among older men and women enrolled in Medicare, there was a 2.5-fold greater risk of fracture within 12 months among those with a history of fracture [23]. Prior hip fracture was identified among 29% of women aged 50+ diagnosed with a hip fracture between 2008 and 2013 with commercial and Medicare Advantage plans [24]. The low osteoporosis treatment rate after fragility fracture diagnosis found in this study is also consistent with prior literature [25-27]. In the current analysis, patients diagnosed with fragility fracture in the inpatient setting had the highest proportion of osteoporosis treatment initiation during the follow-up period; however, it was still only 27.7%. These results are similar to a claims-based study by Solomon et al. using data from 2002 to 2011, which reported that 24.0% of patients diagnosed with a fragility fracture during an inpatient admission were treated with osteoporosis therapy during the 12 months following hospital discharge.

In that analysis, it was found that 70% of patients were treated with oral bisphosphonates, 0.3% with denosumab, and 2.6% with teriparatide. Results from the current and more recent analysis show that, even among this highest-risk cohort (i.e., those diagnosed with fragility fracture during a hospitalization), treatment rates remain low, and of those who do receive treatment, most are prescribed an oral bisphosphonate despite the availability of non-oral (and more potent) options.
Table 4. Osteoporosis therapy measured during the 12-month follow-up period, overall and stratified by site of care at fragility fracture diagnosis. (1) The percentages in the subsequent rows are calculated out of the total number of patients with any osteoporosis therapy. (2) Any osteoporosis therapy includes the following drugs: alendronate, denosumab, ibandronate, raloxifene, risedronate, teriparatide, zoledronic acid. (3) PDC is defined as the number of days covered by the reported days' supply of a pharmacy claim or the days of clinical benefit of an outpatient medical claim, divided by 365 days.

In clinical practice, fragility fractures are an indicator of osteoporosis; however, less than a quarter of individuals with fragility fracture were diagnosed with osteoporosis during the follow-up period, and only approximately 10% were diagnosed with osteoporosis during the year prior to fracture [28]. The low rate of osteoporosis diagnosis is likely due to lack of recognition and awareness of the underlying disease (leading to undercoding on healthcare claims). Bone density scans are also indicative of an osteoporosis diagnosis; however, we observed low utilization of these scans as well. A literature review of Canadian practice patterns observed similarly low osteoporosis diagnosis rates among individuals with fragility fracture [29]. This lack of disease awareness contributes to underdiagnosis of osteoporosis that undermines efforts for appropriate treatment [30, 31]. The majority of patients with fragility fracture were, as expected, diagnosed with the initial fracture by an orthopedist. However, fewer than half of patients (43.3%) with fragility fracture had subsequent care with their index physician specialty provider. Among fragility fracture patients in the orthopedics cohort, less than 10% were seen by family medicine or internal medicine, and less than 3% were seen by rheumatology and endocrinology specialties. This suggests that patients are not receiving the
subsequent care they need for the long-term management of osteoporosis.

There are several strengths to the analyses presented here. First, this study used retrospective claims data, which provide a large, heterogeneous patient population. Unlike clinical trials, which are subject to strict inclusion criteria, and surveys, which are subject to small samples and recall bias, this study of real-world claims captured medication utilization data from a broad group of osteoporosis and fragility fracture patients in clinical practice. It should be noted, however, that this was not a comparative study. Baseline characteristics varied by site-of-care cohort, and results were not adjusted for baseline differences. Claims studies are subject to several potential limitations. These data are subject to data entry errors and miscoding. Claims data can identify that a medication was dispensed, but not that the medication was administered or taken as prescribed. This analysis was performed among patients with commercial or Medicare Supplemental insurance and therefore may not be generalizable to those with other insurance types or without insurance coverage. Finally, patients were not necessarily newly diagnosed with fragility fracture in our sample, given that a full patient history was not accessible.
Conclusion

Patients with a fragility fracture had a high rate of subsequent fractures and high cost of care, especially those requiring hospitalization, so screening and prevention are important to avoid the burden to patients and cost to society. Further, patients diagnosed with fragility fracture in the outpatient settings were younger and had the lowest rates of osteoporosis diagnosis and treatment following fracture. Targeting all patients with fragility fracture, and particularly those diagnosed in the outpatient setting, is of utmost importance for earlier screening, treatment, and fall prevention therapy to potentially avoid hospitalizations and subsequent fractures and to improve patient quality of life. Understanding initial engagement in care, diagnosis, and subsequent sites of care might identify opportunities to decrease subsequent fractures and halt the growing healthcare costs experienced by an aging population.
Table 1. Demographics and clinical characteristics among patients with fragility fracture (overall and stratified by site of care at diagnosis). Abbreviations: DCI, Deyo-Charlson Comorbidity Index; EPO, Exclusive Provider Organization; HMO, Health Maintenance Organization; POS, Point of Service; PPO, Preferred Provider Organization; SD, Standard Deviation.

Table 2. Fragility fracture characteristics during baseline, on index, and follow-up periods (overall cohort).

Table 3. Fragility fracture outcomes during the 12-month follow-up period, stratified by site of care at diagnosis.

Table 5. All-cause and disease-related healthcare utilization and expenditures during the 12-month follow-up period, overall and stratified by site of care at fragility fracture diagnosis. Abbreviations: ER, Emergency Room; SD, Standard Deviation. (1) Average costs of ER visits calculated only for those with at least one ER visit. (2) Average costs of outpatient office visits calculated only for those with at least one outpatient office visit. (3) Average costs of inpatient admissions calculated only for those with at least one inpatient admission.
Using the Boundary Element Method to Simulate Visco-Elastic Deformations of Rough Fractures

In many engineering applications, such as tribology and rock mechanics, it is very important to understand the deformation of rough fractures in order to evaluate the safety and profitability of a project. Since many materials can be characterized as visco-elastic, it is important to be able to simulate the visco-elastic deformation of rough fractures. This chapter focuses on using the boundary element method to simulate visco-elastic deformations of rough fractures. First, the principles and procedures of the above-mentioned method will be introduced. Then, one example will be given in detail. This example investigates the effect of surface geometry on visco-elastic deformations of rough rock fractures under normal compressive stresses. The rock fracture surfaces are assumed to be self-affine, and synthetic rough surfaces are generated by systematically changing three surface roughness parameters: the Hurst exponent, root mean square roughness, and mismatch length. The results indicate that by decreasing the Hurst exponent, increasing the root mean square roughness, or increasing the mismatch length, the fracture mean aperture increases, and the contact ratio (the number of contacting cells/total number of cells) increases more slowly with time. Finally, the limitations and possible future research directions will be briefly discussed.

Introduction

Many natural and engineering materials can be categorized as visco-elastic, such as rock, elastomers, and rubbers. In engineering applications, it is very important to understand and simulate the visco-elastic deformation of rough fractures. For example, in hydrocarbon extraction, we need to accurately simulate the visco-elastic deformation of rock fractures to predict production rates. In biomedical devices, we need to simulate the visco-elastic deformation of artificial joints to evaluate safety and effectiveness.
Due to the geometrical complexity of rough fractures and the time-dependent properties of engineering materials, it is extremely difficult to obtain closed-form mathematical solutions. Thus, numerical models are required to simulate the time-dependent behavior of rough fractures. The boundary element method (BEM) has been extensively used for solving rough-surface contact problems because of distinct advantages over the traditional finite element method (FEM). First, it only requires discretization and calculation on the boundaries of the calculation domain, which are two-dimensional. On the contrary, FEM requires discretization and calculation over the whole calculation domain. As a result, to achieve the same stress calculation resolution, BEM requires far fewer elements and, therefore, much less calculation time. In addition, since all the approximations are limited to the boundary, BEM has better stress calculation accuracy compared with FEM. In recent years, researchers have been combining BEM with fast numerical algorithms to achieve more efficient numerical simulations for contact problems. Stanley and Kato [1] published the first paper using the fast Fourier transform (FFT) method to calculate the elastic deformation of rough surfaces under normal stresses. The FFT method makes the BEM simulation more efficient because the FFT turns a complicated convolution into a simple matrix multiplication. Later, Polonsky and Keer [2] proposed the conjugate gradient (CG) method and combined it with the FFT method to further improve the efficiency. Liu et al. [3] addressed the drawbacks of the FFT method proposed by Stanley and Kato [1]. Since then, the CG and FFT methods have been applied to simulate plastic and visco-elastic deformations of rough fractures. Jacq et al. [4] and Sahlin et al. [5] considered perfect plasticity to simulate deformations of rough metal surfaces, and Wang et al. [6] considered strain-hardening plasticity. For visco-elasticity, Chen et al.
[7] first used the CG and FFT methods to simulate visco-elastic deformations of rough fracture surfaces. They simulated three load-driven scenarios: a rigid sphere indenting a PMMA surface, contact area evolution under constant load, and contact area evolution under harmonic cyclic load. Spinu and Cerlinca [8] applied different cut-off values for the contact pressure to account for the plastic deformation of contacting asperities. However, it appears that there is not much work that systematically simulates the visco-elastic deformation of rock fracture surfaces. Kang et al. [9] reported that for Musandam limestone fractures, the effect of mechanical compression on rock fracture time-dependent deformation is non-negligible and should be systematically investigated. In addition, previous articles suggest that the fracture surface geometry has a significant effect on fracture time-dependent deformation. Therefore, we should systematically study the effect of surface geometry on rock fracture visco-elastic deformations. Brown [10] proposed a simple probabilistic model to describe rock fracture surface geometry. In his model, the rock fracture surface geometry can be completely described by three key parameters: the Hurst exponent, the root mean square (RMS) roughness, and the mismatch length scale. In this research, his model will be used to generate synthetic fracture surface pairs, and the three key parameters will be changed systematically. The numerical method proposed by Chen et al. [7] will be used to simulate the visco-elastic deformation of the synthetic fracture surfaces. This chapter is organized as follows. Section 2 introduces and explains the principles and procedures of the numerical method. Section 3 provides a detailed example: the method for generating synthetic rough surfaces is introduced, and the effect of surface geometry parameters on the creep deformation is shown and discussed. Section 4 mentions the limitations of this method.
Section 5 summarizes the findings.

Method for calculating fracture elastic deformation

Before explaining the method for the visco-elastic deformation calculation, it is essential to introduce the method for the elastic deformation calculation. The author has developed an in-house numerical code, which is similar to the algorithm proposed by Polonsky and Keer [2]. In this section, only the key mathematical concepts will be shown; the details can be found in their work [2]. It is worth noting that only the compressive stress (stress normal to the fracture surface) is considered; the shear stress (stress parallel to the fracture surface) is not considered. First, the aperture (surface gap between the two rough surfaces) distribution h(x,y) needs to be defined:

h(x,y) = h_0(x,y) + u_e(x,y) - δ (1)

where h_0(x,y) is the initial aperture distribution, u_e(x,y) is the elastic deformation of the fracture surfaces, and δ is the rigid body displacement between the two surfaces under the applied normal stress. Here, compressive stress and fracture closure are defined as positive. The boundary conditions are expressed as:

p(x,y) > 0 and h(x,y) = 0 (contacting regions) (2)

p(x,y) = 0 and h(x,y) > 0 (non-contacting regions) (3)

where p(x,y) is the contacting stress (normal to the surface) acting on location (x,y). Eqs. (2) and (3) indicate that the contacting stress is larger than zero at contacting regions, and is zero at non-contacting regions. Boussinesq and Cerrutti [11] stated that the vertical displacement u_e(x,y) can be calculated as:

u_e(x,y) = ∬ K(x - x′, y - y′) p(x′,y′) dx′ dy′ (4)

where p(x′,y′) is the normal pressure acting on location (x′,y′), K is the influence matrix, which represents the normal displacement at location (x,y) caused by a unit normal pressure acting on location (x′,y′), and u_e(x,y) is the elastic displacement at location (x,y). The influence matrix K can be expressed as:

K(x,y) = (1 - v) / (2πG √(x² + y²)) (5)

where G is the shear modulus and v is the Poisson's ratio. As mentioned in the introduction section, it is difficult to obtain the analytical solution for rough surface deformation under normal stress.
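The influence matrix of Eq. (5) can be tabulated numerically. The sketch below (function name and discretization choices are mine, not the chapter's) treats each off-diagonal cell as a point load and approximates the singular self-term by a uniformly loaded circle of equal area, whose center displacement per unit pressure is (1 - v)a/G; production codes typically use Love's exact rectangle formula instead.

```python
import numpy as np

def influence_kernel(n, m, dx, dy, G, nu):
    """Boussinesq influence coefficients K[i, j]: normal displacement at
    cell (i, j) per unit pressure applied over the cell at (0, 0).
    Off-diagonal cells are treated as point loads P = p*dx*dy; the singular
    self-term uses a uniformly loaded circle of equal area (radius a),
    whose center displacement per unit pressure is (1 - nu) * a / G."""
    x = np.arange(n) * dx
    y = np.arange(m) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.hypot(X, Y)
    K = np.zeros((n, m))
    mask = r > 0
    K[mask] = (1.0 - nu) * dx * dy / (2.0 * np.pi * G * r[mask])
    a = np.sqrt(dx * dy / np.pi)      # equal-area circle radius
    K[0, 0] = (1.0 - nu) * a / G      # finite self-term
    return K

G, nu = 10e9, 0.25                    # illustrative stiff-rock values, Pa
K = influence_kernel(64, 64, 1e-3, 1e-3, G, nu)
```

The coefficients are largest at the loaded cell and decay as 1/r, which is the behavior the CG solver relies on.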
However, a numerical solution can be obtained. To obtain the numerical solution, the fracture surface area needs to be discretized into rectangular grids:

x_i = (i - 1)Δx, i = 1, 2, …, N (6)

y_j = (j - 1)Δy, j = 1, 2, …, M (7)

where x_i and y_j are the x and y coordinates, respectively; N and M are the total numbers of grids in the x- and y-directions, respectively; and Δx and Δy are the grid dimensions in the x- and y-directions, respectively. After discretization, the aperture distribution function and boundary conditions can be expressed as:

h_ij = h_0,ij + u_e,ij - δ (8)

p_ij > 0 and h_ij = 0 (contacting grids) (9)

p_ij = 0 and h_ij > 0 (non-contacting grids) (10)

Love [12] first discretized Eqs. (4) and (5) as:

u_e,ij = Σ_k Σ_l K_(i-k),(j-l) p_kl (11)

where K_(i-k),(j-l) is the influence coefficient, i.e., the normal displacement at grid (i,j) caused by a unit pressure applied uniformly over grid (k,l); its closed-form expression for a rectangular cell is given by Eq. (12) in the original reference. As mentioned before, Stanley and Kato [1] first used the FFT method to solve Eq. (11) to make the calculation more efficient. The FFT method turns a complicated convolution into a simple matrix multiplication. By using the FFT method, Eq. (11) becomes:

û_e = FFT(K) · FFT(p) (13)

u_e = IFFT(û_e) (14)

where IFFT represents the inverse Fourier transform. The FFT method reduces the number of operations from N²·M² to N·M·log(N·M) [1]. Therefore, when N and M are large, the FFT method can greatly reduce the calculation time.

Method for calculating fracture visco-elastic deformation

As described before, Chen et al. [7] first combined the FFT and CG methods to simulate visco-elastic deformations of rough fractures. The author has developed an in-house numerical code, which is similar to the algorithm described by Chen et al. [7]. In this section, only the key mathematical aspects will be introduced; the rest can be found in their work [7]. In this simulation, the rock materials are assumed to be linear visco-elastic. Therefore, it is essential to introduce the concept of linear viscoelasticity first. For linear viscoelastic materials, the stress/strain response scales linearly with the strain/stress input, and the behavior follows the rule of linear superposition. Mathematically, the stress and strain at time t can be expressed as:

σ(t) = ∫_0^t E(t - τ) (∂ε(τ)/∂τ) dτ (16)

ε(t) = ∫_0^t J(t - τ) (∂σ(τ)/∂τ) dτ (17)

where J(t) and E(t) are the creep compliance function and the relaxation modulus function, respectively.
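Returning to the elastic solver, the FFT evaluation of Eq. (11) can be sketched as follows (a minimal illustration with names of my choosing). A plain FFT product implements a circular convolution, so the kernel and pressure arrays are zero-padded to twice the grid size to avoid wrap-around; the small direct sum at the end checks the result.

```python
import numpy as np

def displacement_fft(K, p):
    """Evaluate u[i,j] = sum_kl K[|i-k|, |j-l|] * p[k,l] with FFTs.
    K holds influence coefficients for non-negative offsets only; it is
    mirrored onto a 2N x 2M padded grid so that negative offsets are
    represented, which also prevents circular wrap-around."""
    n, m = p.shape
    Kp = np.zeros((2 * n, 2 * m))
    Kp[:n, :m] = K
    Kp[n + 1:, :m] = K[:0:-1, :]          # mirror in x (offsets -1..-(n-1))
    Kp[:n, m + 1:] = K[:, :0:-1]          # mirror in y
    Kp[n + 1:, m + 1:] = K[:0:-1, :0:-1]  # mirror in both
    pp = np.zeros((2 * n, 2 * m))
    pp[:n, :m] = p
    u = np.fft.irfft2(np.fft.rfft2(Kp) * np.fft.rfft2(pp), s=Kp.shape)
    return u[:n, :m]

# check against the direct O(N^2 M^2) sum on a tiny grid (toy kernel)
rng = np.random.default_rng(0)
K = 1.0 / (1.0 + np.add.outer(np.arange(4) ** 2, np.arange(4) ** 2))
p = rng.random((4, 4))
u_direct = np.zeros_like(p)
for i in range(4):
    for j in range(4):
        for k in range(4):
            for l in range(4):
                u_direct[i, j] += K[abs(i - k), abs(j - l)] * p[k, l]
u_fft = displacement_fft(K, p)
```

The padded-FFT route costs O(NM log NM) instead of O(N²M²), which is the speed-up quoted from [1].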
J(t) represents the time-dependent strain change under a step change in stress, and E(t) represents the time-dependent stress change under a step change in strain. Based on Eq. (17), the Boussinesq and Cerrutti equation can be modified to represent linear viscoelasticity by adding the creep compliance function:

u_e(x,y,t) = ∬ [∫_0^t J(t - τ) (∂p(x′,y′,τ)/∂τ) dτ] (1 - v)/(πR) dx′ dy′ (18)

where R = √((x - x′)² + (y - y′)²). In Eq. (18), the creep compliance function J(t - τ) replaces the term 1/2G of Eqs. (4) and (5). Rearranging Eq. (18) gives:

u_e(x,y,t) = ∫_0^t J(t - τ) [∬ (1 - v)/(πR) (∂p(x′,y′,τ)/∂τ) dx′ dy′] dτ (19)

Eq. (19) cannot be solved analytically for rough fracture surfaces. However, if the time integration term can be de-coupled from the pressure integration term, Eq. (19) becomes a linear equation system and can therefore be solved numerically. To de-couple the time integration term, the time duration t is discretized into N_t time steps. The time interval is uniform and is termed Δt. The time interval is assumed to be sufficiently small that the pressure distribution field does not change within each time interval. In addition, based on the fundamental theorem of calculus, the term (∂p(x′,y′,τ)/∂τ)dτ can be substituted by the finite difference p(x′,y′,τ + dτ) - p(x′,y′,τ). With these substitutions, Eq. (19) becomes a discrete sum over the time steps (Eq. (20)), where α = 1, 2, …, N_t. In addition, since the pressure distribution field does not change within each time interval, it can be removed from the time integration term (Eq. (21)). Eq. (21) indicates that the time integration term is de-coupled from the pressure integration term. The pressure integration term can then be discretized, similarly to Eq. (11): from Eqs. (4), (5), and (11), the Boussinesq equation can be discretized (Eqs. (22) and (23)), and Eq. (21) can then be discretized as Eq. (24). To implement the FFT, Eq. (24) is decoupled into two equations, (25) and (26); Eq. (26) is solved by the FFT method (Eq. (27)), similarly to Eqs. (13) and (14). Within each time step, Eqs. (8)-(10), (15), (25), and (27) are solved using the CG method. The pressure distribution field is obtained and stored.
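At a single surface point, the de-coupled scheme reduces to a hereditary sum over pressure increments weighted by the creep compliance at the elapsed time. A scalar sketch of this idea (my own simplification; the full method of [7] applies the spatial FFT convolution at every time step):

```python
import numpy as np

def creep_displacement(p_hist, t, J):
    """Hereditary sum u(t_a) = sum_b J(t_a - t_b) * (p_b - p_{b-1}):
    each pressure increment acts as a step held constant over the rest
    of the history (piecewise-constant pressure within each interval)."""
    u = np.zeros_like(t)
    p_prev = 0.0
    for tb, pb in zip(t, p_hist):
        dp = pb - p_prev
        u[t >= tb] += J(t[t >= tb] - tb) * dp
        p_prev = pb
    return u

# Maxwell compliance J(t) = 1/G + t/eta (one common normalization;
# shear formulations may carry extra factors of 2)
G, eta = 2.0, 8.0
J = lambda tt: 1.0 / G + tt / eta
t = np.linspace(0.0, 4.0, 9)
p_hist = np.full_like(t, 10.0)    # constant pressure applied at t = 0
u = creep_displacement(p_hist, t, J)
```

For a constant pressure applied at t = 0, the sum reproduces the expected creep curve u(t) = p·J(t).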
Then, a new time step is added (α is increased by one), and the new deformation and pressure fields are solved based on the historical pressure fields. Figure 1 summarizes the main calculation algorithm based on the above mathematical concepts.

Model validation

Before simulating visco-elastic deformations of rough rock fractures, it is essential to validate the numerical code against analytical solutions. In this research, the analytical solutions provided by Radok and Lee [14] are used for validation. In their solutions, a rigid spherical indenter is indented into a flat visco-elastic surface, and the visco-elastic models for the flat surface are the Maxwell and Standard Linear Solid (SLS) models. Figure 2 illustrates the geometry setup for the analytical solution, and Figure 3 shows the concepts of the Maxwell and SLS models.

Figure 2. Geometry setup for the analytical solution (Kang et al. [13]). R is the radius of the spherical rigid indenter, P is the total load, δ is the indentation depth, t is the time duration, and a(t) is the radius of the contacting region.

The Maxwell model consists of a dashpot and a spring. The dashpot represents viscosity, with a viscosity of η; the spring represents elasticity, with a shear modulus of G. Under a constant stress σ_0, the strain can be obtained:

ε(t) = σ_0 (1/(2G) + t/(2η)) (28)

Eq. (28) implies that under constant stress, the strain rate does not change with time. The creep compliance can be expressed as:

J(t) = 1/(2G) + t/(2η) (29)

Another parameter, the relaxation time T, is defined as:

T = η/G (30)

In the numerical simulation, Eq. (29) is implemented into Eq. (27), and the displacement and pressure fields are solved as described in Sections 2.1 and 2.2. For the geometry setup shown in Figure 2, the analytical solutions for the contacting region radius and the pressure field can be obtained (Eqs. (31) and (32)), where p is the pressure field, t is the total time duration, υ is the Poisson's ratio, and r is the distance from the center of the contacting region.
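The two material models can be compared directly through their creep compliances. In the sketch below, the Maxwell form J(t) = 1/G + t/η and its relaxation time T = η/G follow the standard one-dimensional normalization (the chapter's shear formulation carries extra factors of 2); the SLS form assumes a spring G1 in series with a Kelvin element (G2 in parallel with η), which may differ from the configuration used in the chapter.

```python
import numpy as np

def J_maxwell(t, G, eta):
    """Maxwell creep compliance: elastic jump 1/G plus steady viscous flow."""
    return 1.0 / G + t / eta

def J_sls(t, G1, G2, eta):
    """Standard Linear Solid creep compliance (assumed configuration:
    spring G1 in series with a Kelvin element G2 || eta): instantaneous
    1/G1, creeping to 1/G1 + 1/G2 with time constant eta/G2."""
    return 1.0 / G1 + (1.0 - np.exp(-t * G2 / eta)) / G2

t = np.linspace(0.0, 10.0, 101)
jm = J_maxwell(t, G=1.0, eta=2.0)
js = J_sls(t, G1=1.0, G2=1.0, eta=2.0)
```

Under constant stress, the Maxwell element creeps at a constant rate indefinitely, while the SLS creeps toward the finite asymptote 1/G1 + 1/G2, which is why the SLS runs below can be followed to longer times (5τ) without unbounded closure.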
The SLS model consists of one dashpot and two springs. The dashpot represents viscosity, with a viscosity of η; the two springs represent elasticity, with shear moduli of G_1 and G_2, respectively. Under a constant stress σ_0, the strain can be obtained (Eq. (33)), and the creep compliance J(t) is expressed by Eq. (34). The relaxation time T is defined by Eq. (35). In the numerical simulation, Eq. (34) is implemented into Eq. (27), and the displacement and pressure fields are solved as described in Sections 2.1 and 2.2. For the geometry setup shown in Figure 2, the analytical solutions for the contacting region radius and the pressure field can be obtained (Eqs. (36) and (37)), where p is the pressure field, t is the total time duration, υ is the Poisson's ratio, and r is the distance from the center of the contacting region.

Figures 4 and 5 compare the numerical and analytical solutions for the SLS and Maxwell models, respectively. The solid lines are the numerical solutions obtained by the author, and the dashed lines are the analytical solutions solved by Johnson [15]. In Figures 4 and 5, r_h is the contact radius at time zero, p_h is the maximum contacting pressure at time zero, and T is the relaxation time. Figures 4 and 5 indicate that the deviation between the numerical and analytical results is less than 10%. Therefore, the numerical code can be used to simulate the visco-elastic deformations of rough fractures. For the two validation cases, the numerical simulation accuracy is not strongly dependent on the total number of elements, but on the time interval Δt. The deviation between the numerical and analytical solutions becomes smaller as the time interval Δt is reduced.

Brief introduction of Brown's (1995) model

In this chapter, synthetic fracture surface pairs are generated based on Brown's model [10]. Brown's probabilistic model assumes that the surface is self-affine and that the surface height distribution follows a Gaussian distribution [10].
The surface geometry can be completely described by three parameters: the Hurst exponent H, the mismatch length λ_c, and the root mean square roughness RMS. Mathematically, a self-affine surface is defined by the scaling relation

z(εx) = ε^H z(x) (in the statistical sense)

where H is the Hurst exponent, z is the surface height, and ε is a constant for scaling in the x-direction. The H value is between 0 and 1, and it describes local roughness. A smaller H value corresponds to a rougher local surface profile. The H value can be obtained from the power spectrum of the surface height. The power spectrum of a surface can be obtained by decomposing the surface profile into a series of sinusoidal waves via the Fourier transform; each sinusoidal wave has its own amplitude A, wavelength λ, and phase. Figure 6 shows a schematic of the decomposition process. The power (A²) is defined as the square of the amplitude A, and the plot of power versus the wavenumber (the inverse of wavelength, i.e., 2π/λ) is defined as the power spectrum. Figure 7 shows a schematic of a power spectrum. In Figure 7, the wavenumber q has an upper bound and a lower bound. For the lower bound, q_min = 2π/λ_L, where λ_L is the surface dimension; for the upper bound, q_max = 2π/λ_1, where λ_1 is the surface measurement resolution. The second parameter is the mismatch length, λ_c. As illustrated in Figure 6, each wave component has its own wavelength λ. Glover et al. [16] and Brown [10, 17, 18] stated that for most natural rock joints, the two surfaces have relative shear displacements. At long wavelengths, the wave components match well; at short wavelengths, the wave components are not identical. Based on the above observation, Brown [10] proposed a parameter, the critical wavelength λ_c, which is also called the mismatch length scale. Brown [10] assumed that above the mismatch wavelength, the decomposed wave components of the two surfaces match perfectly; they have the same amplitudes, wavelengths, and phases.
On the contrary, below the mismatch wavelength, the decomposed wave components of the two surfaces do not match; they have the same amplitudes and wavelengths, but the phases are independent. Figure 8 illustrates the concept of the mismatch wavelength. The third parameter is the root mean square roughness, RMS. It represents the absolute scale of the surface asperity elevation. Mathematically, the RMS value σ is defined through an integral of the power spectrum C(q) over the wavenumber q between q_min and q_max.

Figure 7. Schematic of a power spectrum (Kang et al. [13]).

When generating a synthetic surface, the surface heights are normalized by their own RMS value, σ_ini, and then multiplied by the designated RMS value, σ_des:

z_des = (σ_des / σ_ini) z_ini

where z_ini is the initial surface height and z_des is the surface height after linear scaling. In this chapter, only the key mathematical concepts of Brown's [10] model are introduced; other details can be found in [10].

Generated synthetic surface pairs

Brown [10] measured the Hurst exponent H, mismatch length λ_c, and RMS for 23 natural rock joints. His measurement results imply that the H value is normally between 0.5 and 1.0, the normalized λ_c value (λ_c / fracture profile length) is normally between 0.02 and 0.2, and the normalized RMS value (RMS / fracture profile length) is normally between 0.005 and 0.015. Based on the above conclusions, seven synthetic fracture surface pairs are generated, with different H, λ_c, and RMS values. Table 1 summarizes the parameters of the seven synthetic surface pairs. It is worth noting that surface pair No. 2 is the reference surface pair. Table 1 shows that between surface pairs 1, 2, and 3, the H value is varied; between surface pairs 2, 4, and 5, the λ_c value is varied; and between surface pairs 2, 6, and 7, the RMS value is varied. For each surface pair, the aperture distribution field can be plotted.
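Brown's recipe can be sketched in a few lines of spectral synthesis (my own minimal implementation, not the chapter's code): both surfaces share power-law Fourier amplitudes, phases are shared for wavelengths longer than λ_c and drawn independently below it, and each surface is rescaled to the designated RMS. The amplitude exponent -(H + 1) is one common convention for 2-D self-affine synthesis and is an assumption here.

```python
import numpy as np

def brown_pair(n, L, H, lam_c, rms, seed=0):
    """Matched pair of self-affine surfaces in the spirit of Brown's model:
    shared power-law Fourier amplitudes ~ qr**-(H+1); phases identical for
    q < 2*pi/lam_c (long wavelengths) and independent below the mismatch
    length; both surfaces rescaled to the designated RMS roughness."""
    rng = np.random.default_rng(seed)
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    QX, QY = np.meshgrid(q, q, indexing="ij")
    qr = np.hypot(QX, QY)
    amp = np.zeros((n, n))
    amp[qr > 0] = qr[qr > 0] ** (-(H + 1.0))   # self-affine amplitude decay
    ph1 = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    ph2 = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    shared = qr < 2.0 * np.pi / lam_c
    ph2[shared] = ph1[shared]                  # matched long wavelengths

    def synth(ph):
        z = np.real(np.fft.ifft2(amp * np.exp(1j * ph)))
        z -= z.mean()
        return z * (rms / np.sqrt(np.mean(z ** 2)))  # linear RMS rescaling

    return synth(ph1), synth(ph2)

z1, z2 = brown_pair(n=64, L=1.0, H=0.8, lam_c=0.2, rms=0.01)
aperture = np.abs(z1 - z2)   # mismatch at short wavelengths opens the gap
```

The aperture z1 - z2 then comes entirely from the mismatched short-wavelength content, which is the mechanism Figure 8 illustrates.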
Figure 9 plots the aperture fields for surface pairs 1, 2, and 3; Figure 10 plots the aperture fields for surface pairs 2, 4, and 5; and Figure 11 plots the aperture fields for surface pairs 2, 6, and 7. Based on Figures 9-11, we have the following observations:

1. According to Figure 9, when H increases, the average and standard deviation of the aperture decrease;

2. According to Figure 10, when λc decreases, the average and standard deviation of the aperture decrease;

3. According to Figure 11, the average and standard deviation of the aperture scale linearly with the RMS value.

Table 1. The parameters of the seven synthetic surface pairs.

Table 2 summarizes the mean and standard deviation of the aperture for each surface pair. In the numerical code, each calculated aperture field (shown in Figures 9-11) is treated as the initial aperture field.

Creep simulation results for the Maxwell model

The author uses the Maxwell model to calculate the visco-elastic deformation of the seven synthetic surface pairs. The mechanical properties of Vaca Muerta Shale measured by Mighani et al. [19] are used as the input parameters; those properties are summarized in Table 3.

Figure 11. Aperture fields for different RMS values (Kang et al. [13]).

Before showing the results, two parameters are introduced, the macroscopic stress σ and the contact ratio:

1. The macroscopic stress σ = total force applied to the fracture / fracture surface area;

2. Contact ratio = 100 × (the number of grids in contact / total number of grids).

Figures 12 and 13 show the mean aperture and contact ratio evolving with time for the seven synthetic surface pairs, respectively. The total time duration is 2τ, and the macroscopic stress σ = 10 MPa. The initial changes of the mean aperture and contact ratio correspond to the fracture elastic deformation. Based on Figures 12 and 13, several conclusions can be drawn:

1. As H decreases, the mean aperture increases, and the contact ratio increases more slowly with time;

2.
As RMS increases, the mean aperture increases, and the contact ratio increases more slowly with time;

3. As λc increases, the mean aperture increases, and the contact ratio increases more slowly with time;

4. Under the current macroscopic stress, time duration, and surface parameters, the contact ratio is generally less than 9.5%.

Table 4 summarizes the effect of surface parameters on the mean aperture and contact ratio. Figure 14 shows the contact region and the local contacting stress evolution of surface pair 3 before and after the creep stage. The macroscopic stress is 10 MPa and the creep time duration is 2τ. The colored regions and white regions correspond to the contacting and non-contacting regions, respectively. The color bar scale is 2000 MPa. After the creep stage, the area of the contacting regions becomes larger, and the local contacting stress reduces. However, even after the creep stage, the contact ratio is still less than 9.5%. Under the same time duration, if η is reduced, the contact area will increase more rapidly.

Creep simulation results for the SLS model

The author also uses the SLS model to calculate the visco-elastic deformation of the seven synthetic surface pairs. The mechanical properties of Vaca Muerta Shale measured by Mighani et al. [19] are used as the input parameters; those properties are summarized in Table 5.

Table 6. Effect of surface parameters on the mean aperture and contact ratio.

Figure 16. Contact ratio changing with time (Kang et al. [13]). The time duration is normalized by τ.

Figures 15 and 16 show the mean aperture and contact ratio evolving with time for the seven synthetic surface pairs, respectively. The total time duration is 5τ, and the macroscopic stress σ = 10 MPa. The total time duration is extended from 2τ to 5τ to show the time-decaying creep rate. The initial changes of the mean aperture and contact ratio correspond to the fracture elastic deformation. Based on Figures 15 and 16, several conclusions can be drawn:

1.
As H decreases, the mean aperture increases, and the contact ratio increases more slowly with time;

2. As RMS increases, the mean aperture increases, and the contact ratio increases more slowly with time;

3. As λc decreases, the mean aperture increases, and the contact ratio increases more slowly with time;

4. Under the current macroscopic stress, time duration, and surface parameters, the contact ratio is generally less than 7.0%;

5. Under the current macroscopic stress, time duration, and surface parameters, the creep rate decreases significantly with time. This is mainly because the SLS model assumes an exponentially decaying creep rate.

Table 6 summarizes the effect of surface parameters on the mean aperture and contact ratio. Figure 17 shows the contact region and the local contacting stress evolution of surface pair 3 before and after the creep stage. The macroscopic stress is 10 MPa and the creep time duration is 5τ. The colored regions and white regions correspond to the contacting and non-contacting regions, respectively. The color bar scale is 2000 MPa. After the creep stage, the area of the contacting regions becomes larger, and the local contacting stress reduces. However, even after the creep stage, the contact ratio is still less than 7.0%. Under the same time duration, if η is reduced, the contact area will increase more rapidly. considered so the results can be more realistic. Last but not least, the effect of shear stress can be simulated to make the results more applicable.
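The two rheological models compared above differ in their creep response under constant stress. A minimal sketch of the standard one-dimensional forms (with placeholder moduli, not the chapter's BEM implementation or the Vaca Muerta parameters): the Maxwell element (spring E in series with dashpot η) creeps at the constant rate σ/η after the elastic step σ/E, while the standard linear solid (spring E1 in series with a Kelvin element E2 ∥ η) creeps toward a finite limit with a rate decaying as exp(−t/τ), τ = η/E2 — which is why the SLS runs show a time-decaying creep rate.

```python
import math

def maxwell_strain(sigma, E, eta, t):
    """Creep strain of a Maxwell element under constant stress:
    elastic step sigma/E plus steady viscous flow sigma*t/eta."""
    return sigma / E + sigma * t / eta

def sls_strain(sigma, E1, E2, eta, t):
    """Creep strain of a standard linear solid (spring E1 in series
    with a Kelvin element E2 || eta): the creep rate decays as
    exp(-t/tau) with tau = eta/E2."""
    tau = eta / E2
    return sigma / E1 + (sigma / E2) * (1.0 - math.exp(-t / tau))

# Placeholder values (illustrative only, not the chapter's shale data)
sigma, E, eta = 10e6, 30e9, 1e18     # Pa, Pa, Pa*s
tau = eta / E                        # Maxwell relaxation time
```

Evaluating both models over a window of a few τ reproduces the qualitative behaviour reported above: a linear-in-time Maxwell response versus a saturating SLS response.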
The pc-scale radio structure of MIR-observed radio galaxies

We investigated the relationship between the accretion process and jet properties by utilizing the VLBA and mid-infrared (MIR) data for a sample of 45 3CRR radio galaxies selected with a flux density at 178 MHz $>16.4$ Jy, 5 GHz VLA core flux density $\geq$ 7 mJy, and MIR observations. The pc-scale radio structure at 5 GHz is presented by using our VLBA observations of 21 sources in February 2016, the analysis of archival data for 16 objects, and measurements taken directly from the literature for 8 radio galaxies. The accretion mode is constrained from the Eddington ratio with a dividing value of 0.01, which is estimated from the MIR-based bolometric luminosity and the black hole masses. While most FRII radio galaxies have a higher Eddington ratio than FRIs, we found that there is no single correspondence between the FR morphology and the accretion mode, with eight FRIIs at low and two FRIs at high accretion rate. There is a significant correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio. Various morphologies are found in our sample, including core-only, single-sided core-jet, and two-sided core-jet structures. We found that a higher accretion rate may be more likely related with a core-jet structure, and thus a more extended jet. These results imply that higher accretion rates are likely able to produce more powerful jets. There is a strong correlation between the MIR luminosity at 15 $\mu$m and the VLBA 5 GHz core luminosity, in favour of a tight relation between the accretion disk and jets. In our sample, the core brightness temperature ranges from $10^{9}$ to $10^{13.38}$ K with a median value of $10^{11.09}$ K, indicating that systematically the beaming effect may not be significant...
INTRODUCTION

It has long been found that Fanaroff-Riley type I radio galaxies (FRIs) are edge-darkened, while Fanaroff-Riley type II radio galaxies (FRIIs) are edge-brightened (Fanaroff & Riley 1974). For a given host galaxy luminosity, FRIs have lower radio luminosities than FRIIs (Owen & Ledlow 1994).

THE SAMPLE

To systematically study the relationship between the accretion mode and the pc-scale jet properties, we chose a sample from the 3CRR catalogue (Laing et al. 1983). There are 173 sources in the 3CRR catalogue, including 43 quasars, 10 broad-line radio galaxies, and 120 narrow-line radio galaxies. The original 3CRR catalogue has a flux limit of 10 Jy at 178 MHz and is the canonical low-frequency selected catalogue of bright radio sources. From the 3CRR sample, the MIR observations have been well studied for a well-defined, radio flux-limited sample of 50 radio galaxies with a flux density at 178 MHz > 16.4 Jy and a 5 GHz VLA core flux density ≥ 7 mJy (e.g. Ogle et al. 2006; Haas et al. 2005; Leipski et al. 2009, etc.). The MIR emission enables us to explore the existence of hidden quasars, thus we use this subsample in our study. We carefully searched for VLBI observations of all these 50 objects and found that 27 sources had already been observed with the VLBA. We observed the remaining 23 targets with the VLBA at 5 GHz. In two objects, the poor uv data prevented us from making good images. Moreover, in three of the 27 sources with archival VLBA observations, the VLBA data are not useful for making final images. After excluding these five sources, the final sample consists of 45 radio galaxies with MIR detections and VLBA observations either by us or from the archive. The essential information of the sample is listed in Table 1, in which 30 sources are FRIIs, 11 sources are FRIs, and the remaining 4 sources are core-dominated sources (Laing et al. 1983). Fomalont et al. (2000), and Worrall et al.
(2004), respectively; Columns (4) and (5): redshift and FR type (I and II; C represents a core-dominated source); Column (6): black hole mass; Column (7): luminosity distance; Columns (8)-(11): the VLA core flux density at 5 GHz, the 178 MHz flux density, the mid-infrared flux density at 15 µm (a - at 24 µm), and the bolometric luminosity; Columns (12)-(13): phase calibrators for the phase-referencing observations, and their separation from the source.

DATA COMPILATION

In this work, the VLBA and MIR data are essential to study the relationship between the accretion mode and pc-scale jets in radio galaxies; they are compiled from our observations and archival data.

VLBA observations and data reduction

The VLBA observations of our sample consist of three groups. In the first group, we performed VLBA observations at C-band with a total observing time of 20 hours for 23 sources, in three blocks for scheduling convenience, on Feb. 13, 14, and 15, 2016 (program ID: BG239). In two of these 23 sources, we were not able to make images due to poor uv data quality, thus this group finally consists of 21 objects. Among these 21 sources, thirteen radio galaxies could be self-calibrated with an observing time of 30 min for each target, while for the remaining eight sources phase-referencing was required, with an on-source time of 40 min each. These sources and the related phase calibrators are listed in Table 1. Group two has 16 radio galaxies, for which the VLBA observational data can be downloaded from the NRAO archive (see program IDs in Table 1). For the remaining eight sources, the third group, the measurements of the jet components can be obtained directly from the literature (Fomalont et al. 2000; Worrall et al. 2004). The data reduction was performed for the sources in groups one and two. Data were processed with AIPS in the standard way.
Before fringe fitting, we corrected for Earth orientation, removed the dispersive delay from the ionosphere, and calibrated the amplitude using the system temperatures and gain curves. Phase calibration followed, in order: correcting for the instrumental phase and delay using pulse-calibration data, then removing the residual phase, delay, and rate for relatively strong targets by fringe fitting on the source itself. For weaker targets, the phase-referencing technique was used, applying the residual phase, delay, and rate solutions from the phase-referencing calibrator to the corresponding target by interpolation. Imaging and model-fitting were performed in DIFMAP, and the final results are given in Table 2, in which the measurements of jet components adopted directly from the literature are also given for eight sources. Tentatively, we assume the brightest component to be the radio core in this work. The VLBA radio images for each object are shown in Figure 1 and Figure 2, for groups one and two, respectively. All images are at 5 GHz, except for 3C 208, for which 8 GHz data are used since no 5 GHz data are available. Column (2): components, C represents the radio core; Column (3): FR type (I and II; C represents a core-dominated source); Column (4): flux density; Columns (5)-(6): component position and its position angle; Column (7): major axis; Column (8): axial ratio; Column (9): brightness temperature.

Accretion mode

FRIs have lower radio luminosities than FRIIs for a similar host galaxy luminosity (Owen & Ledlow 1994). FRIs and FRIIs show a clear dividing line in the radio-optical luminosity plane, which can be re-expressed as a line of constant ratio of the jet or disk accretion power to the Eddington luminosity. This implies that the accretion process plays a more important role in the FRI/FRII dichotomy than a different environment (Ghisellini & Celotti 2001). Quasars hidden by dusty gas will re-radiate their absorbed energy in the infrared. Ogle et al.
(2006) investigated the MIR emission using the Spitzer survey of 3C objects, including radio galaxies and quasars, selected by the relatively isotropic lobe emission. They argued that most of the MIR-weak sources may not contain a powerful accretion disk. It is likely that in the nonthermal, jet-dominated AGNs, the jet is powered by a radiatively inefficient accretion flow or by black hole spin energy, rather than by energy from the accretion disk. Two different central engines are recognized for FRIs and FRIIs in their study, with a dividing value of the luminosity at 15 µm of 8 × 10^43 erg s^−1; the sources with luminosities above it are suggested to contain a radiatively efficient accretion flow. Instead of a fixed dividing luminosity, the accretion mode is investigated in this work from the Eddington ratio L_bol/L_Edd, in which L_bol and L_Edd are the bolometric and Eddington luminosities, respectively. The black hole masses of 17 sources are collected from various literature sources (McLure et al. 2006; Wu 2009; McNamara et al. 2011; Mingo et al. 2014). For the remaining 28 radio galaxies, the black hole masses were estimated by using the relationship between the host galaxy absolute magnitude at R band (M_R) and the black hole mass provided by McLure et al. (2004), in which M_R was calculated from the R magnitude in the updated online 3CRR catalogue. In this work, the bolometric luminosity L_bol is calculated from the mid-infrared luminosity either at 15 or at 24 µm, using the relations in Runnoe et al. (2012),

log L_bol = (10.514 ± 4.390) + (0.787 ± 0.098) log(νL_ν,15µm)   (2)

log L_bol = (15.035 ± 4.766) + (0.688 ± 0.106) log(νL_ν,24µm)   (3)

in which a spectral index of α_ν = −1 is used for the k-correction. We adopted a conventional value of L_bol/L_Edd = 10^−2 to separate the radiatively efficient and inefficient accretion modes (e.g., Hickox et al. 2009). The relationship between the VLBA core luminosity at 5 GHz and the Eddington ratio is presented in Figure 3.
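The Eddington-ratio estimate just described can be sketched as follows. The Runnoe et al. (2012) coefficients are the central values quoted in the text (uncertainties dropped), while the Eddington luminosity coefficient, L_Edd ≈ 1.26 × 10^38 (M_BH/M_⊙) erg s^−1, is the standard electron-scattering value and is not quoted in the paper; function names are illustrative.

```python
import math

L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass (standard value)

def lbol_from_mir(nu_l_nu, band_um=15):
    """Bolometric luminosity (erg/s) from the MIR monochromatic
    luminosity nu*L_nu (erg/s), using the central values of the
    Runnoe et al. (2012) relations quoted in the text."""
    if band_um == 15:
        log_lbol = 10.514 + 0.787 * math.log10(nu_l_nu)
    elif band_um == 24:
        log_lbol = 15.035 + 0.688 * math.log10(nu_l_nu)
    else:
        raise ValueError("band must be 15 or 24 micron")
    return 10.0 ** log_lbol

def eddington_ratio(nu_l_nu, m_bh_solar, band_um=15):
    """L_bol / L_Edd for a black hole mass in solar masses."""
    return lbol_from_mir(nu_l_nu, band_um) / (L_EDD_PER_MSUN * m_bh_solar)

# Example: the 15-um dividing luminosity 8e43 erg/s and a 1e9 Msun hole
ratio = eddington_ratio(8e43, 1e9)
```

A source at the 15 µm dividing luminosity with a 10^9 M_⊙ black hole lands close to the adopted L_bol/L_Edd = 10^−2 boundary, so the two divisions are roughly consistent for such masses.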
The rest-frame 5 GHz luminosity is estimated from the VLBA 5 GHz or 8 GHz (for 3C 208) core flux density using a spectral index of α = 0. While most FRII radio galaxies have a higher Eddington ratio than FRIs, we found that there is no single correspondence between the FR morphology and the accretion mode. Eight out of thirty FRIIs (26.7%) may have a low accretion rate with L_bol/L_Edd < 10^−2, and the remaining 22 objects (73.3%) are in the high accretion mode with L_bol/L_Edd ≥ 10^−2. In contrast, two out of eleven FRIs (18.2%) are in the radiatively efficient accretion mode, and the remaining 81.8% of FRIs are radiatively inefficient. There is a significant correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio, with a Spearman correlation coefficient of r = 0.820 at ≫ 99.99 per cent confidence. This implies that higher accretion rates are likely able to produce more powerful jets. The correlation between the MIR luminosity at 15 µm and the VLBA 5 GHz core luminosity is also investigated in Figure 4. The luminosity at 15 µm in six sources was estimated from 24 µm using a spectral index of α_ν = −1. A significant correlation is found between the two parameters, with a Spearman correlation coefficient of r = 0.849 at ≫ 99.99 per cent confidence. After excluding the common dependence on redshift, the partial Spearman rank correlation method (Macklin 1982) shows that the significant correlation is still present, with a correlation coefficient of r = 0.635 at ≫ 99.99 per cent confidence. The linear fit gives

log L_core,5GHz = (0.951 ± 0.083) log(νL_ν,15µm) − (0.263 ± 3.655)

In a flux-limited low-frequency radio survey like the 3CRR sample, the low-frequency emission is mostly dominated by the lobes, which, however, are normally located at the jet ends and thus represent the past jet activity. In contrast, the MIR, and especially the pc-scale VLBA core emission, arise instantaneously and contemporaneously from the central engine.
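The redshift correction just quoted uses the first-order partial form of the Spearman coefficient, r_{AB·C} = (r_AB − r_AC r_BC)/√((1 − r_AC²)(1 − r_BC²)); the Macklin (1982) method additionally attaches a significance to this quantity, which is omitted here. Below is a hedged sketch with synthetic data (not the paper's measurements), using a hand-rolled rank correlation so no external statistics package is assumed:

```python
import numpy as np

def _rank(x):
    """Ranks 0..n-1 of x (no ties expected for continuous data)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x), dtype=float)
    return r

def spearman(x, y):
    """Spearman rank correlation coefficient."""
    rx = _rank(x) - (len(x) - 1) / 2.0   # ranks are a permutation,
    ry = _rank(y) - (len(y) - 1) / 2.0   # so this centers them exactly
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def partial_spearman(a, b, c):
    """First-order partial Spearman correlation of a and b given c."""
    r_ab, r_ac, r_bc = spearman(a, b), spearman(a, c), spearman(b, c)
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

# Synthetic check: a and b correlate only through c, so the raw
# correlation is strong but the partial correlation is near zero.
rng = np.random.default_rng(0)
c = rng.uniform(0.0, 1.0, 500)
a = c + 0.3 * rng.normal(size=500)
b = c + 0.3 * rng.normal(size=500)
r_raw = spearman(a, b)
r_partial = partial_spearman(a, b, c)
```

In the paper's case a and b play the roles of the two luminosities and c the redshift; a partial coefficient that stays large (as found, r = 0.635) means the luminosity-luminosity correlation is not just a common distance effect.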
The strong correlation strongly indicates a tight relation between the accretion disk and jets, as found in various works (e.g., Cao & Jiang 1999; Gu et al. 2009). In the framework of the unification scheme of AGNs, FRIs are unified with BL Lac objects (BL Lacs), and FRIIs with flat-spectrum radio quasars (FSRQs) (Antonucci 1993; Urry & Padovani 1995). Blazars consist of BL Lacs and FSRQs and are characterized by a strong beaming effect due to jets pointing towards us at small viewing angles. The jets in FSRQs are found to have stronger power and higher velocity than those in BL Lacs (e.g., Gu et al. 2009; Chen 2018). On the other hand, the Eddington ratios of BL Lacs are systematically lower than those of radio quasars, with a rough division at L_bol/L_Edd ∼ 0.01, implying that the accretion mode of BL Lacs may be different from that of radio quasars (e.g., Xu et al. 2009). The radio galaxies used in this study have the advantage of avoiding strong contamination of the VLBA core emission by the jet beaming effect, since the jet viewing angles are usually large in radio galaxies. Our result that a higher accretion rate is likely associated with a stronger jet is generally in agreement with the unification scheme.

Pc-scale Radio Morphology

It can be clearly seen from the high-resolution VLBA 5 GHz images in Figures 1 and 2 that there are various morphologies among our sample sources, including 10 core-only, 29 one-sided core-jet, and 6 two-sided core-jet structures. The two-sided core-jet structure is found in 3C 33, 3C 38, 3C 338, and 3C 452 (see Figures 1 and 2), and in 3C 147 and 3C 286 (Fomalont et al. 2000). In this work, we do not distinguish the latter two categories; instead we call them all core-jet structures. The radio morphologies were further studied via the source fraction of each specified structure among the 17 radio galaxies with inefficient accretion flows and the 28 efficient ones.
At low Eddington ratio (< 10^−2), we found that six out of seventeen sources (35.3%) exhibit a core-only structure, and the remaining sources (64.7%) have a core-jet morphology. In contrast, at high Eddington ratio (≥ 10^−2), core-only and core-jet structures are present in 3 (10.7%) and 25 (89.3%) sources, respectively. It thus seems that a higher accretion rate may be more likely related with a core-jet structure. Since a similar distribution of viewing angles likely holds across our sample of radio galaxies, the radio morphology can perhaps reflect some jet information, such as strength and speed, in the different accretion modes. A core-jet radio morphology likely indicates a relatively powerful jet moving at higher speed, whereas a naked core may indicate a relatively weaker jet at lower speed. Based on our analysis, we found that the radiatively inefficient accretion flow may also be inefficient at producing powerful jets, which move at lower speed, while the radiatively efficient one shows a higher probability of forming strong jets with higher speed. This is consistent with the correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio shown in Figure 3. In a broader framework, this is also consistent with the radio-quiet populations: LINERs and Seyferts can be taken as analogues of the two accretion systems (Kewley et al. 2006). LINERs seem to have radio cores more optically thick than those of Seyferts, and their radio emission is mainly confined to a compact core or the base of a jet; thus the radiatively inefficient accretion flow is likely to host a more compact VLBI pc-scale core than the radiatively efficient one. The pc-scale VLBA projected linear size l of a source is estimated as the largest distance among the radio components for core-jet sources, and directly as the major axis for core-only galaxies.
The distribution of the pc-scale VLBA size of all sources is presented in Figure 5, except for eight objects for which the size is not available in the literature. There is a broad range, with most sources at 1-100 pc, and the jet extends to about 300-400 pc in several core-jet objects. We find a significant correlation between the linear size and the Eddington ratio, with a correlation coefficient of r = 0.671 at ≫ 99.99 per cent confidence (see Figure 5). This indicates that a higher accretion rate may correspond to a more extended jet, again supporting our result of more powerful jets in higher-accretion systems.

Brightness Temperature

From the high-resolution VLBA images, the rest-frame brightness temperature T_B of the radio core can be estimated as (Ghisellini et al. 1993)

T_B = 1.22 × 10^12 (1 + z) S_ν / (θ_d² ν²) K,

with S_ν in Jy, θ_d in mas, and ν in GHz, in which z is the redshift, S_ν is the core flux density at frequency ν, and θ_d is the angular diameter, θ_d = √(ab), with a and b being the major and minor axes, respectively. An important parameter is the Doppler factor δ, which can be constrained by T_B = δ T′_B, in which T′_B is the intrinsic brightness temperature. The core brightness temperature distribution is presented in Figure 6. In our sample, the core brightness temperature ranges from 10^9 to 10^13.38 K with a median value of 10^11.09 K (see also Table 2). Most sources are in the range 10^10 − 10^12 K, below the inverse-Compton catastrophe limit of 10^12 K (Kovalev et al. 2005). Therefore, systematically the beaming effect may not be significant in our sample, although it may not be trivial in some cases, for example in 3C 263, the source with the highest T_B. In comparison, the VLBA core brightness temperatures of blazars typically range between 10^11 and 10^13 K with a median value near 10^12 K, and can even extend up to 5 × 10^13 K (Kovalev et al. 2005, 2009). These results are basically in agreement with the framework of the unification scheme of AGNs, with FRIs/FRIIs and BL Lacs/FSRQs.
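The brightness-temperature estimate can be written as a small helper. This sketch uses the commonly adopted practical form T_B = 1.22 × 10^12 S_ν(1 + z)/(θ_maj θ_min ν²) K, with S_ν in Jy, angular sizes in mas, and ν in GHz; the numerical constant is the usual Gaussian-component convention (as in, e.g., Kovalev et al. 2005) and may differ slightly from the exact expression in Ghisellini et al. (1993). The intrinsic value used to bound δ is an illustrative assumption, not a quantity from the paper.

```python
def brightness_temp(s_jy, theta_maj_mas, theta_min_mas, nu_ghz, z):
    """Rest-frame brightness temperature (K) of a Gaussian VLBI component:
    T_B = 1.22e12 * S * (1 + z) / (theta_maj * theta_min * nu^2),
    with S in Jy, sizes in mas, and nu in GHz."""
    return 1.22e12 * s_jy * (1.0 + z) / (theta_maj_mas * theta_min_mas * nu_ghz**2)

def doppler_from_tb(t_b, t_b_intrinsic=5e10):
    """Doppler factor from T_B = delta * T'_B, for an assumed intrinsic
    brightness temperature (the equipartition-like 5e10 K used here is
    only an illustrative choice)."""
    return t_b / t_b_intrinsic
```

For example, a 0.1 Jy core of size 1 mas × 1 mas observed at 5 GHz at z = 0 gives T_B ≈ 4.9 × 10^9 K, at the low end of the range quoted above.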
The strong beaming effect results in the high brightness temperatures of the radio cores in blazars, while it is less pronounced in radio galaxies because of the large jet viewing angles. We have analyzed the correlation between the brightness temperature and the Eddington ratio in Figure 6. There is no correlation between the two parameters, and the distribution of T_B is similar at high and low accretion rates.

Comparison with VLA data

We collected the VLA 5 GHz flux densities of our sources from the 3CRR catalogue and then compared the VLBA with the VLA flux density. The flux ratio of the VLBA core to the VLA core, and the ratio of the VLBA total to the VLA core, are plotted against the Eddington ratio in Figure 7. The flux ratio between VLBA and VLA can in principle give information on the source compactness, since the two represent the source structure at different scales, normally the former at pc scale and the latter at kpc scale. There are no correlations between the flux ratios and the Eddington ratio. The flux ratio covers more than one order of magnitude, and there is no systematic difference between the high and low accretion regimes. It is interesting that the VLBA core flux density is higher than the VLA core flux density in many sources, which is most likely due to variability. This is even more pronounced when considering the VLBA total flux density: in that case, the VLBA total flux is higher than the VLA core flux in the majority of objects, implying that variability may be common in our sample.

Core/lobe Flux Density Ratio

By comparing the VLBI pc-scale core flux density with the 178 MHz flux density, we investigate the present status of the core radio activity. It is possible that the sources with weak MIR dust emission are only recently in a radiatively inefficient accretion mode, while their large-scale radio morphology was produced by a past radiatively efficient accretion mode; their core/lobe flux density ratios are therefore expected to be low. In previous works (e.g., Ogle et al.
2006), the core-to-lobe luminosity ratio at the VLA is indeed lower in MIR-weak FRIIs than in MIR-luminous FRIIs. The ratio of the VLBA core to the 178 MHz flux density is plotted against the MIR luminosity and the Eddington ratio in Figure 8. While a MIR luminosity at 15 µm of 8 × 10^43 erg s^−1 is adopted to distinguish MIR-weak and MIR-luminous sources in Ogle et al. (2006), we further use the Eddington ratio to identify the accretion mode. The flux ratio of the VLBA core to 178 MHz covers about two orders of magnitude, and there is no single dependence of the flux ratio on either the Eddington ratio or the MIR luminosity. Similar behaviours are seen in the panels of the flux ratio versus MIR luminosity and versus Eddington ratio. Considering the high and low accretion rate regimes separately, there is no correlation between the radio flux ratio and the MIR luminosity or Eddington ratio. The distribution of the flux ratio at high accretion rate is broader than that at low rate, which is mainly concentrated at lower flux ratios and does not extend to very high values. Interestingly, the FRIIs with low MIR luminosity (below 8 × 10^43 erg s^−1) or low accretion rate (L_bol/L_Edd < 10^−2) are exclusively at the lower end of the distribution of the radio flux ratio. In contrast, the two MIR-luminous or highly accreting FRIs are all at the high end. It is not impossible that the location of these sources is due to the recent brightening or weakening of the central engine (i.e., both accretion and jet), resulting in a higher or lower VLBA core luminosity, and thus a lower or higher flux ratio of the VLBA core to 178 MHz.

SUMMARY

We investigated the role of the accretion mode in creating the VLBI jets by utilizing the VLBA and MIR data for a sample of 45 3CRR radio galaxies. The accretion mode is constrained from the Eddington ratio, which is estimated from the MIR-based bolometric luminosity and the black hole masses.
While most FRII radio galaxies have a higher Eddington ratio than FRIs, we found that there is no single correspondence between the FR morphology and the accretion mode, with eight FRIIs at low and two FRIs at high accretion rate. There is a significant correlation between the VLBA core luminosity at 5 GHz and the Eddington ratio. We found that a higher accretion rate may be more likely related with a core-jet structure, and thus a more extended jet. These results imply that higher accretion rates are likely able to produce more powerful jets. There is a strong correlation between the MIR luminosity at 15 µm and the VLBA 5 GHz core luminosity, in favour of a tight relation between the accretion disk and jets. In our sample, the core brightness temperature ranges from 10^9 to 10^13.38 K with a median value of 10^11.09 K, indicating that systematically the beaming effect may not be significant. The exceptional cases, FRIs at high and FRIIs at low accretion rate, are exclusively at the high and low ends, respectively, of the distribution of the flux ratio of the VLBA core to the 178 MHz flux density. It is not impossible that the location of these sources is due to the recent brightening or weakening of the central engine (i.e., both accretion and jet).

Fig. 3: The VLBA core luminosity at 5 GHz versus the Eddington ratio. The asterisks are FRIs, the triangles FRIIs, and the crosses core-dominated sources. The Eddington ratio L_bol/L_Edd = 0.01 is shown as the dotted line to distinguish the high and low accretion rates.

Fig. 4: The VLBA core luminosity at 5 GHz versus the MIR luminosity at 15 µm. The solid line is the linear fit. The dotted line is νL_ν,15µm = 8 × 10^43 erg s^−1, used in Ogle et al. (2006) to distinguish the MIR-weak and MIR-luminous radio galaxies.
PERSONAL PROFILE AND SOCIAL ORIENTATIONS IN THE INTERPERSONAL RELATIONSHIPS AMONG STUDENTS IN THE FIELD OF HEALTH CARE

The PURPOSE of this study is to investigate the level of assertive behavior and self-esteem in students training as nurses and midwives, and to shape their personal profile and social orientations in interpersonal relationships. MATERIAL and METHODS: 44 students studying in the nursing and midwifery professions were investigated for their level of assertiveness and their behavior in interpersonal relationships. The research was carried out by a test method, with the following questionnaires specifically applied: the Personality scale for the study of assertiveness (J. Tindall), the Scale for the study of global self-esteem (M. Rosenberg), and the Test for interpersonal relationships (T. Leary). RESULTS: The results do not differ statistically significantly from the normative sample, i.e., the students studying to be nurses and midwives randomized to participate in this study demonstrated levels of assertiveness and self-esteem within the norm. However, there is a slight tendency toward higher results in the group of midwives in terms of assertiveness.

INTRODUCTION

Today, nurses and midwives are moving away from the traditional subordinate role, increasingly recognizing that they need to behave in an assertive manner. Global nursing defines assertive behavior as an invaluable element of successful professional practice. Assertive nurses are able to make suggestions in a direct and convenient way, know how to give and receive criticism, respect the rights of others, and act responsibly in nursing situations, assessing problems through a thought process focused on solving them (9). There are differences in the interpretation of the concept of assertiveness. Some authors understand the concept as self-sufficiency and self-confidence (1), others associate it with understanding, respect, and acceptance of others (2)(3)(4).
Summarizing the various concepts, Peneva defines assertiveness as a "complex multicomponent personality construct" and derives its basic elements as "the presence of self-confidence, self-esteem and respect for others, the ability to actively defend their interests and openly state their goals, intentions and feelings, without harming the interests of others" (5). Assertiveness is perceived as healthy behavior and is one of the main components of effective communication of health care professionals in their interactions with patients, colleagues, and other health professionals. There is evidence in the literature that nurses lack assertive skills and are perceived as passive and obedient toward other medical professionals, with almost no data on assertiveness in midwives. According to these data, they rarely disagree with the opinions of other medical professionals and are unable to provide constructive criticism. This lack of self-confidence leads to reduced communication efficiency and compromised patient care (6). Assertiveness means speaking up for one's interpersonal freedoms, or as required by one's role responsibilities, to engage others in finding viable, stable solutions. Assertiveness is a learnable skill rather than a personality characteristic (7). Other studies on assertive behavior focus on potential barriers and factors that prevent this behavior; among these they list traditional training, fear, the working atmosphere, and hierarchical structures within hospitals, and they report mixed feelings about the usefulness of assertiveness trainings. Factors that promote confidence include knowledge, experience, and wearing a uniform (8). The present study is guided by Timothy Leary's theory of interpersonal relationships, according to which, in relationships with other people, a person most often shows two tendencies: to dominate or to obey, and to show friendliness or hostility toward them.
The study is based on the idea that these two factors, dominance versus obedience and friendliness versus hostility, determine the general impression of a person in the process of interpersonal perception (10). In a previous study of stress during the training of students in medical specialties, four factors were identified that directly or indirectly hinder their learning and development, and a model for promoting adaptation was proposed (11,12). The study was also prompted by Hildegard Peplau's theory of interpersonal relationships in nursing, according to which any interaction with patients becomes an important therapeutic opportunity for nurses (13). The focus of the theory is primarily on the therapeutic process between nurses and their patients, rather than on pathology. The ability of nurses to carefully guide the individual phases described by Peplau, to hold therapeutic conversations and to maintain professional relationships with their patients largely depends on self-affirming behavior and on the traits that form part of the personality of the health care professional. These two theories also emphasize the importance of the caregiver's ability to understand his or her own behavior in order to help others (13). In order to provide professional and competent care for patients, a relationship of trust with the patient must be developed. Nursing care addressing the somatic disease is extremely important for patients, but the emotional demeanor of the medical staff is also of particular importance for the healing process (14). Based on this theory, one can derive the expectations for the skills that the nurse or midwife must possess, namely to be confident in her work in order to provide competent care for her patients, and to optimize her personal psychological resources for reflection, recognition, empathy, social skills and effective communication (14).
The issue on which the present study focuses is the level of assertiveness and self-esteem, as well as the peculiarities of behavior in interpersonal relationships, in students studying for the nursing and midwifery professions. The study aims to trace the trends in the state of assertiveness and self-esteem and the related mental qualities in students studying in the field of "Health Care". The secondary goal of the study is to establish the features of the traits that form part of the personality of the health care professional and the social orientations that provide information about his or her behavior in interpersonal relationships. METHODS Several databases were searched for publications related to assertiveness in nurses and midwives, to relevant factors expressing confidence in the clinical environment, and for articles assessing assertiveness and interpersonal behavior in nursing and midwifery students. The research was carried out by a test method, with the following questionnaires applied: the "Personality scale for the study of assertiveness" by J. Tindall, in the Bulgarian adaptation of I. Peneva (Peneva, 2012); the "Scale for the study of global self-esteem" by M. Rosenberg; and the "Test for interpersonal relationships" by T. Leary. The questionnaires were administered to a sample of students studying at the Department of Health Care at SWU "Neofit Rilski". The sample included 44 first- and second-year students majoring in Nursing and Midwifery. After the selection of the respondents, informed consent was obtained and data were collected on the mean values in the two groups of specialties, searching for differences in the level of assertiveness, the level of global self-esteem and behavior in interpersonal relationships, as well as for correlations between variables (16). The data processing was performed with the statistical system SPSS 19.0. The hypotheses were tested by applying the following statistical procedures: a one-sample T-test, a T-test for two independent samples, and one-factor stepwise regression analysis.
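The two t-test procedures named above are standard and can be reproduced outside SPSS from first principles. A minimal sketch follows; the scores, group sizes and the normative mean of 50 are invented for illustration and are not the study's data:

```python
# One-sample and independent-samples t statistics, computed from first
# principles (as SPSS would report them); the scores are invented.
from statistics import mean, stdev
from math import sqrt

nurses   = [52, 48, 55, 60, 50, 47, 53, 58]   # hypothetical assertiveness totals
midwives = [56, 61, 58, 63, 55, 59, 60, 62]

# One-sample t-test against an assumed normative mean of 50:
norm = 50
t_one = (mean(nurses) - norm) / (stdev(nurses) / sqrt(len(nurses)))

# Independent-samples t-test (pooled variance):
n1, n2 = len(nurses), len(midwives)
sp2 = ((n1 - 1) * stdev(nurses) ** 2 + (n2 - 1) * stdev(midwives) ** 2) / (n1 + n2 - 2)
t_ind = (mean(nurses) - mean(midwives)) / sqrt(sp2 * (1 / n1 + 1 / n2))

print(f"one-sample t({n1 - 1}) = {t_one:.3f}")
print(f"independent t({n1 + n2 - 2}) = {t_ind:.3f}")
```

The resulting t values would be compared against Student's distribution with the stated degrees of freedom to obtain the p values reported in the tables below.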
Pearson correlation coefficients were used to determine the relationship between confidence, specialty and stage of training. A correlation was also sought between the different sectors (octants) that capture the peculiarities of the traits in behavior (16). The statistical data obtained are presented in tables and reported in the text using the following notation: M - value of the arithmetic mean; SD - value of the standard deviation; t - empirical value of Student's t-criterion; df - degrees of freedom of the data distribution; p - significance level of Student's t-criterion. Table 1 presents the results of the assertiveness test in nursing students and the percentage distribution by individual factors (Table 1). The analysis of the percentage distribution by individual factors shows that most students demonstrate an average level of expression on all factors. A small proportion of students have a high level (2.3%) on factor I. This means that there are students who are extreme in defending personal and consumer rights: they disagree, oppose, criticize and challenge allegations, object and resent. Notable is the low percentage (6.8%) of respondents with a low level on factor II, "Confidence and initiative", and on the overall test. Students in this group still have difficulty expressing an opinion, insisting, and initiating and maintaining a conversation. They have difficulty making decisions, do not trust their own judgments and are not active. However, the majority of students in the sample show medium (79.5%) and high (20.5%) levels on factor II. Table 2 presents the results of the survey of students on the self-esteem indicator, to establish differences in the surveyed indicators between the students and the normative sample (Table 2). The low levels on factor 1 and on the overall test are less frequent than in the normative data distribution.
These students have difficulty considering themselves worthy of respect, do not treat themselves well, and are generally dissatisfied with themselves. More than half of the students showed an average level on both factors and on the overall test, i.e. these data do not differ from the results in the normative sample. 27.8% of them have a high level on factor 1 and on the overall global self-esteem test. These students tend to perceive themselves as losers, feel useless, are not proud of themselves and would like to be respected more. RESULTS The main percentage distributions on the main dimensions of the Timothy Leary test of interpersonal relationships are as follows: friendliness (81.8%) versus hostility (18.1%), and dominance (56.8%) versus obedience (43.1%). Notable is the prevailing orientation toward friendliness, which is a striving for closeness, a variation of trust and care. Domination is a way for the "strong individual" to express his strength and at the same time to show concern and protection. Also notable are the low level of orientation toward trust (9.1%) and the high level of orientation toward anxiety (61.4%). At the heart of the trust orientation are emotional structures that initially appear as roles in the system of mother-child relationships. Anxiety is the other side of the trust system, related to the system of protection. The high level of anxiety orientation among students training to become nurses and midwives could be related to their lack of experience in the field of medical care and to being in an environment with high requirements for both theoretical and clinical training, which activates defence systems. One-factor stepwise regression analysis was used to check the degree of influence of the year of the training course on assertiveness. The year of the course is taken as the predictor in the analysis, and the total score on the assertiveness test as the outcome. No significant differences were found in any of the assertiveness factors.
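The one-factor regression just described (year of course as predictor, total assertiveness score as outcome) corresponds to a simple one-predictor linear regression. A sketch with invented data follows; the values are illustrative, not the study's, and the significance test that SPSS reports alongside the slope is omitted for brevity:

```python
# Simple one-predictor regression: year of study predicting total
# assertiveness score; the data below are invented for illustration.
from statistics import mean

year  = [1, 1, 1, 1, 2, 2, 2, 2]          # predictor: course year
score = [51, 54, 49, 56, 53, 50, 55, 52]  # outcome: assertiveness total

mx, my = mean(year), mean(score)
sxy = sum((x - mx) * (y - my) for x, y in zip(year, score))
sxx = sum((x - mx) ** 2 for x in year)

slope = sxy / sxx
intercept = my - slope * mx
print(f"score = {intercept:.2f} + {slope:.2f} * year")
```

A slope near zero, as in this toy data set, would mirror the study's finding that the year of the course does not affect the assertiveness score.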
The results show that assertiveness is not affected by the year of the students' course. The specificity of the sample, consisting only of first- and second-year students, explains this fact: the short period of training does not provide enough opportunity to achieve significant personal development in the direction of assertiveness. A check of the degree of influence of the year of the training course on self-esteem showed no statistically significant influence of training experience on the overall self-esteem score. No significant differences were found in any of the factors of self-esteem. There is a tendency toward an increased orientation to aggression in second-year students. This side of the trust system depends on the degree of threat (a threatening situation) with which students activate their defence system. To check the degree of influence of the type of specialty on assertiveness and self-esteem, a T-test for two independent samples was applied. Table 4 presents the data obtained during statistical processing by applying a T-test to two independent samples, in order to establish differences between students in the two majors in terms of assertiveness and self-esteem (Table 4). There are no statistically significant differences between nursing students and midwifery students in terms of assertiveness. The midwives show better self-esteem, which could be explained by their longer training in the recent past compared to nurses, as well as by the saying that "... a midwife can work as a nurse, but a nurse can't work as a midwife". Another important point of the midwife's work is that they work mainly with healthy rather than sick people (who require long-term care and empathy in suffering), as well as the visible result, which is most often positive (the birth of a child).
Table 5 presents the results of a statistical T-test procedure for two independent samples to establish differences between students on the orientation scales of the Timothy Leary test (Table 5). Table 4. T-test: assertiveness and self-esteem between student nurses and student midwives. Statistically significant differences between the group of student nurses and the group of student midwives are found in terms of concern (M mid. = 7.81; SD mid. = 2.683; M nurse = 6.42; SD nurse = 1.441; t(44) = 2.219; p = 0.032). Regarding differences in the total score, the mean values in student midwives are greater than those in student nurses. When analyzing the data between the two groups in terms of trust (M mid. = 8.36; SD mid. = 2.639; M nurse = 5.84; SD nurse = 1.772), the same trends are observed, and the reported differences are statistically significant (t(44) = 3.800; p = 0.000). Differences in the total score were again in favor of the midwifery group. After applying the hypothesis-testing procedure, statistically significant differences were also found between the two groups on the anxiety indicator (M mid. = 9.76; SD mid. = 2.064; M nurse = 8.42; SD nurse = 2.070; t(44) = 2.135; p = 0.039), with a clear increase in the values of the overall score of the midwives compared to those of the nurses. BACHEVA M There are no differences between the two groups in terms of the aggression indicator (M mid. = 7.31; SD mid. = 2.433; M nurse = 6.26; SD nurse = 2.005); however, there is a tendency toward higher mean values in midwifery students compared to nursing students.
Table 6 presents the results by individual sectors (octants) of Timothy Leary's discogram, to establish quantitative relationships in significant characteristics between students in the two majors. This set of traits describes the type of interpersonal relationship (Table 6). The analysis of the data on the individual octants of the Leary test found a clear tendency toward predominantly low levels of expression (authoritarian, selfish, aggressive, suspicious, submissive and dependent types) and medium levels of expression (friendly and altruistic types) on the individual octants. The content of each octant describes the behavior of people at low and moderate levels and determines the personal profile of students in both disciplines by including characteristics of the different types of interpersonal relationships. The style of interpersonal relationships includes the following characteristics: self-confident, persistent and insistent, a good counsellor, mentor and organizer (authoritarian type); a tendency toward rivalry and complacency (selfish type); straightforward, stubborn, persistent and energetic (aggressive type); realistic in reflection and actions (suspicious type); modest, compliant, emotionally restrained and able to obey, with no opinion of his own, obedient and honest in his duties (submissive type); conformist, trusting, constantly seeking the trust and recognition of others (dependent type); prone to cooperation and collaboration in solving problems (friendly type). The exception is the result for Octant 8, where higher levels of expression of the altruistic type of interpersonal relationships dominate; this set of features includes excessive responsibility, sacrifice of one's own interests, a striving to help, and compassion for all. The conclusions that can be drawn from this study can generally be summarized as follows: 1. The level of assertiveness and self-esteem of nursing and midwifery students for the most part does not differ from the normative sample. 2.
Assertiveness and self-esteem are not affected by the year of the training course. 3. Midwifery students show higher results in terms of assertive behavior and self-esteem than nursing students, and the reported differences are statistically significant. 4. The majority of students show an orientation mainly toward friendliness and dominance. 5. Most students show an orientation toward trust, concern and anxiety. 6. Many students demonstrate a high level of anxiety orientation. 7. There is a clear tendency toward predominantly low and medium levels of expression on the individual octants. DISCUSSION The study revealed results that do not differ statistically significantly from the normative sample, i.e. the nursing and midwifery students randomized to participate in this study demonstrated levels of assertiveness and self-esteem within the norm. However, there is a slight tendency toward higher results in the group of midwives in terms of assertiveness and self-esteem. The results on the individual octants of the test for interpersonal relationships build the personal profile of the subjects. Most students in the sample demonstrate adaptive behavior, and their profile has moderately pronounced traits of the authoritarian, friendly, aggressive and altruistic types of interpersonal relationships. At the same time, the characteristics of the selfish, suspicious and dependent types of interpersonal relationships are absent. This set of traits, building the profile of the health care professional, can be considered as a set of target reference points, stimulating an appropriate socially significant orientation in behavior that meets the needs of the two regulated professions. CONCLUSION Summarizing the literature, we believe that students studying for the two regulated professions should be encouraged to act autonomously and as advocates for patients, by developing the assertive skills embedded in educational programs.
Therefore, students need to practice their skills through demonstration, role-play and experience in practical laboratories, as well as through assertiveness training, so that they can receive support, direction and feedback from academic nursing staff. This allows these skills to be practiced in a safe environment in which they can be observed by teachers, assessed and given feedback (17).
Genomic and Genetic Insights Into a Cosmopolitan Fungus, Paecilomyces variotii (Eurotiales) Species in the genus Paecilomyces, a member of the fungal order Eurotiales, are ubiquitous in nature and impact a variety of human endeavors. Here, the biology of one common species, Paecilomyces variotii, was explored using genomics and functional genetics. Sequencing the genomes of two isolates revealed key genome and gene features in this species. A striking feature of the genome was its two-part nature, featuring large stretches of DNA with normal GC content separated by AT-rich regions, a hallmark of many plant-pathogenic fungal genomes. These AT-rich regions appear to have been mutated by repeat-induced point (RIP) mutation. We developed methods for genetic transformation of P. variotii, including forward and reverse genetics as well as crossing techniques. Using transformation and crossing, RIP activity was identified, demonstrating for the first time that RIP is an active process within the order Eurotiales. A consequence of RIP is likely reflected in a reduction in the number of genes within gene families, such as those involved in cell wall degradation, and in the growth limitations of P. variotii on diverse carbon sources. Furthermore, using these transformation tools, we characterized a conserved protein containing a domain of unknown function (DUF1212) and discovered that it is involved in pigmentation. INTRODUCTION Species in the order Eurotiales are amongst the best characterized fungi. They include Penicillium rubens, the source of life-saving penicillin; the model filamentous fungus Aspergillus nidulans; the industrial species and source of citric acid Aspergillus niger; and the human pathogen Aspergillus fumigatus (Galagan et al., 2005;Max et al., 2010;de Vries et al., 2017). While a handful of these species have been extensively studied, most have not received a high level of investigation, yet they might provide similar benefits or risks to people.
Paecilomyces variotii is a ubiquitous thermo-tolerant species that is encountered in food products, soil, indoor environments and clinical samples (Houbraken et al., 2010). Its thermotolerance and ability to grow at low oxygen levels allow it to survive heat treatment, and it has been widely isolated as a contaminant of products such as heat-treated fruit juices (Houbraken et al., 2006). Furthermore, it is emerging as an opportunistic human pathogen (Steiner et al., 2013), with cases of infection by P. variotii and the closely related species Paecilomyces formosus in immuno-compromised individuals (Torres et al., 2014;Polat et al., 2015;Feldman et al., 2016;Kuboi et al., 2016;Swami et al., 2016;Bellanger et al., 2017;Uzunoglu and Sahin, 2017) and of plant disease (Heidarian et al., 2018). While this organism can be detrimental to human health, it also lends itself to diverse industrial applications. P. variotii has been explored as a source of industrial tannase, as its tannase has beneficial characteristics including a high optimum temperature (Battestin and Macedo, 2007a,b). Among its other enzymes with favorable properties for industry are a thermostable glucoamylase (Michelin et al., 2008), a glucose-tolerant β-glucosidase (Job et al., 2010) and an alcohol oxidase that displays stability at high temperature (50°C) and over a wide pH range (from 5 to 10) (Kondo et al., 2008). Despite the relevance of Paecilomyces species to human activities across the world, no well-annotated genome sequence is currently available for any member of the genus Paecilomyces, except for draft genomes of P. formosus (Oka et al., 2014) and P. niveus (Biango-Daniels et al., 2018). Furthermore, methods for genetic manipulation or classical genetics have not been described for Paecilomyces, further limiting our ability to understand gene functions in the genus. Here, we sequenced and annotated the genome of P.
variotii [Byssochlamys spectabilis] CBS 101075, which is the type strain of the teleomorphic state (Houbraken et al., 2008), and strain CBS 144490, which was isolated in this study. The genomes have a bi-modal pattern of overall DNA G:C content, with alternating stretches of G:C-equilibrated or A:T-rich DNA, reminiscent of those found in the genomes of many plant pathogens as a consequence of repeat-induced point mutation (RIP) (Testa et al., 2016). RIP is a fungal process in which repetitive sequences are recognized during the sexual cycle and targeted for mutation (Hane et al., 2015). Experimental evidence of RIP, that is, of a mutagenic process targeted to duplicated DNA sequences that occurs during mating, is limited to fungi of the classes Dothideomycetes [L. maculans (Idnurm and Howlett, 2003;Van de Wouw et al., 2019)] and Sordariomycetes [Fusarium spp., Magnaporthe oryzae, Neurospora crassa, Podospora anserina, and Trichoderma reesei (Selker and Garrett, 1988;Nakayashiki et al., 1999;Graïa et al., 2001;Cuomo et al., 2007;Coleman et al., 2009;Li et al., 2017), reviewed by Hane et al. (2015)]. In silico sequence analysis suggests that RIP occurs extensively in the fungi [for example, a potential activity in the Basidiomycota (Horns et al., 2012)], including species in the Eurotiales such as A. niger (Braumann et al., 2008), A. nidulans (Nielsen et al., 2001;Clutterbuck, 2004), Aspergillus oryzae (Montiel et al., 2006), Penicillium chrysogenum (Braumann et al., 2008) and Penicillium roqueforti (Ropars et al., 2012). However, whether those patterns of mutation in these species represent RIP, the natural accumulation of mutations over time, or another mechanism of DNA mutation such as the spontaneous deamination of methylated cytosines (Lindahl, 1993;Lutsenko and Bhagwat, 1999), remains unknown. This point is well illustrated in the case of A.
nidulans, a genetic model for many decades yet in which RIP has not been observed despite the in silico evidence (Nielsen et al., 2001;Clutterbuck, 2004). Second, we developed methods for the genetic transformation of P. variotii, including an efficient next-generation-sequencing-based method to identify genes that are mutated in forward genetic screens, and classical genetics in which parents are crossed and their progeny used in genetic segregation analysis. Using these new tools, we characterized two genes of previously unknown function. By combining these methods, we demonstrate RIP activity experimentally for the first time in the Eurotiales, vastly expanding the phylogenetic breadth of the fungi experimentally verified to undergo RIP and thereby suggesting this is indeed a fundamental force that shapes fungal genome evolution. In addition, we compared the plant biomass degrading ability of P. variotii to other Eurotiales, hypothesizing that the active RIP mechanism in this species might reduce gene duplication events and thus limit the expansion of gene families in this species. Consistent with this hypothesis, our analysis revealed the poorest CAZy genome content in P. variotii among the fungal species used for comparison. This, and the identification of a phenotype associated with mutating a gene encoding a protein with a DUF1212 domain, which is at present an enigmatic yet widely conserved domain, highlights how research on P. variotii offers new perspectives to understand the biology of Eurotiales fungi, and fungi more broadly. Wild-Type Strains and Preparation of Growth Media The ex-type strain of Paecilomyces variotii, i.e., strain CBS 101075, was obtained from the Commonwealth Scientific and Industrial Research Organisation culture collection (FRR5219). 
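As noted in the Introduction, the P. variotii genome shows a bi-modal GC pattern with alternating GC-equilibrated and AT-rich blocks. Such blocks can be visualized with a simple sliding-window GC scan over an assembled sequence; a minimal sketch follows, in which the window size and the AT-rich threshold are illustrative assumptions rather than values taken from this study:

```python
# Sliding-window GC scan to visualize alternating GC-equilibrated and
# AT-rich (putatively RIP-affected) blocks in a genome sequence.

def gc_windows(seq, window=5000):
    """Fractional GC content of consecutive non-overlapping windows."""
    seq = seq.upper()
    out = []
    for start in range(0, len(seq) - window + 1, window):
        win = seq[start:start + window]
        out.append((win.count("G") + win.count("C")) / window)
    return out

# Toy demonstration: a GC-equilibrated half followed by an AT-rich half.
toy = "GCAT" * 250 + "ATAT" * 250          # 1,000 bp + 1,000 bp
profile = gc_windows(toy, window=200)
flags = ["AT-rich" if gc < 0.35 else "normal" for gc in profile]
print(list(zip(profile, flags)))
```

Applied per scaffold of a real assembly (with a larger window, e.g. 5-10 kb), this kind of scan recovers the two-part genome structure described above.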
A second strain was isolated as a contaminant after water damage to the laboratory, having attracted attention because of its ability to inhibit the growth of the plant-pathogenic fungus Leptosphaeria maculans. This strain has been deposited at the Westerdijk Institute as CBS 144490. As described below, CBS 101075 (MAT1-1) and CBS 144490 (MAT1-2) are of opposite mating type. An Aspergillus niger strain was isolated from an onion (identification including ITS sequencing, as GenBank MH605508) and used as a source of DNA in molecular biology experiments. The strain was deposited at the Westerdijk Institute as CBS 144491. The strains of Eurotiales species used for carbon utilization profiling are given in Table 1. Genome Sequencing of P. variotii Strains Genomic DNA of the two strains was isolated as described previously (Pitkin et al., 1996). The genome of P. variotii strain CBS 101075 was sequenced using the Pacific Biosciences platform. Unamplified libraries were generated using the Pacific Biosciences standard template preparation protocol for creating > 10 kb libraries. Five µg of gDNA was used to generate each library, and the DNA was sheared using Covaris g-TUBEs to generate fragments of > 10 kb in length. The sheared DNA fragments were then prepared using the Pacific Biosciences SMRTbell template preparation kit, where the fragments were treated with DNA damage repair, had their ends repaired so that they were blunt-ended, and were 5′-phosphorylated. Pacific Biosciences hairpin adapters were ligated to the fragments to create the SMRTbell template for sequencing. The SMRTbell templates were then purified using exonuclease treatments and size-selected using AMPure PB beads. PacBio sequencing primer was then annealed to the SMRTbell template library and sequencing polymerase was bound to them using Sequel Binding kit 2.0.
The prepared SMRTbell template libraries were then sequenced on a Pacific Biosciences Sequel sequencer using v3 sequencing primer, 1M v2 SMRT cells, and Version 2.0 sequencing chemistry with 1 × 360 sequencing movie run times. The filtered PacBio sub-read data were assembled with Falcon version 1.8.8, improved with FinisherSC version 2.0 (Lam et al., 2015), and polished with Arrow (SMRT Link v5.0.0.6792). To aid in gene predictions and annotation, the P. variotii transcriptome was sequenced with Illumina. To generate a diversity of transcripts, mycelia were cultured under four conditions for 4 days without shaking: at two temperatures (30°C and 37°C) and in two media, 10% cleared V8 juice pH 6 and potato dextrose broth. RNA was isolated from mycelium using TRIzol reagent (Invitrogen) following the manufacturer's recommendations, and equal quantities of RNA isolated from each mycelium were pooled. Stranded cDNA libraries were generated using the Illumina TruSeq Stranded RNA LT kit. mRNA was purified from 1 µg of total RNA using magnetic beads containing poly-T oligos. mRNA was fragmented and reverse transcribed using random hexamers and Superscript II (Invitrogen), followed by second-strand synthesis. The fragmented cDNA was treated with end repair, A-tailing, adapter ligation, and 8 cycles of PCR. The prepared library was then quantified using KAPA Biosystems' next-generation sequencing library qPCR kit and run on a Roche LightCycler 480 real-time PCR instrument. The quantified library was then multiplexed with other libraries, and the pool of libraries was prepared for sequencing on the Illumina HiSeq sequencing platform utilizing a TruSeq paired-end cluster kit, v4, and Illumina's cBot instrument to generate a clustered flow cell. Sequencing of the flow cell was performed on the Illumina HiSeq 2500 sequencer using HiSeq TruSeq SBS sequencing kits, v4, following a 2 × 150 indexed run recipe.
Illumina reads were filtered for quality and artifacts, RNA spike-in, PhiX, and N-containing reads, then trimmed and assembled into consensus sequences using Trinity version 2.3.2 (Grabherr et al., 2011). The genome was annotated using the JGI Annotation Pipeline and made publicly available via the JGI fungal genome portal MycoCosm (Grigoriev et al., 2014). Strain CBS 144490 HYG1 is a transformant of strain CBS 144490, with a T-DNA from plasmid pCSB1 inserted into its genome. This strain was sequenced to represent the genome of CBS 144490. Illumina sequencing of strain CBS 144490 HYG1 was conducted at the Australian Genome Research Facility (AGRF), with 100 bp paired-end reads on an Illumina HiSeq 2500 instrument. The nuclear genome was assembled using Velvet with the k-mer setting at 67 and automatic detection of the low-coverage cut-off (Zerbino and Birney, 2008). The mitochondrial genome was assembled using the inbuilt assembler in Geneious version 10.1.3 and annotated, along with the mitochondrial genome of strain CBS 101075, using MFannot (Supplementary Figure S1). Phylogenetic Analysis of Strains of Paecilomyces Three gene regions, those encoding calmodulin and β-tubulin and the internal transcribed spacers (ITS), were used to build phylogenetic trees between strains. Sequences were those used previously (Samson et al., 2009), with the addition of the corresponding regions of P. variotii strains CBS 144490 and CBS 101075 obtained via BLAST searches of the whole genome sequences. Sequences were aligned using MUSCLE (Edgar, 2004) and phylogenetic relationships were inferred using MrBayes (Huelsenbeck and Ronquist, 2001), implemented through Geneious version 11.0.4, using the HKY85 substitution model and 1,100,000 iterations, with the sequences from Paecilomyces divaricatus CBS 284.48 set as the outgroup. Generation of Plasmids for Fungal Transformation Using Agrobacterium tumefaciens Plasmids were constructed for the transformation of P. variotii using A.
tumefaciens for differing purposes. These plasmids are described in the following eight subsections. (i) Mitochondrial GFP barcode series. The nucleotide sequence corresponding to the first 76 amino acids of the L. maculans citrate synthase gene (Lema_T101280.1) was amplified using primers AU268 and AU269 (Supplementary Table S1) from the genomic DNA of strain M1. Suelmann and Fischer (2000) showed that the corresponding sequence from A. nidulans was sufficient to direct GFP localization to the mitochondria. The coding region of the GFP gene was amplified using primers AU108 and AU68 from plasmid PLAU17. These two fragments were then cloned into plasmid PLAU2 using Gibson assembly (New England Biolabs). The resultant plasmid was linearized with PmeI and a 20-nucleotide "barcode" was inserted into this site using Gibson assembly. The barcode contained 20 semi-randomized nucleotides (NMNMNMNMNMNMNMNMNMNM, where N is any nucleotide and M is either A or C, based on Hensel et al. (1995)), and appropriate flanking sequence for Gibson assembly was included, as a single-stranded oligonucleotide AU257 that was made double-stranded via a PCR reaction with primers AU258 and AU259. The pool of resulting fragments was cloned into the PmeI site, resulting in a series of plasmids with different barcodes. The sequences of clones in individual plasmids were determined by Sanger sequencing with primer ai076 (Supplementary Table S2). (ii) H2B-CFP. A fusion protein of A. nidulans histone H2B and GFP has previously been shown to be nuclear-localized (Maruyama et al., 2002). The coding region of the histone H2B gene of A. niger strain CBS 144491 was amplified using primers AU492 and AU493 from genomic DNA. The coding region of CFP was amplified using primers AU494 and AU495 from plasmid PLAU41 (a PLAU2-based expression plasmid for CFP, analogous to PLAU17) and cloned into the BglII site of PLAU2 using Gibson assembly. (iii) dspA complementation construct.
The dspA gene region was amplified using primers AU463 and AU464 and cloned into the XbaI site of plasmid pMAI2 using Gibson assembly. (iv) prmJ complementation construct. Two fragments corresponding to the gene were amplified with primers AU461 and AU438, and AU437 and AU462 and cloned into the XbaI site of plasmid pMAI2 . (v) A. niger prmJ cross-species complementation construct. The coding region of the A. niger prmJ gene was amplified in two parts; the first using primers FD1212AFPLAU2 and FD1212ER, and FD1212DF and FD1212FRPLAU2 and then combined into the BglII site of PLAU53 using Gibson assembly. (vi) mCherry-tagged DspA. The coding region of the dspA gene was amplified by PCR using the dspA complementation construct as a template with primers AU516 and AU473. The mCherry coding sequence was amplified using primers AU474 and AU517. These two fragments were cloned into the BglII site of plasmid PLAU2 using Gibson assembly. (vii) Mitochondrial GFP in a plasmid conferring resistance to G418. A plasmid expressing mitochondrially localized GFP and G418 resistance was created for co-localization experiments. The coding region of the citrate synthase-GFP fusion was amplified from plasmid CSB1 using primers AU268 and AU68 and cloned into the BglII site of plasmid PLAU53 using Gibson assembly. (viii) leuA gene knockout and complementation. A genomic fragment (1,449 bp) corresponding to the 5 flank of the leuA homolog was amplified from strain CBS 101075 using primers MAI0442 and MAI0443. The 3 flank of the gene (1,439 bp) was amplified with primers MAI0444 and MAI0445. The hygromycin expression cassette of plasmid pMAI17 was amplified using primers MAI0440 and MAI0441. The three fragments were cloned, using Gibson assembly, into plasmid pPZP-201BK (Covert et al., 2001) that had been linearized with EcoRI and HindIII restriction enzymes. P. 
variotii transformants were generated with this plasmid, as described below, and assayed for their ability to grow on minimal media without leucine. PCR analysis to confirm the successful integration of the knockout construct into the leuA gene was conducted using primer pairs MAI0023 + MAI0446 and MAI0022 + MAI0447, which amplify from the hph gene into the 5′ or 3′ flank of the leuA locus, respectively. As a complementation control, the wild type copy of leuA was amplified with primers MAI0442 and MAI0445 and cloned into pPZP-201BK linearized with EcoRI and HindIII. The plasmid and the empty pPZP-201BK were electroporated separately into A. tumefaciens strain EHA105. These two A. tumefaciens strains were co-cultured with two leuA deletion strains of P. variotii for 3 days, then overlaid with minimal medium and cefotaxime. Confirmation of T-DNA Insertion Sites and Verification of Complementation by PCR The T-DNA insertion sites of two mutant strains, AU2_33 and AU1_63, were confirmed by PCR. Primers used for AU2_33 were AU446 and ai076 for the intragenic T-DNA, and Match2F and Match2R for the intergenic T-DNA. Primers used for AU1_63 were AU437 and ai076. The integration of the constructs into the genome, for the complementation of strains, was confirmed by PCR using primers AU446 and AU448 for AU2_33 (dspA) and AU437 and AU439 for AU1_63 (prmJ). Transformation of P. variotii by A. tumefaciens Agrobacterium tumefaciens strain EHA105 was transformed with plasmids by electroporation, as described previously, with selection on LB agar + 50 µg/ml kanamycin. An amount of Agrobacterium cells equivalent to a rice grain was scraped directly off the Agrobacterium transformation plate and suspended in 1 ml of SOC media. P. variotii spores were harvested off V8 agar cultures and suspended in dH2O at approximately 10⁶ spores per ml.
Five hundred µl of fungal spores and 100 µl of Agrobacterium suspension were pipetted onto the center of a 145 mm petri dish containing 25 ml of solidified induction media (Gardiner and Howlett, 2004). The mixture was spread around the plate and incubated at 22°C for 3 days and then overlaid with 25 ml of molten CV8 containing 200 µg/ml cefotaxime and either 100 µg/ml hygromycin or 200 µg/ml G418, as appropriate for selection of transformants. Leucine (10 mg/ml) was added to the overlay in the case of the transformation aiming at gene replacement of leuA. Fungal transformants appeared after 5 days and were transferred onto fresh V8 plates containing half the antibiotic concentrations used in the overlay. Barcoding Mutagenesis and NGS to Locate DNA Inserts DNA was extracted from a number of P. variotii transformants that showed growth phenotypes, using a buffer containing CTAB as described previously (Pitkin et al., 1996). The genomic DNA was pooled and sequenced at the AGRF with Illumina sequencing using the same instrument and parameters as strain CBS 144490. Analysis of the next generation sequencing data was conducted in Geneious version 10.1.2. To identify the positions of T-DNA insertions in the genome of a given strain, the NGS reads containing the "barcode" from the construct with which that strain was transformed were pulled out (Supplementary Figure S2). Many of these reads extended out from the T-DNA into the sequence adjacent to the T-DNA, and this section of the P. variotii genome was then identified using BLAST against the genome sequence. Microscopy A Leica M205 stereomicroscope was used for the examination of mating cultures on agar plates. Fluorescence microscopy was performed using a Leica DM6000 microscope. Cell wall staining was conducted using calcofluor white M2R (0.0004%) and emission was detected using a DAPI filter cube. Images were overlaid using ImageJ software. Genetic Crosses Crossing was conducted as described by Houbraken et al.
(2008). Recombination in the progeny was confirmed using genetic markers that were based on PCR amplification of genomic fragments followed by digestion with restriction enzymes. An exception was the mating type locus, for which a multiplex PCR resulting in different product sizes was employed. These markers and the primers used for amplification are summarized in Supplementary Table S3. Amplification and Sequencing of the hph Gene Conferring Hygromycin Resistance From Progeny of an AU2_33 × CBS 101075 Cross A region of each of the T-DNAs was amplified using primer MAI0022, located at the start of the hygromycin phosphotransferase (hph) open reading frame, and a primer specific to the genomic region flanking each of the T-DNA insertion sites (primer Match2R or primer AU439). The resulting PCR product was then used as the template from which to amplify the hph coding region by PCR using primers MAI0022 and MAI0023. This PCR product was sequenced using Sanger chemistry at the AGRF. Southern Blot Analysis of T-DNA Insert Copy Number Approximately 10 µg of genomic DNA was digested with HindIII and separated on a 1% agarose gel by electrophoresis. DNA was blotted onto Hybond-N+ membrane (GE Healthcare) using standard methods. A fragment of the hph gene was labeled with the PCR DIG Probe Synthesis Kit (Roche), as per the manufacturer's directions, hybridized to the blot overnight, and the probe was detected using the DIG wash and block buffer set (Roche) and the DIG Luminescent Detection Kit following the manufacturer's directions. An image of the blot was captured with a ChemiDoc MP (Bio-Rad) using the High Sensitivity Chemi setting. Profiling Fungal Growth on Different Carbon Sources Fungi were grown on Aspergillus minimal medium containing 25 mM monosaccharide or 1% polysaccharide for 2-5 days (depending on the species; Table 1), after which pictures were taken.
Growth was compared using D-glucose as an internal reference, i.e., growth on each carbon source was expressed relative to growth on glucose and then compared between the species. Analysis of Gene Content Gene numbers in different functional categories for the two P. variotii strains were obtained using the "cluster" option from the MycoCosm portal (Grigoriev et al., 2014), comparing the two strains with other species in the Eurotiales as well as Neurospora crassa, where RIP is most extensively characterized. As a focused case study, the putative Carbohydrate-Active enZYmes (CAZys) were filtered for families known to be involved in plant biomass degradation (de Vries et al., 2017). It should be noted that for some families the genes could not always be split by the predicted activity, resulting in some cases in an over-prediction of the number of genes encoding enzymes for the utilization of a certain polysaccharide. RESULTS Genome Sequence Characteristics of P. variotii Strains CBS 101075 and CBS 144490 The genome of ex-type strain P. variotii CBS 101075 was sequenced using long reads of Pacific Biosciences technology, and genes were annotated using the JGI annotation pipeline (Grigoriev et al., 2014). The mitochondrial genome was annotated separately with MFannot software (Supplementary Figure S1). The P. variotii CBS 101075 genome is approximately 30.1 Mb in total size, 4.53% of which is comprised of repetitive DNA of simple repeats and putative transposable elements (Supplementary Table S4). The genome appears to represent gene-encoding regions completely, as estimated by the presence of 100% of CEGMA genes [the Core Eukaryotic Genes Mapping Approach (Parra et al., 2007)]. Analysis using Benchmarking Universal Single-Copy Orthologs (BUSCO V3; Waterhouse et al., 2018) also indicated a high level of completeness of the genome, with 99.6 and 99.0% of BUSCO genes being present using the Fungi or Eukaryota settings, respectively.
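Assembly contiguity statistics such as the N50 quoted for these genomes follow a standard definition: the contig length at which contigs of that length or longer cover at least half of the assembly. A minimal sketch, using hypothetical contig lengths rather than the real assembly:

```python
def n50(contig_lengths):
    """Return the length L such that contigs of length >= L
    together cover at least half of the total assembly."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# Hypothetical contig lengths in bp (illustrative only).
print(n50([700_000, 642_740, 500_000, 300_000, 120_000]))  # 642740
```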
Assembly statistics are comparable to those of other related, recently published genomes (Supplementary Table S4). We identified 9,270 genes in the P. variotii genome (Supplementary Table S4), most of which are complete in having both start and stop codons (98.76%) and have well supported matches in various genomic databases, including NCBI (95.2% of genes) and Pfam (75.02%) (Finn et al., 2016). MCL-based ortholog clustering (Enright et al., 2002) using the genomes in Supplementary Table S4 reveals 8,808 P. variotii genes in orthologous gene clusters, and 462 unique genes. The genome assembly and related data for P. variotii CBS 101075 are available from https://genome.jgi.doe.gov/Paevar1, and the whole genome shotgun project was deposited at GenBank as accession RCNU00000000. The genome of a second isolate, CBS 144490 HYG1, was generated using short read technology. A total of 15,229,380 100 bp paired end reads were generated and assembled into 126 contigs (N50 = 642,740) totaling 32,365,222 bp. This genome was annotated based on that of CBS 101075, and is available from https://genome.jgi.doe.gov/Paevar_HGY_1, deposited in GenBank under accession RHLL00000000 and in the short read archive as PRJNA497137. Phylogenetic Resolution of Sequenced Strains Within the Paecilomyces Genus A phylogenetic analysis was conducted to confirm the species-level taxonomy of strain CBS 144490, and of "P. variotii" strain number 5, whose genome was previously sequenced (Oka et al., 2014). The calmodulin and β-tubulin gene regions and the ITS separate P. formosus and P. variotii into distinct clades (Supplementary Figure S3), in agreement with previous studies (Samson et al., 2009). The regions obtained from the genome sequence of CBS 101075 were identical to those deposited previously for this isolate in GenBank. Strain CBS 144490, isolated in this study, also clearly groups with the other P. variotii strains. However, strain "P. variotii" number 5 (Oka et al., 2014) groups within the P.
formosus clade, and not with P. variotii. Agrobacterium tumefaciens Can Be Used for the Efficient Transformation of P. variotii Although the genomes of P. variotii contain a number of interesting genes and other features, testing their function requires methods for genetic manipulation. As a first step in this process, transformation with exogenous DNA was tested using delivery of T-DNA molecules from Agrobacterium tumefaciens. The T-DNA used expressed hygromycin phosphotransferase and GFP with an N-terminal mitochondrial targeting sequence, and contained a "barcode" sequence inward of the right border. Following selection on hygromycin, colonies were examined for GFP fluorescence: the hyphae of all strains (n = 100) had fluorescent tubules consistent with mitochondria, indicating that when using this transformation system 100% of the resultant colonies have integrated the T-DNA construct, including the DNA for expression of GFP, into their genome (Figure 1). Targeted Gene Disruption in P. variotii Is Possible Despite the Multinucleate Nature of Its Conidiospores Many fungi produce multinucleate spores, meaning that after transformation several passaging steps are required to isolate a homokaryotic mycelium. A histone H2B-CFP fusion construct, causing the localization of CFP to the nucleus, was transformed into P. variotii to allow the number of nuclei in the conidiospores to be counted. Most of the spores contained two or more nuclei, and some spores contained up to four nuclei (Figure 2). The experiments above indicated that P. variotii could be transformed with DNA. However, whether targeted gene mutations were possible, and whether mutants could be easily isolated from a population containing multinucleate spores, was unknown. To address this, the feasibility of targeted gene disruption via homologous recombination in this species was tested.
The leuA gene, encoding the α-isopropylmalate synthase required for leucine biosynthesis, was chosen because mutation of homologs of this gene results in leucine auxotrophy in ascomycete, basidiomycete and Mucoromycota species (Kohlhaw, 2003; Larson and Idnurm, 2010; Ianiri et al., 2011); such mutants are easy to identify by their inability to grow on media without leucine. Of 25 hygromycin-resistant strains transformed with the leuA knockout construct, which contains approximately 1.5 kb of homologous sequence on either side of the hygromycin resistance cassette, four showed reduced growth on minimal media without amino acids and 21 showed wild type growth rate (Figure 3A). The growth of the four strains was restored by the addition of leucine to the medium (Figure 3A). PCR analysis confirmed the correct integration of the hygromycin resistance cassette into the leuA locus in the four leucine auxotrophs (Figure 3B). To confirm that the leucine auxotrophy was due to the gene deletion, two deletion strains were complemented with the wild type copy of leuA. The full length gene was amplified from wild type DNA and cloned into plasmid pPZP-201BK. The pPZP-201BK-leuA and empty pPZP-201BK plasmids were used to transform the two strains using Agrobacterium-mediated delivery of their T-DNAs, with selection on minimal medium without leucine. Colonies were obtained when using the leuA plasmid, but not the empty plasmid (data not shown). Rapid Identification of T-DNA Insertion Sites in Barcoded Mutants by Next-Generation Sequencing To assess the potential for forward genetics using insertional mutagenesis with T-DNA molecules delivered from Agrobacterium in P. variotii, approximately 500 transformants were screened for growth or development phenotypes on V8 juice medium and minimal medium. Transformants with such phenotypes were obtained, and seven were further investigated toward identifying the genes disrupted within them.
A NGS approach was used in which a pool of DNA from the seven strains, each carrying a barcode near the right border of the T-DNA, was sequenced to identify the locations of T-DNA insertion sites (Supplementary Table S5). Three of the strains were found to each contain at least two T-DNA insertions. No reads containing barcode number 4 could be found, and thus the location of the T-DNA in strain AU4_W could not be determined. Three of the strains contained the same barcode sequence (barcode 1). Only two T-DNAs corresponding to barcode 1 were found. However, reads were present which contained barcode 1 and vector sequence extending beyond the right border, so it is likely that one of these strains contains an abnormally integrated T-DNA. Two of the strains in which the T-DNA had clearly inserted within the open reading frame of genes were further studied, namely strains AU1_63 and AU2_33. Three of the strains whose DNA was pooled for sequencing were derived from transformation with the same plasmid and therefore contained the same barcode sequence (#1). PCR was employed to distinguish the insertion events between them, revealing the presence of the mutation in a gene with a domain of unknown function (DUF1212) in strain AU1_63 (Figure 4). The insert is located approximately in the center of the single exon of the gene, upstream of the region encoding the conserved DUF1212 domain (Figure 4C). We named this gene prmJ after the Saccharomyces cerevisiae homolog PRM10, applying to P. variotii the gene nomenclature used for A. nidulans and other Eurotiales species. The strain AU2_33 contains two insertion sites, one in the coding region of a mitochondrial membrane carrier (delayed sporulation A, dspA) and one that was intergenic. The genes near the intergenic insertion were not further characterized. The intragenic T-DNA insert was located 64 bp into the first exon of the dspA gene (Figure 5C).
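The read-sorting step described above, in which pooled NGS reads are assigned to strains by their barcode and the sequence extending from the T-DNA into the genomic flank is recovered for BLAST, can be sketched as follows. This is a minimal illustration: the barcodes and strain names are hypothetical (they only respect the NM-repeat design of the construct, N = any base, M = A or C), not the real ones.

```python
import re

# Barcode design from the construct: 20 nt of (N M) repeats, N = any base, M = A or C.
BARCODE_RE = re.compile(r"^(?:[ACGT][AC]){10}$")

# Hypothetical barcode-to-strain assignments (not the real barcodes).
barcodes = {"ACGATCCATAGCAATCGACC": "AU_x", "TACCGAACTCCAGATACCAA": "AU_y"}
assert all(BARCODE_RE.match(bc) for bc in barcodes)

def assign_flanks(reads):
    """For each pooled NGS read, find a known barcode and keep the sequence
    downstream of it -- the part extending from the T-DNA border into the
    genome, which can then be located by BLAST against the assembly."""
    flanks = {}
    for read in reads:
        for bc, strain in barcodes.items():
            i = read.find(bc)
            if i != -1:
                flanks.setdefault(strain, []).append(read[i + len(bc):])
    return flanks

pool = ["GGACGATCCATAGCAATCGACCTTGACGGTAC",   # barcode 1 + flank
        "AAAATACCGAACTCCAGATACCAAGGCTTAGC"]  # barcode 2 + flank
print(assign_flanks(pool))
```

In practice the recovered flank sequences would be searched with BLASTn against the genome assembly, as in the text.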
Strains AU1_63 and AU2_33 were analyzed by Southern blotting to confirm the number of T-DNA inserts indicated by the genome sequencing data (Supplementary Figure S4). The single T-DNA insertion in strain AU1_63 was supported by the hph gene fragment hybridizing to a single HindIII fragment of approximately 3.9 kb, while the two T-DNAs in strain AU2_33 were indicated by hybridization to two HindIII restriction fragments of ∼6.4 kb and ∼8.3 kb. These sizes are consistent with size predictions based on HindIII sites in the genome sequence data adjacent to the T-DNA insertion sites. Strain AU1_63 Has a Media-Dependent Impairment in Spore Pigmentation, Due to Mutation of a Gene With an Uncharacterized Domain Strain AU1_63 has a pale phenotype on cleared V8 juice (CV8) agar medium because it produces conidiospores that lack the characteristic yellow pigmentation of P. variotii (Figure 4A). The phenotype was not observed when the strain was cultured instead on potato dextrose medium. A wild type copy of the prmJ gene was amplified and cloned into a plasmid containing a construct conferring G418 resistance, and then transformed into strain AU1_63. Of three strains transformed with the complementation construct, two had a phenotype similar to wild type and one resembled the AU1_63 mutant. However, PCR analysis showed that this non-complementing transformant had not integrated the wild-type copy of the gene into its genome, whereas the two other strains with the wild type spore pigmentation had (Figure 4B). Paecilomyces variotii is heterothallic, and comparison of strains CBS 144490 (MAT1-1) and CBS 101075 (MAT1-2) revealed that each has a distinct gene complement at its MAT locus (Supplementary Figure S5). The pair therefore allows the potential for crossing. The 32 progeny of a cross between mutant AU1_63 and CBS 144490 showed perfect co-segregation of hygromycin resistance with the pale colony pigmentation, as shown in Supplementary Table S6.
Two additional genetic markers, 123A and 123B (Supplementary Table S3), located 1,069,000 bp apart on contig 123 of CBS 144490, were examined in these progeny. These markers demonstrated that recombination events take place during crossing, consistent with meiotic reduction events rather than the parasexual reduction in chromosome numbers that can occur in some Eurotiales species. The Mutation in prmJ in Strain AU1_63 Can Be Cross-Species Complemented by the Aspergillus niger prmJ Homolog The DUF1212-containing protein (PrmJ) identified in P. variotii shows strong sequence similarity to homologs from the genus Aspergillus. As a representative example, the A. niger homolog (GenBank: EHA28452.1) shares 66% amino acid identity with PrmJ of P. variotii. To test the hypothesis that the PrmJ proteins have a conserved function, the coding region of the homologous gene from A. niger was cloned into the constitutive expression plasmid PLAU2 and transformed into the AU1_63 mutant. Five putative transformants were obtained; all showed an increase in colony pigmentation, and one of these transformants was further analyzed (Figure 6A). PCR confirmed that the transformant contained both the mutated copy of the prmJ allele and the introduced A. niger transgene (Figure 6B). Thus, the A. niger homolog can complement the functions lost in the P. variotii prmJ gene mutant. The DUF1212 Domain Protein Is Not Essential for Mating in P. variotii There is little information about DUF1212 proteins in fungi, other than that the PRM10 gene of S. cerevisiae is transcriptionally induced in response to sexual pheromones (Heiman and Walter, 2000). Of the progeny of the AU1_63 × CBS 101075 cross, eight contained the disrupted prmJ allele and were of the opposite mating type (MAT1-2) to strain AU1_63 (MAT1-1) (Supplementary Table S6).
One of these isolates was back-crossed to strain AU1_63, and this combination of strains was able to produce the sexual cleistothecia structures and viable progeny from ascospores (Supplementary Figure S6). Hence, the DUF1212 domain protein is not essential for sexual crossing in P. variotii. Strain AU2_33 Has Delayed Sporulation and Growth Defect Phenotypes Due to Mutation of the Mitochondrial Membrane Carrier DspA Strain AU2_33 showed delayed sporulation on CV8 medium, with spore production beginning at around 3-4 days, in contrast to the wild type that produces spores as soon as the colony begins to expand (Figure 5A). Even after 14 days, the amount of sporulation was reduced. On this medium the radial growth rate was not noticeably reduced. In contrast, on defined minimal medium, the radial growth rate of the AU2_33 mutant was highly reduced, as it showed close to no growth. A complementation construct was produced with a wild type copy of the gene, and when it was transformed into strain AU2_33, the transformants showed a phenotype resembling that of the wild type. As expected, PCR analysis of the two complemented isolates indicated that they contain both a mutant and a wild-type allele in their genomes (Figure 5B). Transformants of CBS 101075 expressing a DspA-mCherry fusion protein displayed red fluorescence. This co-localized with the green fluorescence of a mitochondrially localized GFP-citrate synthase fusion protein, indicating that this putative carrier protein also localizes to the mitochondrion (Figure 5D). The T-DNA insertion in strain AU2_33 co-segregated with the delayed sporulation phenotype in 18 out of 20 progeny, as assessed by PCR (Supplementary Table S7). Two progeny, 17 and 19, contained the mutant dspA allele yet did not display the mutant phenotype, which might be due to the effect of other genetic rearrangements taking place during crossing. The two T-DNA inserts of mutant AU2_33 displayed genetic linkage, co-segregating in 19 of 20 progeny.
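Co-segregation tallies like the 18-of-20 and 19-of-20 figures above come from comparing marker and phenotype calls across the progeny. A minimal sketch; the progeny records are made up to reproduce an 18-of-20 tally and are not the real genotyping data:

```python
def cosegregation(progeny):
    """Count progeny in which the T-DNA marker call and the mutant
    phenotype agree. Each record is (has_tdna, has_mutant_phenotype)."""
    agree = sum(1 for tdna, phenotype in progeny if tdna == phenotype)
    return agree, len(progeny)

# Hypothetical progeny: 9 mutant-with-T-DNA, 9 wild-type-without,
# and 2 discordant individuals, giving 18 of 20 in agreement.
progeny = [(True, True)] * 9 + [(False, False)] * 9 + [(True, False), (False, True)]
print(cosegregation(progeny))  # (18, 20)
```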
There was recombination between the T-DNAs and the mating type locus, demonstrating that the progeny were the result of meiotic events. Intriguingly, all of the progeny from this cross were sensitive to hygromycin. The hph Gene, Conferring Hygromycin Resistance, Is Mutated by Repeat-Induced Point Mutation (RIP) in the Progeny of a Cross Between AU2_33 and CBS 101075 None of the 20 progeny resulting from a cross between mutant AU2_33 and CBS 101075 showed a hygromycin resistance phenotype, despite the T-DNA construct being present in 10 of these progeny as demonstrated by PCR analysis (Figures 7A,B). Therefore, the coding region of the hph gene, which confers resistance to hygromycin, was sequenced in one of the progeny (progeny 3). The open reading frames of the hph gene amplified from both T-DNA insertion copies revealed substitution mutations characteristic of RIP (Figure 7C). A 780 bp region was sequenced, and 141 (18.1%) and 156 (20%) nucleotides were mutated in the two copies, respectively. The mutations were all C to T or G to A. RIPCAL analysis revealed a bias toward CpA to TpA dinucleotide mutations and the complementary TpG to TpA mutations that are characteristic of RIP in other fungal species such as N. crassa (Figure 7D). The P. variotii Genome Features Evidence of RIP The genome sequences of P. variotii have a bimodal GC content, containing long stretches of approximately 50% G:C interspersed with relatively shorter regions of approximately 20% G:C. The example of the first 450,000 bp of CBS 144490 contig 49 is given in Figure 8A. Overall, these AT-rich regions constitute approximately 8.49% of the CBS 101075 assembly and 13.8% of the CBS 144490 assembly (Figure 8B). It should be noted that because of the different sequencing strategies (long reads from Pacific Biosciences vs. 100 nucleotide reads from Illumina technology), these proportions can only be compared broadly. One mechanism by which AT-rich regions can be created is RIP (Testa et al., 2016), which has been shown to defend the N.
crassa genome against transposons (Kinsey et al., 1994). We hypothesize that the AT-rich regions identified in the P. variotii genome are due to RIP. In support of this hypothesis, a putative Tf2-type retrotransposon, Tn123, on contig 123 (nucleotide positions 231,587-238,677) of CBS 144490 was identified via BLASTx searches (Altschul et al., 1990). BLASTn comparison of this sequence against the two P. variotii genomes revealed sequence similarity between this transposon and a number of the AT-rich regions in both genomes. Furthermore, there was a clear pattern of C → T and G → A mutations that are characteristic of RIP. RIPCAL analysis showed that most of the RIP-like mutations targeted CpA dinucleotides, which is also highly characteristic of RIP [(Hane et al., 2015); Figure 8C]. This strongly suggested that RIP mutation of retrotransposons including, but not limited to, Tn123 is responsible for the formation of at least some of these AT-rich regions.

FIGURE 5 | Strain AU2_33 has a growth and sporulation defect due to mutation of the dspA gene. (A) Sporulation on CV8 was delayed in the mutant AU2_33 at both 3 and 14 days after growth on clear V8 juice medium compared to the wild type CBS 144490 and two complemented strains. The AU2_33 mutant also had impaired growth on minimal medium (MM). (B) PCR analysis of the genotypes of the AU2_33 mutant, wild type and two complemented isolates. (C) The T-DNA insertion is located in the first exon of the dspA gene. Green represents sequence of the T-DNA and red represents sequence lost from the genome in the mutant. (D) Co-localization of mCherry-tagged DspA protein and mitochondrially localized GFP: (i) red fluorescence from the DspA-mCherry fusion, (ii) green fluorescence of citrate synthase-GFP, (iii) blue fluorescence due to calcofluor white staining of the cell wall, and (iv) the merged image. Scale bar = 10 µm.

P. variotii Has a Reduced Expansion of Gene Families, Including Polysaccharide Degradation Related CAZy Genes, Relative to Other Eurotiales Species Given the genomic and experimental evidence for the active occurrence of RIP in P. variotii, we assessed whether a consequence is the limited expansion of gene families in this species. A comparison of P. variotii with other Eurotiales species shows that these strains have the fewest genes (Figure 9). We compared the bi-directional similarity of P. variotii genes against the second closest BLAST match in the species' own genome. Consistent with RIP, P. variotii has fewer genes with close similarity than the comparison species in the genera Talaromyces, Penicillium, Saccharomyces, Schizosaccharomyces and most Aspergilli. However, several Aspergillus species including A. clavatus also had few similar genes, possibly indicating past or current RIP in these species (Figure 10A). Comparative cluster data obtained through the MycoCosm portal (Grigoriev et al., 2014) show that P. variotii, along with two other species in the family Thermoascaceae (P. formosus and Thermoascus aurantiacus), contains considerably fewer genes in the 100 most populous gene clusters (Figure 10B and Supplementary Table S8). For example, examination of secondary metabolite gene clusters shows that the three species in the Thermoascaceae contain fewer secondary metabolite clusters than other species in the Eurotiales (Figure 10C). P. variotii is particularly depleted in genes encoding polyketide synthases, with strain CBS 101075 containing only six such genes compared to as many as 28 in some of the Aspergillus species examined. Two other striking reductions in gene family numbers were seen for amino acid permeases (cluster 12) and major facilitator superfamilies (clusters 10 and 13).
The one exception to the reduction in gene numbers in the Thermoascaceae species examined was an expansion in genes encoding methyltransferases (cluster 11). Comparison of the CBS 144490 genome with that of CBS 101075 revealed a high level of similarity; however, CBS 144490 contains an additional 1.2 Mb of sequence, some of which is made up of repetitive elements, while being estimated to have 40 fewer genes overall (Supplementary Table S4). A comparison between the genomes revealed 372 genes unique to CBS 144490 and 450 genes unique to CBS 101075, not found in the other strain. No examples of recent DNA duplications were observed in either genome. In many cases, genes unique to one or the other strain were found in clusters of varying size. The most striking example is the presence of scaffold 108 (151 kb) in CBS 144490 that is absent from CBS 101075. This region includes 52 predicted genes, including a putative non-ribosomal peptide synthase. However, despite these differences, to date no in vitro growth differences have been observed for the two strains. Evidence for the lack of expansion of gene families in P. variotii can be seen in the genes encoding CAZys (for plant polysaccharide degradation), as P. variotii had the fewest such genes (74 genes) of all tested Eurotiales species (Figure 10D and Supplementary Dataset S1). In total, this number is most similar to that of Aspergillus glaucus (92 genes), while the significantly higher CAZy gene numbers in all of the other species suggest a better capability for plant polysaccharide degradation. Assimilation Capabilities Are Reduced in P. variotii for Some Carbohydrate Sources To assess if the reduction in gene family numbers has a consequence on biology, the growth of P. variotii on different carbon sources was compared with that of other Eurotiales species. Overall, P.
variotii is less able to use these plant-derived compounds as a carbon source than most other species (Figure 11). The growth profile of P. variotii is also most similar to that of A. glaucus, consistent with the genome content of CAZys. Growth of P. variotii is particularly poor on cellulose, xylan and inulin, which correlates with the very low number of genes encoding cellulolytic (8 genes), xylanolytic (30 genes) and inulinolytic (2 genes) enzymes compared to other species in the Eurotiales. Talaromyces marneffei has no inulinolytic genes and A. nidulans has the same number as P. variotii: these species also grow very poorly on inulin, as do several others (Supplementary Dataset S1 and Figure 11). Interestingly, P. variotii can produce high levels of invertase when cultivated on agricultural and industrial residues (Job et al., 2010). Growth on commercial cellulose (Avicel) is challenging for most fungi, so it is hard to use it to draw comparative conclusions about species differences, although reasonable radial growth was observed for P. variotii. However, P. variotii has been shown to produce a glucose-tolerant β-glucosidase (Job et al., 2010), indicating its ability to release glucose molecules from short oligosaccharides. A clear difference in growth between the species is seen on xylan, but this does not fully correlate with the number of xylanolytic genes. Growth is poor for P. variotii and A. glaucus, which have a low number of xylanolytic genes, but also for A. wentii, which has a number (66 genes) similar to species that grow well on this substrate. In contrast, good growth was observed for T. marneffei (47 genes), which has a similar number of genes as A. glaucus (41 genes). However, a P. variotii strain showing high xylanase production has been described (de Laguna et al., 2015), suggesting that even with a few genes those enzymatic activities may reach high levels.
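The mismatch noted above between xylanolytic gene counts and actual growth on xylan can be tabulated directly. A minimal sketch using the gene counts quoted in the text; the growth scores are illustrative placeholders, not measured values:

```python
# Xylanolytic gene counts taken from the text; growth scores (0-3) are
# illustrative placeholders chosen to mirror the qualitative observations.
xylanolytic_genes = {"P. variotii": 30, "A. glaucus": 41, "T. marneffei": 47, "A. wentii": 66}
growth_on_xylan = {"P. variotii": 1, "A. glaucus": 1, "T. marneffei": 3, "A. wentii": 1}

# Rank species by gene count and by growth: the orderings need not agree,
# which is exactly the point made in the text.
by_genes = sorted(xylanolytic_genes, key=xylanolytic_genes.get, reverse=True)
by_growth = sorted(growth_on_xylan, key=growth_on_xylan.get, reverse=True)
print(by_genes[0], by_growth[0])  # A. wentii T. marneffei
```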
Paecilomyces variotii grows relatively well on guar gum (galactomannan) even though it has a low number of mannanolytic genes in its genome (10 genes), similar to A. glaucus. Both species grow better on guar gum than on xylan, suggesting that their limited enzyme system is sufficient for degradation of galactomannan. Neither species contains a known endomannanase, but both contain exo-enzymes, β-mannosidases and α-galactosidases, that can release the monomeric sugars from galactomannan. A. niger and P. subrubescens, which both have a much more extensive mannanolytic gene system, including several endomannanase encoding genes, grow much better on guar gum. Similarly to guar gum, P. variotii showed good growth on apple pectin despite having the lowest number of pectinolytic genes (25 genes) of the tested species. Exo-polygalacturonases have also been purified from P. variotii cultures, further demonstrating its pectinolytic capability (de Lima Damásio et al., 2010; Patil et al., 2012). This is in contrast to the poor growth of A. clavatus, which has a reduced number of pectinolytic genes (43 genes) compared to most other Aspergilli, but still almost twice as many as P. variotii. The better growth of P. variotii on apple pectin could be explained by a higher number of GH28 pectin hydrolases (6 genes) compared to A. clavatus (3 genes). This may also explain the poorer growth of A. glaucus on this substrate, as while it has a similar number of pectinolytic genes as P. variotii, it only contains two genes encoding GH28 enzymes.

FIGURE 9 | Paecilomyces variotii has fewer genes than many other species in the Eurotiales. Phylogenetic relationships between the Eurotiales species, with two yeast species as outgroups, were defined from a comparison of 3,374 single copy gene orthologs. The graph shows total numbers of genes in each species and the distribution of the homologs.
Paecilomyces variotii has been shown to produce thermostable glucoamylase and α-amylase with potential in industrial applications (Michelin et al., 2008; Michelin et al., 2010). Growth of P. variotii on starch was similar to most other species. Despite a somewhat reduced amylolytic gene set (16 genes), P. variotii has all the enzymatic activities for degradation of starch, which likely explains the growth observed on this polysaccharide.

DISCUSSION

The genus Paecilomyces has received limited attention for functional genomics, despite its role in industry, human disease, and as a commonly encountered saprobe found around the world. This research has generated high quality genome sequence resources, demonstrates that genetic segregation analysis is possible, and shows that gene disruption by either targeted or reverse genetics is highly feasible for gene discovery. Several key points arising from this work are described in the following sections. Using P. variotii, new properties associated with fungal genes of unknown function have been defined. For example, proteins with a DUF1212 domain are widely conserved in fungi and include Prm10 in S. cerevisiae and NCU00717 in N. crassa. In S. cerevisiae, the gene was found to be up-regulated threefold in response to pheromone and predicted to contain five transmembrane segments (Heiman and Walter, 2000). However, the biological function of these proteins has not been elucidated and no phenotypes have been found in gene disruption strains. This study reports a pigmentation phenotype associated with disruption of the DUF1212 homolog in P. variotii (Figure 4). Given that the PRM10 gene is up-regulated in response to pheromone in S. cerevisiae (Heiman and Walter, 2000), we assessed whether the protein is required for mating in P. variotii. Crosses between two isolates carrying the DUF1212 mutant allele produced cleistothecia, sexual spores and viable progeny (Supplementary Figure S6). This suggests that the PRM10 homolog (prmJ) is not essential for the sexual cycle of P. variotii. The ability of the A. niger prmJ homolog to complement the pigmentation phenotype of the P. variotii mutant strain implies that the function of this protein is conserved between the two genera (Figure 6). Identification of a phenotype associated with a Domain of Unknown Function (DUF) protein that has a conserved function in a related species suggests that P. variotii is a species in which to study this protein family in greater detail. A second insertional mutant investigated in detail contained a T-DNA in the dspA gene encoding a mitochondrial carrier family protein.

FIGURE 10 | Paecilomyces variotii and related Thermoascaceae have a reduced expansion in gene families. (A) A limited number of highly similar gene duplicates are observed in P. variotii compared to other Eurotiales. For each genome, a self BLASTp was conducted to identify orthologs by reciprocal best hit via BLAST, then the fraction of orthologs at various identity levels were plotted. x-axis: percent identity, y-axis: lineage, z-axis: fraction of all orthologs at a given % identity. Lineages are colored at the genus level, green: Paecilomyces, purple: Talaromyces, blue: Penicillium, dark red: Aspergillus, red: Saccharomyces, yellow: Schizosaccharomyces. (B) Three Thermoascaceae species have relatively fewer genes in the 100 most populous gene clusters in the comparative cluster analysis obtained through MycoCosm. Similarly, these species showed a more restricted set of (C) secondary metabolite genes and (D) genes encoding Carbohydrate-Active enZYmes (CAZys).

FIGURE 11 | Growth of P. variotii compared to other Eurotiales species on different plant polysaccharides as the sole carbon source, compared to glucose. Petri dishes containing minimal medium and differing carbon sources were inoculated with different species of Eurotiales and growth photographed after 2-5 days depending on the species.
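The reciprocal-best-hit comparison described in the Figure 10A caption can be sketched as follows. The hit tables below are hypothetical stand-ins for parsed BLASTp tabular output, not the actual pipeline used in this study.

```python
# Sketch of reciprocal-best-hit (RBH) ortholog detection between two
# proteomes A and B, assuming hits are (query, subject, bitscore) tuples
# parsed from BLASTp tabular output. Illustrative only.

def best_hits(hits):
    """Map each query to its single highest-scoring subject."""
    best = {}
    for query, subject, score in hits:
        if query not in best or score > best[query][1]:
            best[query] = (subject, score)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(hits_ab, hits_ba):
    """Pairs (a, b) where a's best hit is b AND b's best hit is a."""
    best_ab = best_hits(hits_ab)
    best_ba = best_hits(hits_ba)
    return sorted((a, b) for a, b in best_ab.items() if best_ba.get(b) == a)

# Toy example: a1/b1 are reciprocal, a2/b2 are not (b2 prefers a3)
hits_ab = [("a1", "b1", 200.0), ("a1", "b2", 90.0), ("a2", "b2", 150.0)]
hits_ba = [("b1", "a1", 210.0), ("b2", "a3", 160.0), ("b2", "a2", 120.0)]
print(reciprocal_best_hits(hits_ab, hits_ba))  # [('a1', 'b1')]
```

In practice the ortholog pairs, binned by percent identity, give the distributions plotted in Figure 10A.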
DspA is localized to the mitochondria and mutation of the dspA gene delays sporulation in a manner dependent on the medium composition (Figure 5). As in the case of the AU1_63 strain, the phenotype of the AU2_33 (dspA mutant) strain is influenced by the composition of the medium. On a minimal medium, growth of the strain was highly restricted (Figure 5). This provides a possible direction for future studies into the function of this putative mitochondrial carrier protein. That is, if a compound can be found that, when supplemented into the medium, restores the phenotype of this mutant, that compound might represent the substrate of the carrier. Despite their annotation, not all mitochondrial carrier family proteins are localized to the mitochondria. For example, proteins in this family have been found localized to chloroplasts (Palmieri et al., 2009) and peroxisomes (Jank et al., 1993). Thus, the localization of mitochondrial carrier family proteins cannot be predicted and must be determined experimentally. We demonstrate, through co-localization with a known mitochondrial protein, that the DspA protein in P. variotii has a mitochondrial localization. An unexpected finding from the genetic segregation analysis of mutant AU2_33, which contains two T-DNAs, was that all of the progeny were hygromycin sensitive despite half of the progeny encoding the hygromycin resistance gene when they were tested by PCR (Figure 7). We traced this loss of resistance to mutation of the hph gene by RIP. Thus, this analysis in P. variotii provides important evidence that Eurotiales fungi have active RIP mechanisms. Analysis of the P. variotii genome sequence shows evidence for past RIP activity, both in its bi-modal G:C content and, more conclusively, in the presence of a putative retrotransposon, Tn123, some copies of which appear to have been strongly affected by RIP (Figure 8).
In the majority of genomes analyzed for past RIP activity there is a dinucleotide profile that is biased toward RIP-like CpA mutations (Hane et al., 2015). Analysis of the Tn123 sequences also indicated a strong CpA bias, strengthening our conclusion that the mutated copies of this transposon sequence have been created through RIP mutation (Figure 8). The predicted consequence of RIP is a limitation in the expansion of genes by gene duplication. Evidence for this comes from the analysis of gene families when compared with other ascomycete species. As illustrated in Figures 9, 10 and Supplementary Table S8, P. variotii consistently has the lowest number of genes other than N. crassa, where RIP has been demonstrated to occur, and Thermoascus aurantiacus, where little is known about its genetics. While one predicted consequence of RIP should be a limitation in the expansion of gene families, this is not always the case: in this analysis the species with the largest number of families, Nectria haematococca, also has an active RIP process (Coleman et al., 2009). Analysis of the ability of P. variotii to degrade polysaccharides suggests that it has a much smaller gene set related to the degradation of plant polysaccharides compared to most of the other tested Eurotiales species (Figure 10D). One interpretation of this finding is that RIP mutation has reduced gene duplication and thus the expansion of these gene families. This may represent evidence of the hypothesized evolutionary cost associated with the genome protection afforded by RIP (Galagan and Selker, 2004). As a flipside of this evolutionary cost, it has also been hypothesized that certain loci within repeat-rich compartments may undergo accelerated evolution; this remains to be experimentally validated.
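The CpA-biased dinucleotide signature of RIP is commonly quantified with Margolin-style indices (TpA/ApT rises and (CpA+TpG)/(ApC+GpT) falls in RIP-affected sequence). The sketch below illustrates these standard indices; it is not necessarily the exact procedure applied to Tn123.

```python
# Minimal sketch of the standard dinucleotide RIP indices; the toy
# "ripped" sequence simulates C->T transitions at CpA sites.
from collections import Counter

def dinucleotide_counts(seq):
    seq = seq.upper()
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

def rip_indices(seq):
    c = dinucleotide_counts(seq)
    tpa_apt = c["TA"] / max(c["AT"], 1)                       # rises with RIP
    composite = (c["CA"] + c["TG"]) / max(c["AC"] + c["GT"], 1)  # falls with RIP
    return tpa_apt, composite

normal = "ACGTCAGTCAACGGTCACGT"
ripped = normal.replace("CA", "TA")  # crude in-silico RIP of one strand
print(rip_indices(normal), rip_indices(ripped))
```

Scanning such indices in windows along the genome is one way the bi-modal G:C signal and RIP-affected repeat copies can be flagged.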
Despite recent advances, not least the rapid rate of genome sequencing (Grigoriev et al., 2014), only a minute fraction of the millions of fungal species believed to exist (Blackwell, 2011) have been studied at the genetic level. This is because the necessary combination of tools required for functional biology, i.e., a genome sequence, transformation protocols, targeted gene mutations and genetic crosses, has been developed in relatively few species. However, research conducted beyond the current model organisms is vital to gain a more comprehensive understanding of fungal biology. Paecilomyces variotii is one of the vast number of fungal species for which techniques for genetic manipulation have not previously been reported, despite its relevance to human activities. In this study, we have produced genome assemblies for two strains as well as developing transformation protocols, efficient targeted gene disruption using Agrobacterium, and convenient genetic crosses. Considering that PEG-mediated protoplast transformation is commonly used in several species of the Eurotiales (Nara et al., 1993; de Bekker et al., 2009; Arentshorst et al., 2012; Oakley et al., 2012; Weyda et al., 2017), it is likely that this protocol could also be adapted to P. variotii. Taken together, P. variotii can now be considered a convenient model for studying aspects of the diverse biology of the Eurotiales (de Vries et al., 2017), and in particular the family Thermoascaceae, including studying RIP. Future work will undoubtedly uncover more novelties or shared features in this ubiquitous organism.

AUTHOR CONTRIBUTIONS

AU, SM, MM, AW, and AI performed the experiments. AU, SM, GH, SM, JP, AL, KB, IG, and AI were involved in genome sequencing, assembly and annotation. AU, SM, JH, IG, and AI analyzed the data. AU, MM, SM, RdV, IG, and AI designed the experiments, discussed the results, and wrote the manuscript. All authors provided final approval for the manuscript.
Compact sources in the Bologna Complete Sample: high resolution VLA observations and optical data

Among radio galaxies, compact sources are a class of objects not yet well understood, and most of them cannot be included in the classical populations of compact radio sources (flat spectrum AGN or compact steep spectrum sources). Our main goal is to analyze the radio and optical properties of a sample of compact sources and compare them with FR I/FR II extended radio galaxies. We selected in the Bologna Complete Sample a sub-sample of compact sources, naming it the C BCS sample. We collected new and literature sub-arcsecond resolution multi-frequency VLA images and optical data. We compared total and nuclear radio power with optical emission line measurements. The [OIII] luminosity - 408 MHz total power relation found in High and Low Excitation Galaxies, as well as in young (CSS) sources, holds also for the C BCSs. However, C BCSs present higher [OIII] luminosity than expected at a given total radio power, and they show the same correlation as Core Radio Galaxies, but with a higher radio power. C BCSs appear to be the high power tail of Core Radio Galaxies. For most of the C BCSs, the morphology seems to be strongly dependent on the presence of dense environments (e.g. clusters or HI-rich galaxies) and on a young age or restarted radio activity.

INTRODUCTION

According to their radio power and morphologies, radio galaxies are classified as FR I and FR II (Fanaroff & Riley 1974) and compact sources. FR I and FR II are extended sources on kpc up to Mpc scales. Their properties have been analyzed by several authors (e.g. Fanaroff & Riley 1974, Laing et al. 1983, Ledlow & Owen 1996). Recently, Capetti et al. 2011 (and references therein) discussed the correlation between optical and radio properties in the light of unification models and accretion properties.
Sources with a projected linear size smaller than 15-20 kiloparsec are usually defined as compact sources and can be high or low power radio sources. High radio power sources can have a flat or steep spectrum. Flat spectrum sources are small because of projection and relativistic effects, being identified (in agreement with unified models, see e.g. Urry & Padovani 1995) as objects dominated by the emission of a relativistic jet oriented at a small angle with respect to the line of sight. According to their radio power and optical properties, they are classified as FSRQ (Flat Spectrum Radio Quasars) or BL Lac objects. High power, compact steep spectrum (CSS) sources can be small because they are young sources (e.g. Stanghellini et al. 2005 and references therein). CSS sources are not beamed sources and they are likely to be young radio galaxies that could evolve into large radio objects, FR I/FR II (see Fanti et al. 1995; Readhead et al. 1996; O'Dea 1998, for reviews, but also van Breugel et al. 1984). Strong support for this scenario comes from the measurements of proper motions of the hot spots of some of them (Polatidis & Conway 2003, Giroletti & Panessa 2009), with separation velocities of 0.1-0.4 h^-1 c, and thus small kinematic ages which are in agreement with the spectral ages derived from flux density measurements (Murgia et al. 1999; Murgia 2003). Kunert-Bajraszewska et al. 2010 selected and studied the properties of low power CSS sources with flux density < 70 mJy at 1.4 GHz and α(1.4-4.85 GHz) > 0.7. These authors suggest the existence of a large, poorly known population of short-lived objects. Some of these could be precursors of large-scale FR I galaxies. To better investigate the nature and properties of compact radio galaxies, we selected from the Bologna Complete Sample (BCS, Giovannini et al. 2001, Liuzzo et al. 2009b) all sources with a projected linear size smaller than 20 kpc.

Send offprint requests to: E. Liuzzo, e-mail: liuzzo@ira.inaf.it
We will name this sub-sample the C BCS (Compact BCS sources); it is composed of 18 objects. This complete sub-sample is selected at low frequency and therefore should present no bias with respect to source orientation, possible beaming effects or spectral index. We note that most sources in our sample show a moderately steep spectral index, and could not be included in samples such as the one presented by Kunert-Bajraszewska et al. 2010. The radio power of C BCS sources is low with respect to powerful FR II or FSRQ, but in the same range as giant FR I radio galaxies (10^23 - 10^26 W/Hz at 408 MHz). Some of them were previously analyzed by us in the radio band (Giroletti et al. 2005b). We present in this paper radio data for the 5 remaining ones with new high resolution Very Large Array (VLA) observations. We also discuss their optical properties, showing for the first time their optical spectra, since the emission line analysis is fundamental in order to understand the nature and properties of these objects. Moreover, we include in this paper data for a few extended BCS sources in order to usefully compare the C BCS radio and optical properties with different radio galaxy types (e.g. FR II and FR I extended sources). The layout of the paper is the following:
- in Sect. 2, we describe radio and optical data for the C BCS sample, in particular the new high resolution VLA images and TNG (Telescopio Nazionale Galileo) observations;
- in Sect. 3, we present optical results for all our targets;
- in Sect. 4, we analyze the relation between the optical and radio emission;
- in Sect. 5, we report notes on single sources;
- in Sect. 6, we discuss our main results together with literature ones;
- in Sect. 7, we summarize our main conclusions.

RADIO AND OPTICAL DATA

Table 1 summarizes references for radio and optical data of all C BCS; N indicates sources for which new data are presented for the first time in this paper.
We also added a few extended BCSs with new data presented here, to increase the statistics in the comparison between compact and extended sources. The new high resolution VLA observations were obtained in two observing runs on 2006 March 11 and 2006 April 04. The array was in A configuration, and the observing frequencies were 8.4 GHz and 22 GHz. Standard observing schedules for high frequency observations were prepared, including scans to determine the primary reference pointing, and using a short (3 s) integration time and fast switching mode (180 s on source, 60 s on calibrator) for K band (22 GHz) scans. Post-correlation processing and imaging were performed with the NRAO (National Radio Astronomy Observatory) Astronomical Image Processing System (AIPS). Parameters of the natural uv-weighted images are reported in Table 2. Our new VLA images for resolved sources are shown in Sect. 5. To separate the different source components and measure their flux densities and dimensions, we used the AIPS task JMFIT, which performs a least-squares fit of the image with Gaussian components. Taking into account also spectral index considerations, we identified the core as the unresolved (point-like, see Tab. 5) component having the highest flux density in the source. Where given, the spectral index maps are made with AIPS using the same uv-range. Typical errors for flux density measurements are ∼3% at 8.4 GHz, and ∼10% at 22 GHz.

References for Table 1 (partial list): Giroletti et al. 2003; 4: Giroletti et al. 2005a; 5: Giroletti et al. 2005b; 6: Giroletti et al. 2006; 7: Liuzzo et al. 2009a; 8: Massaro et al. 2009; 9: Liuzzo et al. 2010; 10: Crawford et al. 1999; 11: Morganti et al. 1992; 12: Capetti et al. 2010; 13: Ho et al. 1997; 14: Buttiglione et al. 2009; 15: SDSS DR7, Abazajian et al. 2009; 16: Anton 1993

Optical Data

We collected the optical information available for all C BCS sources, presenting new optical spectra and completing them with emission line measurements from the literature.
In Tab

Data analysis

The data were reduced using the LONGSLIT package of NOAO's (National Optical Astronomy Observatory) IRAF reduction software. A bias frame was subtracted from each frame, then the flat field correction was applied to remove the pixel-to-pixel gain variations. After that, the wavelength calibration, the optical distortion corrections and the background subtraction were applied. One-dimensional spectra were extracted by summing in the spatial direction over an aperture corresponding to the nuclear part of the source: we extracted and summed the 6 pixel rows closest to the center of the spectrum, corresponding to 1.65′′. Lastly, the relative flux calibration was made using spectro-photometric standard stars observed during each night. In Fig. 1 we present the calibrated optical spectra of the C BCS sources. In order to properly measure the emission line intensities, we needed to subtract the stellar emission of the host galaxies. Before removing the stellar continuum, we corrected for reddening due to the Galaxy (Burstein et al. 1982, 1984) using the extinction law of Cardelli et al. 1989. The galactic extinction E(B-V) used for each object was taken from the NASA Extragalactic Database (NED). The adopted method consists of modeling the nuclear spectra with a single stellar population taken from the Bruzual et al. 2003 library and then subtracting the best fit model spectrum from the nuclear one. The templates assume a Salpeter Initial Mass Function (IMF) formed in an instantaneous burst, with solar-metallicity stars in the mass range 0.1 ≤ M ≤ 125 M⊙.

IRAF (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. It is available at http://iraf.noao.edu/
The parameters free to vary independently of each other in order to obtain the best fit are the stellar age (from 1 to 13 Gyr), the metallicity (from 0.0008 to 0.5 solar metallicity), the normalization of the model, the velocity dispersion, and the continuum emission from the AGN (Active Galactic Nucleus) and its slope. Even if this method of stellar removal also gives an estimate of the velocity dispersion, at the resolution of our spectra this value is dominated by the instrumental broadening. The spectral regions chosen for the fit are centered on the Hβ and Hα emission lines, with a range of 3600 - 5500 Å for Hβ and 5700 - 7100 Å for Hα. The emission lines are excluded from the fit, since they are affected by the nuclear emission more strongly than by the stellar one. Other small regions are excluded because of telluric absorption, cosmic rays or other kinds of contamination. At the end of this operation, as a result, we obtained the non-stellar nuclear emission produced by the AGN activity. In Fig. 2 we show, as an example of the adopted procedure, the spectra of 2 C BCS sources. The source spectra are in solid lines; the top spectra are before the stellar removal and the bottom spectra are the results of the host galaxy stellar population subtraction. The dotted line through the top spectra indicates the single stellar population model. The dashed line across the bottom spectra indicates the zero flux level. These spectra have a quite flat continuum emission with the overlap of emission lines produced by the photoionised gas.

Diagnostic diagrams

Diagnostic diagrams are constructed from pairs of observed line ratios which reveal information on the ionizing continuum, ionization parameter, gas temperature and other physical properties of the emission line regions. According to Heckman 1980, Baldwin et al. 1981, Kewley et al.
2006 and other works, star-forming galaxies are separated from AGNs, and AGNs are divided into High Excitation Galaxies (HEG) and Low Excitation Galaxies (LEG). These ratios are chosen so that the lines considered are very close in wavelength, to avoid reddening and extinction problems (strongest in the bluer part of the spectrum), and so that the ratios can be measured even if there are uncertainties in the flux calibration of the spectra. Moreover, the chosen lines are often the strongest features of the optical spectra, easily found also in low luminosity galaxies. In Fig. 3 we show the three diagnostic diagrams, the first being ([OIII]λ5007/Hβ) versus ([NII]λ6583/Hα). These line ratios are used to separate the star-forming galaxies from AGNs, since [NII]/Hα is more sensitive to the presence of a low level AGN than other lines due to its sensitivity to metallicity, while the [OIII]/Hβ line ratio is sensitive to the ionization parameter of the gas (the amount of ionization transported by the radiation moving through the gas). The ([OIII]λ5007/Hβ) versus ([SII]λ6716λ6731/Hα) and the ([OIII]λ5007/Hβ) versus ([OI]λ6364/Hα) diagnostic diagrams are more sensitive to the hardness of the ionizing radiation field, dividing the AGNs into two branches: high ionization sources (i.e. High Excitation Galaxies, HEG) lie on the upper branch, low ionization sources (i.e. Low Excitation Galaxies, LEG) lie on the lower branch. In Fig. 3, B2 0708+32B, B2 0722+30, B2 1257+28, B2 1855+37, B2 0331+39, B2 0844+31, B2 1101+38 and B2 1512+30 are not considered due to upper limits or undetected lines in at least one of the diagnostic ratios. The star-forming, HEG and LEG regions are separated by solid lines according to Kewley et al. 2006. The region between the dashed line and the solid line in the Log([OIII]λ5007/Hβ) versus Log([NII]λ6583/Hα) diagram indicates the region of composite galaxies, sources with both star-forming and nuclear activity.
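As an illustration of how such demarcation curves are applied, the sketch below classifies a point in the [NII] BPT plane using the widely used Kauffmann et al. 2003 and Kewley et al. 2001 curves; note that the exact HEG/LEG boundaries adopted in this paper (Kewley et al. 2006 and the 3CR-derived separation) are different.

```python
def classify_nii_bpt(log_nii_ha, log_oiii_hb):
    """Classify a point in the log([NII]/Hα) vs log([OIII]/Hβ) plane
    using literature demarcations (illustrative; not the paper's own
    HEG/LEG separation)."""
    def kewley(x):      # Kewley et al. 2001 maximal-starburst line (x < 0.47)
        return 0.61 / (x - 0.47) + 1.19
    def kauffmann(x):   # Kauffmann et al. 2003 empirical SF boundary (x < 0.05)
        return 0.61 / (x - 0.05) + 1.30
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann(log_nii_ha):
        return "star-forming"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley(log_nii_ha):
        return "composite"
    return "AGN"

# A point above both curves lands in the AGN region
print(classify_nii_bpt(0.1, 0.3))  # AGN
```

The HEG/LEG split within the AGN branch is then made in the [SII] and [OI] diagrams, as described above.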
We also plotted in color the HEG (grey) and LEG (cyan) regions occupied by the radio-loud AGN with redshift z < 0.3 taken from the 3CR (Third Cambridge Catalog of Radio Sources) sample. The red dotted lines represent the HEG/LEG separation derived for the 3CR. The 3CR catalog of radio sources is characterized by unbiased selection criteria with respect to optical properties and orientation, and it spans a relatively wide range in redshift and radio power, covering the whole range of behavior of radio-loud AGN. BCS sources with enough detected emission lines to place on the diagnostic diagrams (see Tab. 4) are shown as colored dots according to their classification. The position of the C BCS sources on the diagnostic diagrams indicates that the majority of our sources belong to the LEG group. The only exceptions are B2 0648+27 and 3C 305, located in the HEG region. We note that B2 1144+35, a BCS FR I radio galaxy, is also present in this region. We also labeled the source B2 0149+35, a source in a low activity phase, discussed in detail in Sect. 5.1.2.

OPTICAL-RADIO CORRELATION

The correlation between AGN optical narrow emission lines and radio power has been established for a long time (e.g. Baum et al. 1989a, 1989b). This correlation is explained by a common energy source for both the optical lines and the radio emission: the isotropically emitted radiation from the active nucleus ionizes the optically emitting gas, and the radio luminosity is determined by the properties of the central engine. As shown by Morganti et al. 1997, the same optical-radio relation holds both for extended sources and compact sources. They compared the [OII]λ3727 Å and [OIII]λ5007 Å emission line luminosities of a sample of CSS sources with the values found for extended sources of similar radio power and redshift.
They found a very intriguing result: in the correlation between the [OII]λ3727 Å and radio luminosities, both compact and extended radio sources lie on the same linear correlation; instead, looking at the [OIII]λ5007 Å - radio luminosities, compact sources tend to lie at the lower side of the [OIII]λ5007 Å luminosity. In Figs. 4 and 5, we plot, respectively, the [OIII]λ5007 Å luminosity versus the 408 MHz total radio luminosity and versus the 5 GHz radio core luminosity. We have included C and E BCS sources with available optical data, comparing our results with different samples. B2 0708+32B and B2 0331+39 are not included because we have no information on their [O III] emission line flux. As we used a combination of our TNG, SDSS and other telescope spectra, a potential issue concerning our optical spectroscopic data is the difference in the spatial size of the associated spectral aperture. This is due to the different angular sizes of the apertures and also to the range of redshift covered by our sample. However, we verified that no link is present between the instrumental setup (or redshift) and the location of the various objects in Figs. 4 and 5. We added samples of more powerful compact sources (orange crosses: Gelderman et al. 1994; magenta pluses: Morganti et al. 1997), in order to verify the presence of an optical-radio correlation. Moreover, we superimposed the 3CR LEG and HEG sources (grey and cyan small squares, Buttiglione et al. 2010) to compare compact sources with extended ones. Finally, we also plot (green diamonds) the Core Radio Galaxies (CoRG) sample of Baldi & Capetti 2010 to compare our sample with these peculiar faint compact sources. From Fig. 4, we note that C BCS sources show a lower total radio power than the LEG objects discussed by Buttiglione et al. 2010, or, equivalently, that their optical line emission is higher than expected from their radio power.
C BCS sources show a significantly higher optical luminosity with respect to the best linear fit obtained by Buttiglione et al. 2010 for HEG and LEG sources (see cyan and green dashed lines). The CoRGs studied by Baldi & Capetti 2010 follow the same trend as the C BCS. It appears that core radio galaxies and C BCS have in general a low total radio power with respect to HEG, LEG and CSS radio sources, but their optical line luminosity is a factor of two higher with respect to the correlation with the total radio power. We also compared the [OIII]λ5007 Å luminosity with the 5 GHz radio core luminosities for all the samples discussed before (Fig. 5). In this diagram CoRGs as well as C BCS sources lie in between the HEG and LEG best linear fits. Our C BCS sources seem to be intermediate objects between the 3CR LEG + CSS sources and the CoRGs of Baldi & Capetti 2010. The nuclear emission of 3CR HEG sources is definitely brighter in [OIII] luminosity, while C BCS at a given core radio power are near the linear best fit of LEG sources, but on average C BCS are optically brighter. We note that CSS sources also do not follow the two linear fits, suggesting a possible common fit of CSS, C BCS and Core Radio Galaxies. It seems that they follow an independent track with respect to HEG and LEG sources. Kunert-Bajraszewska et al. 2010, discussing a sample of Low Luminosity Compact Objects, suggest two parallel evolutionary tracks for HEG and LEG sources, evolving from GPS (Gigahertz Peaked Sources) to CSS to FR. Our diagram, thanks to the addition of low power Core Radio Galaxies and C BCS sources selected at low frequency and without any constraint on their spectral index, suggests a more complex scenario.

Compact BCS sources

In this Section, we give short notes on the whole sample of C BCSs. For all resolved targets in our new radio maps, we also present an image. For all these sources, if not specified, the beam size and the noise level of each map are those reported in Tab. 2.
In Tab. 5, we give basic source parameters at 8.4 GHz and 22 GHz for our new VLA images. The reported core positions are obtained from our new VLA observations. The arcsecond core flux density at 5 GHz and total flux density at 408 MHz are from Liuzzo et al. 2009b.

B2 0116+31. The radio galaxy B2 0116+31 (4C 31.04) is classified as a low-redshift CSO (Compact Symmetric Object) (Giovannini et al. 2001, Cotton et al. 1995). VLBA (Very Long Baseline Array) images at 5 GHz show a compact core component with hot spots and lobes on either side (Giroletti et al. 2003). Spectral line VLBI (Very Long Baseline Interferometry) observations reveal the presence of an edge-on circumnuclear H I disk (Perlman et al. 2001). The optical nucleus is also found to have cone-like features well aligned with the radio axis. According to Perlman et al. 2001, optical data suggest a relatively recent merger having occurred 10^8 yrs ago. In our TNG spectrum we detect all the diagnostic emission lines and their ratios clearly place the source among the LEG.

B2 0149+35. B2 0149+35 is identified with NGC 708, a D/cD galaxy associated with the brightest galaxy in the central cooling flow cluster Abell 262 (Braine & Dupraz 1994). B2 0149+35 is a low brightness galaxy whose nuclear regions are crossed by an irregular dust lane and dust patches (Capetti et al. 2000). The interaction between the cooling gas and the radio source is discussed by Blanton et al. 2004. The comparison between the total and core radio power and between radio and X-ray images suggests that at present the radio core is in a low activity phase (Blanton et al. 2004, Liuzzo et al. 2010). Our VLA observations show an unresolved component with a flux density of 5.5 mJy at 8.4 GHz, 4.9 mJy at 22 GHz, and a flat spectrum (α(8.4-22 GHz) ∼ 0.12). The lower flux density in the 5 GHz VLBA maps can be due to a slightly inverted spectrum, to some extension lost in the VLBA images, and/or to variability.
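The quoted two-point spectral index for B2 0149+35 follows directly from the flux densities, using the S ∝ ν^(-α) convention implied by the text (positive α for a steep, falling spectrum):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, with the S ∝ ν^(-α) convention."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# B2 0149+35: 5.5 mJy at 8.4 GHz, 4.9 mJy at 22 GHz
alpha = spectral_index(5.5, 8.4, 4.9, 22.0)
print(round(alpha, 2))  # 0.12, i.e. the flat spectrum quoted in the text
```

The same relation applied between pairs of frequencies gives the lobe and core indices discussed for the other sources below.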
From the diagnostic diagrams and the [O III]-radio plots the source is classified as a LEG.

B2 0222+36. This source shows a halo-core structure at arcsecond resolution. The sub-arcsecond 8 GHz VLA map resolves the structure into a core and two components on either side, while at 22 GHz it shows an S-shaped morphology with a dominant core and two lobes (Giroletti et al. 2005b). In VLBI images at 1.6 GHz, B2 0222+36 is two sided, with jets emerging in opposite directions along the north-south axis. Since there is evidence of a change of the jet direction in the inner region, Giroletti et al. 2005b speculate that the jet orientation is rotating because of instabilities of the AGN. This could explain the small size of the source, because an unstable jet did not allow the growth of a large scale radio galaxy. In this case the old round halo could be due to the diffusion of radio emission during the orbit of the inner structure. From the diagnostic diagrams, this source is classified as a LEG-type.

B2 0258+35. This source was studied with the VLA and MERLIN + EVN (European VLBI Network) by Sanghera et al. 1995, who classified it as a CSS source. VLA data show a double structure with a separation of 1.1′′. EVN + MERLIN images reveal an extended plume-like feature at both ends of the source and a jet-like feature in between. Sub-arcsecond VLA images at 8 and 22 GHz reveal the same structure. The source appears to interact strongly with the ISM (InterStellar Medium), as shown by the large bending of the arcsecond structure of the SE lobe and the presence of a surrounding low brightness extended structure in the VLA images. A large amount of extended HI disk gas is detected in the central region of the source (Emonts et al. 2006a, 2006b). From the diagnostic diagrams, it is a LEG-type source.

B2 0648+27. This object is only slightly extended at the lowest frequencies (Giroletti et al. 2005a).
The connection between the small size of the radio emission and the presence of a major merger in this galaxy about 10^9 years ago, together with the presence of a large amount of H I (Morganti et al. 2003), is remarkable. In the optical spectrum, we observe very strong high excitation emission lines and the Balmer lines in absorption, as expected for a spectrum dominated by a young stellar population. In the diagnostic diagrams, this source occupies the region of HEG galaxies. This is the most powerful source in the optical band in our sample. B2 0708+32B. This source reveals an extended morphology in the N-S direction with two symmetric lobes about 10″ in size (∼10 kpc). In VLBA images at 5 GHz (Liuzzo et al. 2009b), a double structure oriented along P.A. (Position Angle) ∼150° and extended ∼4 mas is observed. Since no spectral index information is available, the identification of the core is not possible. Our 8.4 GHz map (Fig. 6) shows an unresolved nucleus (Tab. 5) and two symmetric lobes, in agreement with the structure present in the 1.4 GHz observations. No jet-like structure is visible. In our 22 GHz VLA observation, only the core is visible (Tab. 5). The angular resolution of the VLA images is too low to resolve the double structure visible in the VLBA images. Using VLA data at 1.4 GHz from Fanti et al. 1986, we estimated the spectral index for the lobes and the central component: we derived α ∼ 1 for the two lobes, while for the central component α_1.4^8.4 = 0.3 and α_8.4^22 = 0.54. The steepening of the high frequency spectrum and the high degree of symmetry of the source suggest that the compact central component could be identified as a Compact Symmetric Object (CSO), and that this small-size radio source is a restarted young source. We also want to emphasize how rare it is to see extended emission in a CSO. Up to now the only known sources with these properties are 0108+388 (Stanghellini et al. 2005) and 0402+379 (Zhang et al. 2003).
We have no optical spectroscopic information on this source, so its classification is not possible. B2 0722+30. This radio source is associated with a disk galaxy. Strong absorption is associated with its disk, and a bulge-like component is also clearly visible. The radio emission originates from two symmetric lobe-like features in the E-W direction, i.e. at an angle of ∼45° to the disk (Capetti et al. 2000). In Fig. 7, we report the optical image from the HST (Hubble Space Telescope) superimposed on a VLA radio image at 5 GHz. At arcsecond resolution, this source shows an FR I radio morphology with a total power of 3.1 × 10^23 W Hz^-1 at 408 MHz and a linear size of ∼13.5 kpc. Our 8.4 GHz VLA observations (Fig. 8) show an unresolved nucleus and a marginal detection of the East lobe, which appears completely resolved. At 22 GHz, only the core is detected. It is unresolved, with a total flux of 6.5 mJy (Tab. 5). The radio core position is slightly offset with respect to the brightest optical region (∼1″); however, we note that the optical core is not well defined because of the large dust absorption. This source is one of the rare cases in the nearby Universe where a disk galaxy hosts FR I-like radio emission. A literature search provides information on only three other cases: 1) the spiral galaxy 0313-192 in Abell 428, exhibiting a giant (∼350 kpc) double-lobed FR I (Ledlow et al. 1998, 2001; Keel et al. 2006); 2) NGC 612, a typical S0 galaxy with a large-scale star-forming H I disc and a powerful FR I/FR II radio source (Véron-Cetty & Véron 2001; Emonts et al. 2008); 3) the BL Lac object PKS 1413+135 (McHardy et al. 1994; Lamer et al. 1999). In the optical, we found neither a spectrum nor the [O III] emission line for this source, so an optical classification is not possible. B2 1037+30. The source is only slightly resolved at 1.4 GHz.
At sub-arcsecond resolution, an edge-brightened structure with complex sub-structures (jets, and lobes with hot spots) is detected at 8 GHz, while at 22 GHz only a point-like component, probably the core, and the resolved NW hot spot are evident. In VLBI images, the core is clearly revealed, with a faint, diffuse emission detected on the shortest baselines (Giroletti et al. 2005b). According to its radio properties, Giroletti et al. 2005b classified it as a young CSO source. In the optical, B2 1037+30 is identified with the brightest galaxy in the cluster Abell 923. The optical line ratios place the source in the LEG region, close to the boundary with HEG sources. The same position is found in the optical-extended radio plot, while in the optical-core radio plot and in the accretion rate plot the source is clearly identified with LEG types. B2 1101+38. Mrk 421 (B2 1101+38) is a well-known BL Lac, widely studied at all frequencies and detected at TeV energies (Punch et al. 1992). The NVSS (NRAO VLA Sky Survey) image shows a 30″ core-dominated source, with emission on either side. The estimated viewing angle is θ ≤ 20°. For more details on the radio structure of this BL Lac, see Giroletti et al. 2006. B2 1217+29. This source is identified with the nearby galaxy NGC 4278. In the radio band it reveals a compact structure at all frequencies between 1.4 and 43 GHz (Condon et al. 1998; Di Matteo et al. 2001; Nagar et al. 2000). Only at 8.4 GHz, at a resolution of ∼200 mas, is it slightly resolved into a two-sided source with an extension to the south (Wilkinson et al. 1998). VLBA data show a two-sided emission on sub-parsec scales in the form of twin jets emerging from a central compact component (Giroletti et al. 2005a). A large amount of extended H I disc is detected in its central region (Emonts et al. 2006a, 2006b).
This source shows a low [O III]/Hβ ratio, settling in the LEG region of the diagnostic diagrams. In particular, it shows a low [O III] luminosity, as confirmed by its position in the lower left side of the [O III]-radio plots. B2 1257+28. This is one of the two dominant members of the Coma cluster (Abell 1656) and is considered the BCG. It is a cD galaxy with a small-size WAT (Wide Angle Tail) structure. Arcsecond scale properties are discussed in Feretti & Giovannini 1987: it has a total flux density at 1.4 GHz of 190 mJy, and the core flux density at 6 cm is 1.1 mJy. The radio emission is completely embedded in the optical galaxy; a gap of radio emission is present between the core and the SW lobe, while a faint jet connecting the core and the NE lobe is detected. Parsec scale properties are discussed in detail in Liuzzo et al. 2010: the source shows a one-sided structure with a total flux density of 10.1 mJy, and the core flux density is 7.27 mJy. Our optical analysis classifies this source as a LEG-type object. 3C 272.1. This source is identified with a giant elliptical galaxy. It shows strong radio emission and a two-sided jet emerging from its compact core. HST images reveal dust lanes in the core of the galaxy, while no significant amount of diffusely distributed cold dust was detected at sub-millimeter wavelengths (Leeuw et al. 2000). At parsec scale, this nearby source shows a clear one-sided structure (Giovannini et al. 2001). This source has a low [O III] luminosity, as confirmed by its position in the lower left side of the [O III]-radio plots. B2 1254+27. This radio galaxy is identified with the BCG (Brightest Cluster Galaxy) of a sub-group merging into the Coma cluster. The optical galaxy, NGC 4839, is classified as a cD galaxy. In VLA observations at 1.4 GHz, it appears as an FR I radio source with two lobes in the N-S direction. At 5 GHz, the arcsecond core flux density is ∼1.5 mJy (Giovannini et al. 2005).
Our VLA maps at 8.4 GHz and at 22 GHz show a marginally resolved structure with an inverted spectral index (see Tab. 5), suggesting the presence of a compact nuclear emission. The LEG classification of this source derived from the diagnostic diagrams is also confirmed by the [O III]-radio plots. B2 1322+36B. On VLA scales, this source shows a twin-jet morphology. On the parsec scale, it has a core-jet structure extending ∼10 mas in the same direction as the main large-scale jet. According to its position in all optical plots, this source is classified as LEG. B2 1346+26. This galaxy is the central galaxy of the cool core cluster A1795, which hosts the bright FR I radio source 4C 26.42. We studied its parsec scale morphology in detail in Liuzzo et al. 2009a. Our multi-frequency and multi-epoch VLBA observations reveal a complex, reflection-symmetric morphology over a scale of a few mas. The source appears two-sided, with a well defined and symmetric Z-structure at ∼5 mas from the core. The kiloparsec-scale morphology is similar to the parsec-scale structure, but reversed in P.A., with symmetric 90° bends at about 2 arcsec from the nuclear region. A strong interaction with the ISM can explain the spectral index distribution and the presence of sub-relativistic jets on the parsec scale. In the optical, even if we detect only upper limits on the [O III] emission line, the position of this source in the diagnostic diagrams reveals a LEG nature. 3C 305. This small FR I radio galaxy shows a plumed double structure with two faint hot spots and symmetric jets. The optical galaxy is peculiar, with continuum emission on the HST scale perpendicular to the radio jet (Giovannini et al. 2005, and references therein). The X-ray emission of this source is extended but is not connected with the radio structure. However, the X-ray emission is cospatial with the optical emission line region dominated by [O III]λ5007.
This could be interpreted as due to the interaction between the radio jet and the ISM (Massaro et al. 2009). From our optical analysis, according to its position in Fig. 3, this source is classified as HEG. B2 1557+26. The host galaxy IC 4587 is a smooth and regular elliptical galaxy (Capetti et al. 2000). In our 8.4 GHz VLA observations (Fig. 9), the radio source is resolved into a core and a NE jet aligned with the emission observed in the 5 GHz VLBA images by Giovannini et al. 2005. The jet extends about 0.6 arcsec from the core. At 22 GHz, the jet is not detected and the core has a total flux of 12.3 mJy (Tab. 5). According to its position in all the optical plots, this source is classified as LEG. B2 1855+37. This source shows a distorted double structure on the kiloparsec scale, with no detection on the VLBA scale, which suggests its identification with a small symmetric source with a faint core (Giovannini et al. 2005, and references therein). The extended structure of this source seems to be confined by external gas pressure. Due to upper limits on the optical emission lines, an optical classification is not possible. B2 0331+39. Previous high resolution VLA observations at 1.4 GHz (A and B configuration) of this source (4C 39.12) revealed a resolved core plus a faint halo ∼1 arcmin in diameter. Data at 5 GHz also show that the core is resolved and that the inner source region is dominated by a bright one-sided structure extended ∼1″, surrounded by a symmetric low brightness halo. This structure is confirmed by our 8.4 GHz VLA image (Fig. 10): the radio source is characterized by a nuclear emission plus a one-sided jet in the South direction and a halo around it. We note that the bright jet extension is not limited by sensitivity: it appears that the bright jet really ends after ∼1″. Our 22 GHz VLA map (Fig. 10) shows an unresolved core and the Southern jet.
However, because of the high angular resolution and low surface brightness at this high frequency, the extended halo is not visible (see Tab. 5). From Parma et al. 1986, we derived the spectral index of the halo between 1.4 GHz and 8.4 GHz: α_1.4^8.4 ∼ 0.4. We also produced a spectral index image of the arcsecond jet, comparing the 8.4 and 22 GHz images obtained using the same uv-range and angular resolution (Fig. 11). The core region has a flat spectral index, with a clear steepening up to ∼0.9 along the jet, which appears steeper than the surrounding halo. The one-sided jet structure is in good agreement with previous VLBA observations of this source presented in Giovannini et al. 2001, where the parsec scale jet is found at the same P.A. as the arcsecond jet presented here. At lower resolution this source is highly peculiar. From the NVSS image (see Fig. 10), we see a bright one-sided structure with a size of ∼3 arcmin (70 kpc) oriented at P.A. = 220°, i.e. very different from the jet and inner halo P.A. (160°). Moreover, a fainter symmetric diffuse emission is present on a larger angular scale (more than 6 arcmin, 150 kpc) at about the same P.A. as the inner one-sided jet and halo. The total NVSS flux density is 1.1 Jy. The total spectral index is 0.5-0.6 from 74 MHz up to 4.8 GHz (see NED archive data), with clear evidence that the new restarted component is dominant at all frequencies. This complex morphology suggests restarting activity with a change of the P.A. in the different epochs. A more detailed study is necessary to understand this peculiar source. We found neither the optical spectrum nor the [O III] emission line for this source: an optical classification is not possible. B2 0844+31. This is a symmetric narrow-line FR II radio galaxy. At parsec resolution, a bright core and two-sided jets are visible.
The image from the FIRST survey shows two extended FR I lobes at a larger distance from the core, beyond the FR II-type hot spot, indicating a prior phase of radio activity. For this reason, this source could be classified as a restarted source, i.e., a source in which the extended FR I structure is related to previous activity, whereas the inner FR II structure originates from more recent core activity (Giovannini et al. 2005). According to its position in all the optical plots, this source is classified as LEG. B2 1003+35. It is the largest known FR II radio galaxy, with a projected linear size of more than 6 Mpc. It shows a complex structure at arcsecond resolution, with evidence of relativistic jets oriented in the same direction as the large-scale structure with some oscillations (Giovannini et al. 2005). Its peculiar radio structure has been interpreted as evidence of restarted activity (O'Dea et al. 2001). According to its position in all the optical plots, this source is classified as LEG type, but close to the HEG boundaries. B2 1144+35. This is a large scale (∼0.9 Mpc) FR I radio source, core dominated with a short and bright two-sided jet. The bright arcsecond scale core is resolved at milliarcsecond resolution into a nuclear source, a main jet with an apparent superluminal velocity, and a faint counter-jet. Evidence of a dynamic interaction with the surrounding medium is present. The radio morphology of this source shows clear discontinuities at different linear scales, suggesting a continuous activity but with high and low level periods (Giovannini et al. 2007). From the optical point of view, this galaxy falls among HEG sources in the diagnostic diagrams, showing a very strong [O III]λλ4959,5007 Å doublet. This classification is confirmed by the optical line luminosity vs. total and core radio power plots, as well as by the accretion rate plot. B2 1512+30.
The host galaxy does not show any outstanding morphological features, except for a very faint elongated dust absorption at the center (Capetti et al. 2000). In the VLA image at 1.4 GHz with 5 arcsec resolution, it appears as a double source with two lobes in the N-S direction (Fanti et al. 1987), in agreement with the FIRST (Faint Images of the Radio Sky at Twenty-centimeters) image. The angular size is ∼22 arcsec, corresponding to ∼38 kpc. In both our 8.4 GHz and 22 GHz images, the source is undetected above 0.25 mJy/beam and 0.30 mJy/beam in X and K bands, respectively. Using the total flux at 408 MHz and at 1.4 GHz, we derived a spectral index α_408MHz^1.4GHz ∼ 2. The non-detection of a core emission at high frequency, together with the steep low frequency spectral index, suggests that this object is a dying radio source with a radio quiet core. From our optical study, according to its position in the diagnostic diagrams (Fig. 3), this source is classified as LEG. B2 1626+39. This source is identified with the central galaxy 3C 338 in the cool core cluster A2199. It is a multiple-nuclei cD galaxy with the presence of dust lanes (Jensen et al. 2001). On kiloparsec scales it has two symmetric extended radio lobes, characterized by a steep spectrum and misaligned with the central emission. The two radio lobes are connected by a bright filamentary structure. Both the steep radio spectrum and the strong filamentary emission may be caused by interactions with the dense intracluster medium (Gentile et al. 2007). 3C 338 was the first radio source in which a two-sided jet was observed on parsec scales (Feretti et al. 1993). DISCUSSION. We show in Fig. 12 a radio power vs linear size diagram for the 95 sources of the whole BCS sample. Looking at the radio properties of C BCSs (red squares), some compact sources are in agreement with a general correlation between the linear size and the radio power, and they show a radio power at 408 MHz lower than 10^24 W/Hz.
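The radio power vs. linear size diagram rests on two standard conversions: flux density to monochromatic power, and angular size to projected linear size. A minimal sketch is given below; the flux and distance values are illustrative only (not taken from the paper), the K-correction is neglected, and at the low redshifts of the BCS the luminosity and angular-diameter distances are treated as interchangeable:

```python
import math

JY = 1.0e-26                        # 1 Jy in W m^-2 Hz^-1
MPC_M = 3.086e22                    # metres per Mpc
ARCSEC_RAD = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def radio_power(flux_jy, d_mpc):
    """Monochromatic radio power P = 4*pi*D^2*S in W/Hz (K-correction neglected)."""
    d_m = d_mpc * MPC_M
    return 4.0 * math.pi * d_m**2 * flux_jy * JY

def linear_size(theta_arcsec, d_mpc):
    """Projected linear size in kpc from angular size and distance (small-angle approximation)."""
    return theta_arcsec * ARCSEC_RAD * d_mpc * 1000.0

# Illustrative values: S = 0.1 Jy at 408 MHz, D = 100 Mpc, angular size 20 arcsec
print(radio_power(0.1, 100.0))   # ~1.2e23 W/Hz, below the 10^24 W/Hz threshold
print(linear_size(20.0, 100.0))  # ~9.7 kpc, i.e. "compact" by the < 20 kpc criterion
```

With these two functions, each BCS source can be placed on the power-size plane from its catalogued 408 MHz flux density, angular size, and distance.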
However, about half of the compact sources show a radio power larger than 10^24 W/Hz, i.e., they are in the same range as classical extended radio galaxies. To better investigate the nature and properties of C BCSs, we compared optical and radio data. From Fig. 3, we note first that the majority of C BCS sources are LEGs. Only a couple of exceptions are present. These peculiar compact sources are B2 0648+27, identified with a very H I-rich galaxy (Emonts et al. 2006a, 2006b), and 3C 305, identified with a peculiar gas-rich galaxy discussed in detail by Massaro et al. 2009. We note that also one FR I BCS is among the HEG galaxies. This source, studied in detail by Giovannini et al. 2005, shows evidence of a multi-phase radio activity. It has recently restarted its radio activity, and on the parsec scale it shows structures moving with an apparent velocity larger than c. We conclude that for all three cases, the optical type could be explained by the peculiar activity of the source inner region (restarted activity or strong confinement and interaction with the surrounding medium). From Fig. 4, we point out that C BCS sources show a correlation between total radio power and [O III] luminosity in agreement with the Core Radio galaxies of Baldi & Capetti 2010, and not with FR I radio galaxies. The three HEG sources just discussed also follow the same correlation, while the few other BCS FR I or FR II sources follow the same correlation as 3C radio galaxies. These considerations seem to suggest that among LEG galaxies two different populations are present according to their radio and [O III] luminosity: FR I radio galaxies, and Core radio galaxies + C BCSs. In particular, we note that compact sources have an overluminous [O III] emission at a given total radio power with respect to HEG and LEG 3CR sources.
Moreover, if we consider the core radio power at 5 GHz, compact sources show a different trend with respect to LEG radio galaxies: they are overluminous in the optical at low radio power (CoRGs), and have properties similar to 3CR FR Is at higher radio power (C BCSs). Finally, the data suggest that CSS sources could follow the same trend as compact sources, becoming optically underluminous at very high radio power. Larger statistics are necessary to clarify this point. To better understand this correlation, we take advantage of the many data available for C BCS sources to investigate their origin. Among C BCSs, we have quite different sources: -A source can be compact because of projection effects. In our sample only one source is clearly oriented at a small angle with respect to the line of sight: B2 1101+38 (Mkn 421, Tab. 6). This source is a well known BL Lac-type object (Giroletti et al. 2006) and its size is affected by strong projection effects. We note that, as discussed in Liuzzo et al. 2009b, the percentage of sources oriented at a small angle with respect to the line of sight in the BCS is in agreement with unified model predictions. -A source can be compact because of young or restarted activity. In our sample many sources show evidence of recent nuclear activity, as CSO and CSS sources (e.g. B2 0258+35, Tab. 6), or restarted/recurrent activity (e.g. B2 0149+35, Tab. 6). These sources are expected to have a strong interaction with the dense inner ISM (see e.g. B2 1346+26, the BCG in A1795, Tab. 6). A peculiar source is B2 1855+37, which is characterized by low or no jet emission: it could be in the final stage of this scenario, and we do not know whether it will die or will restart a new radio activity. If we compare these types of compact sources with the extended BCSs showing evidence of restarted (e.g. B2 0331+39, B2 0844+31, B2 1144+35, Tab. 6) or dying activity (e.g. B2 1512+30, Tab. 6), we do not find any differences in their [O III] luminosity and 408 MHz total power.
We note also that of the 18 C BCSs, 6 sources are in clusters/groups and are the BCGs. Literature studies (O'Dea et al. 2001) have observed that compact, low power and steep spectrum radio sources in BCGs are not rare. The radio properties of these objects could be explained if nuclear fueling is related to the AGN activity cycle, so that we see galaxies in a period of relative AGN quiescence, or just restarted (young). The presence of restarted and cyclic radio activity in such clusters is required by the cooling scenario (McNamara et al. 2005, and references therein). We claim that the fraction of C BCS sources that are BCGs is considerable, corresponding, as discussed above, to 1/3 (6/18) of the C BCSs. If we consider the whole BCS sample, the fraction of BCGs is 10/95. Among the 10 BCGs, the majority (6/10) are compact BCS sources. This seems to suggest that the strong interaction with the dense ISM of the cluster environment increases the probability of a source being a compact radio source, due to frustration effects and/or jet instability. However, the optical properties of extended BCGs (see Tab. 6 and Sect. 5) and compact BCGs are similar. -Sources in H I-rich galaxies. The results of Emonts et al. 2006a, 2006b revealed that FR I-type sources lie in a particular region of the H I disk mass/radio power diagram. However, there are sources that differ from the FR I type in having a large value of H I mass. Studying these latter objects in detail, they found that they are all compact sources, even if not all compact sources have large H I masses. Some of our C BCS sources (B2 0648+27, B2 0258+35 and NGC 4278, Tab. 6) show large amounts of extended H I disk. For these H I-rich compact objects, Emonts et al. 2006a and 2006b suggest that they do not grow into extended sources because they are frustrated by the ISM in the central region of the galaxy, or because the fuelling stops before the sources can expand.
If these H I-rich radio galaxies formed through a major merger of gas-rich galaxies, part of the gas is expelled in large-scale tidal structures, while another part is transported into the central kpc-scale region (e.g. Barnes 2002). The latter could be responsible for frustrating the radio jets if they are not too powerful. Alternatively, while the geometry and the conditions of the encounters appear to be able to form the observed large-scale H I structures, they might not be efficient in channeling substantial amounts of gas to the very inner pc-scale region. This might prevent stable fuelling of the AGN, and hence large-scale radio structures do not develop. This hypothesis seems reasonable looking through our C BCSs: there is evidence that NGC 4278 and B2 0648+27 cannot grow because they are frustrated by the local ISM (Giroletti et al. 2005b), while B2 0258+35 displays variable levels of activity, suggestive of fuelling too inefficient to let it expand beyond the kpc scale (Giroletti et al. 2005b). (In Fig. 12, the red squares indicate C BCS sources, while the blue ones represent the remaining extended objects; see also Giovannini et al. 2005.) We also found a slight correlation between the amount of H I mass and the central radio morphology: objects with high H I mass (e.g. B2 0258+35) show more diffuse radio emission in the central region. Emonts et al. 2006a, 2006b argue that H I-rich low power compact sources have a different formation history from FR I objects, being likely the products of major mergers, as the detected large amounts of H I demonstrate. In this scenario, it is interesting to note that in B2 0648+27 and B2 1217+29 the presence of a major merger is clearly confirmed (Emonts et al. 2006a, 2006b). CONCLUSION. Radio galaxies are classified as FR I, FR II and Compact sources according to their powers and morphologies. Compact objects show emission properties which are not yet well understood.
To investigate this peculiar class of sources, we selected from the Bologna Complete Sample (BCS; Giovannini et al. 2001, 2005; Liuzzo et al. 2009b) all objects with a linear size smaller than 20 kpc, forming the C BCS. Part of these targets were previously analyzed by us in the radio band (Giroletti et al. 2005b). Here, we complete the radio analysis of the C BCS sample, presenting new high resolution VLA observations for the remaining sources. Moreover, we discuss for the first time all the available optical data for C BCSs. From the comparison between C BCSs and other source samples/extended radio galaxies, we derive that: -The diagnostic diagrams reveal that, with a few exceptions, C BCSs show optical LEG properties, as do 3CR FR I radio galaxies and the CoRGs of Baldi & Capetti 2010. -The optical [O III]-radio correlations (total and nuclear radio power) suggest a common linear correlation for CoRGs and C BCS sources, different from the known linear correlation of HEG and LEG 3CR radio galaxies, suggesting that C BCSs could be the powerful tail of the CoRGs. A possible continuity with powerful CSS sources is not yet clear. -From our sub-arcsec radio data, the compactness of C BCSs is mostly due to a low source age and/or restarted activity in a gas rich environment (e.g. BCG galaxies and H I-rich galaxies). Projection effects hold in only a very few cases, in agreement with unified model predictions. (In Tab. 6, [O III] is the logarithm of the [O III]λ5007 Å line luminosity in units of 10^-15 erg s^-1; Col. 7 reports the classification based on our radio and optical analysis; Col. 8 gives some notes on source properties.)
Intracellular and surface distribution of a membrane protein (CD8) derived from a single nucleus in multinucleated myotubes. We have investigated the contribution of an individual nucleus to intracellular and surface membranes in multinucleated muscle fibers. Using a retroviral vector, we introduced the gene encoding the human T-lymphocyte antigen CD8 into C2 mouse muscle cells to form a stable line expressing the human protein on its surface. The intracellular and surface distributions of the protein were then investigated by immunocytochemistry in hybrid myotubes containing a single nucleus expressing CD8. We show that the intracellular distribution of CD8 is limited to a local area surrounding the nucleus encoding it and several neighboring nuclei. On the cell surface, however, the protein is distributed over the entire myotube. Widespread distribution of a surface membrane protein in multinucleated myotubes can thus result from localized synthesis and processing. Many membrane proteins are not distributed uniformly on the cell surface, but are restricted to structurally and functionally distinct domains. In adult muscle fibers, for example, >90% of the acetylcholine receptor (AChR) is found in the postsynaptic membrane of the neuromuscular junction, which constitutes <0.1% of the total muscle cell surface (Salpeter, 1987). Recent experiments suggest that in muscle fibers, which contain hundreds of nuclei, differential gene expression among nuclei may contribute to the restricted distribution of the AChR. Thus both Northern blot analysis and in situ hybridization demonstrate that AChR subunit mRNA is concentrated near endplates of adult rat and chicken muscle fibers (Merlie and Sanes, 1985; Fontaine et al., 1988; Fontaine and Changeux, 1989).
One implication of these experiments is that surface membrane proteins are synthesized and processed near the nuclei that are the source of their mRNA, an idea that is supported by several recent observations (Pavlath et al., 1989; Ralston and Hall, 1989; Rotundo, 1989). A restricted surface distribution could then result: (a) if protein were inserted into the membrane locally; and (b) if movement of the protein were subsequently constrained, either by intrinsic barriers to diffusion in the membrane (Pavlath et al., 1989), or by attachment to the cytoskeleton or extracellular matrix. To compare the intracellular and surface distributions of a well-defined membrane protein produced from a single nucleus in muscle cells, we introduced the gene encoding the human T-lymphocyte antigen, CD8, into the C2 muscle cell line. (Abbreviations used in this paper: AChR, acetylcholine receptor; GM, growth medium.) CD8 is a 34-kD protein containing a large extracellular domain with immunoglobulin homology, a single hydrophobic transmembrane segment, and a short, highly charged cytoplasmic sequence (Littman et al., 1985; Littman, 1987). After infecting C2 muscle cells with a retroviral vector encoding CD8, we selected stable lines expressing the protein and characterized the synthesis, processing, and transport to the surface of CD8 in these lines. Hybrid myotubes were then formed with the infected and parental cells using cell ratios such that only one or a few nuclei per myotube expressed CD8. Labeling of these nuclei with [3H]thymidine allowed their identification by autoradiography, so that the intracellular and surface distributions of the protein derived from a single identified nucleus could be determined by immunocytochemistry. Our results show that within hybrid C2 myotubes, CD8 is restricted to a region near the nucleus that encodes it, but that the protein is widely distributed on the surface.
Localized synthesis and insertion of a membrane protein is thus not sufficient to produce a restricted surface distribution in myotubes. Antibodies. Hybridoma cells producing the mouse monoclonal antibody OKT8 (Hoffman et al., 1980) were obtained from the American Type Culture Collection (Rockville, MD), and antibodies were obtained from ascites fluid in CAF1 mice by standard procedures. Infection of C2 Cells with Retroviral Vector. Supernatant from cultures of PA12 cells (an amphotropic retrovirus packaging cell line) that had been transfected with the plasmid pMV7-CD8 was a generous gift of Dan Littman (Department of Microbiology and Immunology, University of California, San Francisco). pMV7-CD8 (Maddon et al., 1986) is derived from cloning the human CD8 gene (Littman et al., 1985) into the retroviral vector pMV7 (Kirschmeier et al., 1988). It also contains the gene for neomycin phosphotransferase, which makes mammalian cells resistant to the antibiotic G418. For infection, C2 myoblasts in growth medium (GM) were plated on 10-cm culture dishes at a density of 3 × 10^5 cells/dish. 24 h later, the frozen viral supernatant was thawed, cleared by spinning for 10 min at 1,500 g, diluted in GM containing 4 µg/ml polybrene (Aldrich Chemical Co., Milwaukee, WI) and added to the cells. After 6 h, the cells were rinsed twice with PBS and fed with GM. 2 d later the cells were split and replated at a density of 3 × 10^5 cells/10-cm dish in GM supplemented with 600 µg/ml active G418 (Gibco Laboratories, Grand Island, NY). Thereafter, fresh medium was added every second day. After 6 d, the concentration of G418 was reduced to 400 µg/ml. Colonies became visible about a week after the infection and were assayed a few days later for surface expression of CD8 by a red blood cell rosetting assay (see below).
Since supernatants from the retroviral packaging cell line PA12 may contain helper virus (Miller and Buttimore, 1986), we tested selected cell lines and found helper virus activity in four out of five. The cell line that was free of helper activity was retained for our experiments.

Red Blood Cell Rosetting

The dishes with G418-resistant colonies were washed with PBS, covered with 5 ml of a 1:1,000 dilution of OKT8 in PBS-5% FCS and left at room temperature for 1 h. They were washed with PBS. They were then incubated with a 0.4% suspension in PBS-5% FCS of red blood cells to which goat anti-mouse IgG had been coupled by the CrCl3 method (Galfré and Milstein, 1981). After 30 to 45 min, the plates were shaken by a mild lateral blow, rinsed with PBS, and examined under a tissue culture microscope.

Metabolic Labeling with [35S]Amino Acids

C2 cultures grown on 6-cm dishes were incubated for 15 min at 37°C in methionine- and cysteine-free DME H-16 (CMF) and subsequently labeled with 0.2 mCi/ml [35S]methionine (Tran-35S-label, ICN) in 1.2 ml of CMF. The cells were then washed once with complete medium supplemented with 2 mg/ml each of methionine and cysteine and incubated further in the same medium. For experiments without chase, cells were washed three times with cold PBS with 2 mg/ml cysteine and methionine. All cells were extracted with 500 µl of 10 mM Tris (pH 7.4), 66 mM EDTA, 1% NP-40, 0.4% DOC, and 1 mM PMSF (NDET buffer). For neuraminidase treatment, the cells were labeled for 15 min, washed as described above, and incubated for 1 h at 37°C. The pH of the medium was lowered to 6.0 by addition of morpholino-ethanesulfonic acid buffer (20 mM final concentration). One unit (2 mg) of type V neuraminidase (Sigma Chemical Co., St. Louis, MO) was added and the cells returned to 37°C for 1 h, then washed and extracted in NDET.
Immunoprecipitations

Cell extracts were centrifuged for 5 min in a centrifuge (Eppendorf made by Brinkmann Instruments, Inc., Westbury, NY) at maximum speed. The supernatants were then incubated with OKT8 and SDS (to a final concentration of 0.2%) overnight at 4°C on a rotatory shaker. After addition of 40 µl of a 10% suspension of Staphylococcus aureus (Pansorbin; Calbiochem-Behring Corp., San Diego, CA) and incubation on a shaker at room temperature for 40 min, the Pansorbin suspension was layered on 900 µl of 35% sucrose in half-strength NDET buffer with SDS (NDETS) in a tube (Eppendorf made by Brinkmann Instruments, Inc.). After centrifugation, the pellet was washed with NDETS followed by distilled water. The pellet was then resuspended in 60 µl sample buffer, heated for 3 min in a boiling water bath, and centrifuged. The supernatant was analyzed by electrophoresis on a 9% SDS polyacrylamide gel according to Laemmli (1970) using molecular mass standards from Bio-Rad Laboratories (Cambridge, MA). After electrophoresis, the gel was stained with Coomassie blue, enhanced for 15 min at room temperature with 1 M sodium salicylate, dried, and exposed to preflashed film (X-Omat; Eastman Kodak Co., Rochester, NY).

Formation of Hybrid Myotubes

Parental C2 cells and C2-CD8 cells were plated on separate 10-cm dishes at a density of 7.5 × 10^4 cells/dish. On the next day, fresh growth medium was added and the C2-CD8 cells were supplemented with [3H]thymidine (6.7 Ci/mmol; ICN Radiochemicals, Irvine, CA) at a final concentration of 0.05 µCi/ml. 48 h later, the cells were trypsinized and plated together at a density of 60,000 cells/cm^2 in the wells of duplicate 4-well slides (Lab-Tek, Nunc Inc., Naperville, IL), pretreated with a 1:5 dilution of Vitrogen (Collagen Corp., Palo Alto, CA) in water. After 24 h, fusion medium was added, followed by fusion medium supplemented with cytosine arabinoside 24 h later to kill residual unfused myoblasts.
1-2 d later the cells were processed for immunocytochemistry.

Immunocytochemistry

Surface Staining of Live Cells. Cells were grown on small glass coverslips in 24-well tissue culture dishes or on 4-well slides (Lab-Tek). OKT8 was added to a final dilution of 1:800. After 1 h at 37°C, the cells were washed with tissue culture medium, and incubated with fresh medium containing fluorescein (FITC)-conjugated goat anti-mouse (Cappel Laboratories, Malvern, PA) at 37°C for 1 h. The cells were then washed with PBS, fixed for 15 min with 2% paraformaldehyde, rinsed in PBS, stained with 2.5 µg/ml bis-benzimide (Hoechst 33258, Sigma Chemical Co.) in distilled water for a few minutes and mounted in glycerol supplemented with paraphenylenediamine (Platt and Michael, 1983).

Staining of Fixed, Permeabilized Cells. Cells were stained as described in Ralston and Hall (1989).

Autoradiography. After the staining was completed, the slides were dried on a heating plate at low setting, dipped in autoradiographic emulsion (K5; Ilford, Mobberley, UK), and exposed for 12-14 d at 4°C. They were then developed (D-19; Eastman Kodak Co.), fixed, and mounted in glycerol with paraphenylenediamine.

Results

Isolation of C2 Cells Expressing CD8

C2 myoblasts were infected with the retroviral vector pMV7-CD8 (Maddon et al., 1986), which contains the gene for the human T-cell lymphocyte antigen CD8 as well as the neomycin resistance gene. >95% of the G418-resistant colonies obtained after infection expressed CD8 on their surface as determined by a red blood cell rosetting assay (see Materials and Methods). 25 of the positive colonies were expanded and analyzed by immunofluorescence using monoclonal antibody OKT8 directed against CD8 (Hoffman et al., 1980). The five lines with highest expression of surface CD8 were tested further. Fluorescence microscopy and analysis by the fluorescence-activated cell sorter showed that virtually all cells in each of these lines express CD8 on their surface.
Each of the five selected lines was compared with the parental line with respect to growth rate, time course of myoblast fusion, and expression and spontaneous clustering of the AChR. No differences were observed beyond those normally seen between subclones of the parental C2 line. One of the lines, termed C2-CD8, was chosen for further experiments (see Materials and Methods).

Assembly and Intracellular Transport of CD8 in C2-CD8 Cells

We then characterized the CD8 protein synthesized in C2 cells and compared its properties to those reported for the protein made in lymphocytes. We also characterized the intracellular processing of CD8 and its transport to the surface, as such studies have not been previously described. Myoblast and myotube cultures of C2-CD8 were incubated with 35S-labeled amino acids, and extracts of the cells were immunoprecipitated with OKT8. In both cases, a single band of apparent molecular mass 33 kD was observed (Fig. 1), close to the value of 34 kD obtained with CD8 immunoprecipitated from human peripheral blood lymphocytes (Snow and Terhorst, 1983). CD8 extracted from C2-CD8 cells was cleaved by neuraminidase but not by endoglycosidase F (data not shown), suggesting that in C2 cells, as in lymphocytes, CD8 has O-linked, but not N-linked, carbohydrates (Snow et al., 1984, 1985; Littman, 1987). To investigate the structure of CD8 in the plasma membrane of C2-CD8 cells, we radioiodinated the surface proteins of C2-CD8 myoblast and myotube cultures, immunoprecipitated membrane protein extracts, and analyzed the precipitates on reducing and nonreducing gels (Fig. 2). Although a single band of 33 kD was observed in reducing gels, this band was not present in nonreducing gels. Rather, a band of 66 kD was seen, along with a large amount of high molecular mass material that did not penetrate the gel.
CD8 expressed in C2 cells thus assembles into dimers and higher molecular mass complexes, as it does in human lymphocytes (Snow and Terhorst, 1983). Processing of CD8 in C2-CD8 cells was examined in pulse-chase labeling experiments with [35S]amino acids. After a 5-min pulse, newly synthesized CD8 migrated as a 24-kD band (Fig. 3), in good agreement with the molecular weight of 23,550 predicted by cDNA sequencing (Littman et al., 1985). At 10 min, a 27-kD band appeared that was subsequently rapidly chased into the mature 33-kD protein. By 15 min, all of the newly synthesized protein had been processed to the mature form. The time course of transport to the surface was investigated using the susceptibility of CD8 on the cell surface to cleavage by trypsin (Snow et al., 1985). Cells in myoblast and myotube cultures were trypsinized at various times after a 5-min pulse-label with [35S]amino acids, and the immunoprecipitated CD8 compared with immunoprecipitates from nontrypsinized cells taken at the same time (Fig. 4). Newly synthesized CD8 first became susceptible at 30 min after the pulse, when trypsin caused a decrease in the intensity of the 33-kD band with the concomitant appearance of a lower molecular mass band. Not all of the CD8 was cleaved, even after 2 h of chase, indicating that some of the protein does not reach the surface. From 2 to 6 h, the level of surface CD8 remained constant (data not shown).

(Figure 2. Analysis of CD8 expressed on the surface of C2-CD8 cells. Cell surface proteins of control C2 myoblasts (lanes a and c) and of C2-CD8 myoblasts (lanes b and d) were radioiodinated (see Materials and Methods), extracted, and immunoprecipitated with OKT8. The precipitates were divided into two aliquots. One was resuspended in sample buffer containing 5% DTT as reducing agent (lanes a and b), and the other in sample buffer without reducing agent (lanes c and d). Samples were run on a 9% SDS-polyacrylamide gel.)
Localization of CD8 in C2-CD8 Cells

The surface and intracellular distribution of CD8 in C2-CD8 was examined by indirect immunofluorescence (Fig. 5). When intact, unfixed cultures (Fig. 5, a and b) were stained to visualize the surface distribution, bright patches of CD8 were seen. These were uniformly distributed over the entire surface of both myoblasts (Fig. 5 a) and myotubes (Fig. 5 b). Capping (formation of a single aggregate) of the antigen was occasionally observed shortly after plating myoblasts at low density, but was otherwise not observed. When cells were fixed and permeabilized to reveal the intracellular distribution of CD8 (Fig. 5, c and d), the dominant pattern of staining was similar to that seen in muscle cells with Golgi markers (Tassin et al., 1985; Miller et al., 1988; Gu et al., 1989). Thus, in myoblasts (Fig. 5 c), a coarsely granular staining was seen at one pole of the nucleus, and in myotubes (Fig. 5 d) a discontinuous, but nonpolar, perinuclear staining was observed, coupled with a coarsely granular pattern between nuclei. In addition to the Golgi pattern, a fine-grained staining was observed that appeared in linear arrays aligned with the myotube axis (data not shown). To determine if CD8 seen by intracellular staining represents precursor to cell surface CD8, we treated cultures of C2-CD8 myotubes for 2 h with cycloheximide (0.4 mM) before fixation and subsequent staining.

(Figure 3 legend: Cultures of C2-CD8 myoblasts were labeled with a 5-min pulse of Tran-35S-label, chased for the time indicated, extracted, immunoprecipitated with OKT8, and analyzed on a 9% SDS-polyacrylamide gel.)

(Figure 4 legend: Cultures of C2-CD8 myoblasts were labeled with a 5-min pulse of Tran-35S-label and chased for various times. For each time point, two dishes were extracted, one of them after treatment with 100 µg/ml trypsin for 30 min at 4°C. The extracts were immunoprecipitated with OKT8 and analyzed by SDS-PAGE on a 9% gel.)
Such treatment essentially abolished the Golgi-like staining, but did not affect the linear arrays of fine-grained staining (not shown). Thus, the Golgi staining seen in untreated cells presumably corresponds to CD8 destined for the surface. Similar results were obtained with myoblast cultures.

Localization of CD8 in Hybrid Myotubes

Hybrid myotubes were formed using mixtures of myoblasts from the parental C2 line and from C2-CD8. The C2-CD8 cells were incubated with [3H]thymidine before plating with C2 myoblasts to label their nuclei. We have shown previously that >98% of nuclei are labeled under the conditions used. Autoradiography of hybrid myotubes formed using various ratios of C2 to C2-CD8 myoblasts showed that myotubes containing a single C2-CD8 nucleus could be obtained when ratios of 20:1 or 50:1 were used. When such hybrid myotube cultures were stained, autoradiographed, and examined for immunofluorescence, CD8 was seen to be distributed over the entire surface of the myotubes, without respect to the position of the nucleus encoding CD8 (the source nucleus) (Fig. 6). The diffuse localization of CD8 was independent of myotube size and of C2:C2-CD8 ratio over the range 1:1-50:1. Occasionally, local aggregates of CD8 were seen. In cultures that were also stained with rhodamine-conjugated α-bungarotoxin, the aggregates of CD8 were invariably found to correspond to clusters of AChR (Fig. 7). When such aggregates were observed on the surface of hybrid myotubes, they were not preferentially localized in close proximity to the source nucleus.

(Figure 5. Surface and intracellular localization of CD8 in C2-CD8 cells. OKT8, followed by FITC-conjugated goat anti-mouse (GaM), was used to stain cultures of C2-CD8 myoblasts (a and c) or myotubes (b and d). Either intact cells (a and b) or cells that had been fixed and permeabilized (c and d) were stained. Bar, 20 µm.)
To determine the intracellular distribution of CD8, cultures were permeabilized before staining and subsequent autoradiography. In this case a Golgi-like pattern of immunofluorescence was seen near the source nucleus (Fig. 8). In myotubes containing a single source nucleus, immunofluorescence was not confined to the region around this nucleus, but generally extended over a region containing several nuclei. In some cases these nuclei were in close proximity to the source nucleus (Fig. 8 b), but in other cases they were one or several nuclear diameters away (Fig. 8, a and b). To express these results quantitatively, we measured the range of CD8 staining, both within and at the surface of hybrid myotubes containing a single [3H]-positive nucleus. For surface staining, we measured 16 myotubes, whose length averaged 420 ± 119 (SD) µm. The average extent of CD8 staining in these myotubes was 416 ± 118 µm. Thus, 99 ± 3% of the myotube length (~42 nuclear diameters) was covered with CD8. By contrast, the extent of intracellular staining, measured on 11 myotubes, ranging from 217 to 605 µm

(Figure 6. Surface localization of CD8 in hybrid myotubes. Hybrid myotubes were formed by plating together C2 myoblasts and [3H]thymidine-labeled C2-CD8 myoblasts in a 20:1 ratio. Intact myotubes were incubated with OKT8 followed by FITC-GaM, fixed, stained with Hoechst 33258, and processed for autoradiography. Two examples of hybrid myotubes are shown. Each one contains a single [3H]-positive nucleus (large arrow), which is clearly identified by the autoradiographic grains in phase optics (a and c). The other nuclei in the same myotube were identified by observing the field with the filter appropriate for Hoechst (not shown). We have indicated the position of these nuclei on the figure by small arrows. The same field, examined in FITC optics (b and d), shows the surface distribution of CD8. Bar, 20 µm.)
in length (average 365 ± 131 µm), was 68 ± 31 µm, representing 20 ± 13% of the total length. In hybrid myotubes, CD8 encoded by a single nucleus thus occupies a restricted intracellular region, but after insertion into the membrane covers virtually the entire cell surface.

Discussion

The major aim of this work was to examine CD8 distribution in hybrid myotubes containing a single nucleus expressing the foreign gene. Our experiments yielded two results: (a) that CD8 has a restricted intracellular distribution that encompasses the area around the source nucleus as well as several neighboring nuclei; and (b) that CD8 has an unrestricted distribution on the cell surface. From this we conclude that membrane proteins that are synthesized and processed locally can occupy the entire muscle cell surface after they are inserted into the surface membrane. The conclusion that CD8 is synthesized and processed locally is based on the pattern of immunocytochemical staining around the source nucleus in hybrid myotubes. The major component of this staining appears to be the Golgi apparatus, although there is also a fine-grained component that may represent the endoplasmic reticulum or lysosomes. Metabolic labeling experiments demonstrated that CD8 undergoes extensive intracellular processing after synthesis (Fig. 3), part of which presumably occurs in the Golgi apparatus. The protein synthesized by muscle cells appears to be O-glycosylated, as in lymphocytes, and O-glycosylation has been thought to take place in the Golgi (Hanover et al., 1982; Abeijon and Hirschberg, 1987; see, however, Pathak et al., 1988). After treatment with cycloheximide, staining in the Golgi apparatus disappears, indicating that the protein there is part of a transient precursor population.

(Fragment of a figure legend: "... Fig. 6) is indicated by a large arrow, whereas the other nuclei in the field (identified by Hoechst staining) are indicated by small arrows.")
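The coverage percentages quoted above are simple ratios of staining extent to myotube length. A minimal sketch of the arithmetic (the function name is ours; it uses the reported group means, so the intracellular value comes out slightly below the quoted 20 ± 13%, which was averaged over per-myotube ratios):

```python
def percent_coverage(stain_extent_um, myotube_length_um):
    """Fraction of myotube length covered by CD8 staining, as a percentage."""
    return 100.0 * stain_extent_um / myotube_length_um

# Mean values reported in the text.
surface = percent_coverage(416, 420)       # surface CD8: essentially full coverage (~99%)
intracellular = percent_coverage(68, 365)  # intracellular CD8: restricted region (~19%)
```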
The pulse-chase metabolic labeling experiments deserve special comment because such experiments have not been reported previously for CD8. The protein is synthesized in C2-CD8 cells as a 24-kD precursor, and is subsequently processed, via a 27-kD intermediate, to a final product of 33 kD (Fig. 3). As the final product has properties that are similar to those reported for CD8 produced by lymphocytes, it is likely that the pathway in lymphocytes is similar to that seen in C2-CD8 cells. Previous experiments based on enzymatic and chemical cleavage of the final product in lymphocytes have suggested that carbohydrates contribute ~2 kD to the molecular mass of the mature protein (Snow et al., 1985). Our results are in agreement with the values of 9-10 kD suggested by in vitro translation experiments (Littman et al., 1985). The pulse-chase experiments indicate that the final steps of intracellular processing in C2-CD8 cells occur within 15 min, presumably in the Golgi apparatus, and that CD8 first appears on the surface within 30 min of synthesis. Surface CD8, which is assembled into dimers and larger aggregates, is not removed rapidly, but is relatively stable. A fraction of the newly synthesized CD8 apparently is not transported to the surface, but remains in an intracellular compartment, inaccessible to trypsin. This compartment could correspond to the "hidden pool" that has been seen in studies of the AChR (Devreotes et al., 1977). The finding that intracellular CD8 is near its source nucleus is consistent with previous results from our own and other laboratories showing that proteins targeted to intracellular structures are distributed in a local region near the nucleus from which the mRNA originates (Miller et al., 1988; Pavlath et al., 1989; Ralston and Hall, 1989; Rotundo, 1989). This region is not confined to the area of a single nucleus, but encompasses that of several nuclei, so that compartments associated with neighboring nuclei may be partially shared.
Intracellular CD8, for example, extended on average ~70 µm from its source nucleus, and was almost always associated with several nuclei. Although these nuclei were often clustered, physical contiguity or close juxtaposition to the source nucleus was not required (see Fig. 8). In this respect our results may differ from those of Pavlath et al. (1989). The range of intracellular CD8 staining places an upper limit on the migration of the CD8 mRNA from its source within the muscle fiber. This limit is in general agreement with a previous, similar estimate that we have made for the mRNA encoding a protein targeted to the nucleus (Ralston and Hall, 1989). CD8 mRNA could have a more restricted distribution than that of the intracellular protein if there were exchange between the compartments associated with each nucleus at the level of the endoplasmic reticulum and/or the Golgi apparatus. Exchange of membrane proteins at the level of the Golgi has been shown to occur both in vivo and in vitro (Rothman et al., 1984). In contrast to its restricted intracellular location, CD8 occupies the entire surface of hybrid myotubes. Although inhomogeneities in its distribution were observed, these bore little relation to the position of the source nucleus. Interestingly, CD8 sometimes formed clusters; these were always associated with clusters of AChR, and possibly represent nonspecific trapping of CD8. The widespread distribution of CD8 in the membrane presumably results from diffusion of the protein from its local site of membrane insertion (Frye and Edidin, 1970). Nonclustered acetylcholine receptors are able to diffuse in the muscle membrane (Axelrod et al., 1976; Poo, 1982; Stya and Axelrod, 1983), and it is likely that CD8 does as well. In lymphocytes, CD8 is uniformly distributed on the surface and freely redistributes in the presence of antibodies (A. Kupfer, personal communication).
Our results are thus consistent with the idea that there are no intrinsic barriers to diffusion in the myotube membrane. The most prominent membrane protein that is nonuniformly distributed in myotubes is the AChR, which is concentrated in patches on the myotube surface, both in aneural cultures and at sites of nerve-muscle contact in co-cultures (Schuetze and Role, 1987). Recent experiments suggest that sites of nerve-muscle contact may also be sites of preferential synthesis and insertion of the AChR (Role et al., 1985; Merlie and Sanes, 1985; Fontaine et al., 1988; Fontaine and Changeux, 1989). Our results suggest that retention of the AChR near these sites is not simply the result of local insertion, but must involve specific mechanisms, such as attachment to the extracellular matrix or cytoskeleton, to prevent its dispersion. In contrast to the results reported here, Pavlath et al. (1989) have reported that the antigen recognized by 5.1H11, a monoclonal antibody that reacts with a surface protein of human cells (Walsh and Ritter, 1981; Hurko and Walsh, 1983; Walsh et al., 1982), is in some cases retained near the source nucleus in interspecific hybrids. The antigen recognized by 5.1H11 has recently been reported to be human N-CAM (Walsh et al., 1989). Because the antibody recognizes secreted and glycosyl-phosphatidylinositol-linked forms of N-CAM, as well as the transmembrane form, the significance of the different results found in the two studies is unclear. The restricted distribution of human N-CAM in interspecific hybrids could arise because of its interaction with extracellular matrix or cytoskeleton. Our experiments and those of others suggest the following model for the localization of proteins in multinucleated muscle cells. Both soluble and membrane proteins are synthesized and processed in an area closely surrounding their source nucleus.
Soluble proteins are then free to diffuse through the cytoplasm (Mintz and Baker, 1967; Ralston and Hall, 1989), unless they are targeted to a subcellular organelle such as the nucleus, mitochondria, or lysosomes, or to a local macromolecular assembly such as the myofibrils (Pavlath et al., 1989) or the cytoskeleton. Their range, in these cases, will reflect the competition between the rates of diffusion and of local uptake. The situation is similar for membrane proteins: after insertion into the plasma membrane near their source nucleus, they are free to diffuse, unless they become associated with cytoskeletal or extracellular matrix elements. The range of a membrane protein will thus, like that of a soluble protein, reflect a competition between the kinetics of diffusion and of local entrapment. Understanding the contribution of an individual nucleus to the intracellular and surface organization of muscle cells may be important in several contexts. Recent experiments on the distribution of AChR subunit mRNA show surprising heterogeneity among nuclei even in uninnervated or denervated cells in which AChR is evenly distributed on the surface (Harris et al., 1989; Fontaine and Changeux, 1989). These results raise the possibility that other uniformly distributed proteins in muscle cells are derived from only a subset of nuclei scattered throughout the muscle fiber that express mRNA for their synthesis. Finally, knowing the range over which the products of a single nucleus extend will have obvious importance for attempts to obtain phenotypic rescue of diseased muscle fibers by fusing normal cells into them (Partridge et al., 1989).
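The closing argument — that a membrane protein's range reflects competition between lateral diffusion and local entrapment — can be made concrete with the classic two-dimensional random-walk relation <x²> = 4Dt. A rough sketch (the diffusion coefficient here is an assumed order-of-magnitude value for mobile membrane proteins, not a measurement from this paper):

```python
def diffusion_time_h(distance_um, d_um2_per_s):
    """Hours for 2-D lateral diffusion over a given RMS distance, via <x^2> = 4Dt."""
    return distance_um**2 / (4.0 * d_um2_per_s) / 3600.0

# With an assumed D of ~0.1 um^2/s, spreading over half a ~400-um myotube
# takes on the order of a day, compatible with an unrestricted surface
# distribution developing over the lifetime of a myotube.
hours = diffusion_time_h(200.0, 0.1)
```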
2014-10-01T00:00:00.000Z
1989-11-01T00:00:00.000Z
{ "year": 1989, "sha1": "d8c4c87b1b68a14d7fd941a506c9a23654d12a05", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/109/5/2345/1058651/2345.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "d8c4c87b1b68a14d7fd941a506c9a23654d12a05", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
219156509
pes2o/s2orc
v3-fos-license
Dynamics of vitamin A uptake, storage, and utilization in vocal fold mucosa

Objective: Extrahepatic vitamin A is housed within organ-specific stellate cells that support local tissue function. These cells have been reported in the vocal fold mucosa (VFM) of the larynx; however, it is unknown how vitamin A reaches and is disseminated among VFM target cells, how VFM storage and utilization vary as a function of total body stores, and how these parameters change in the context of pathology. Therefore, in this study, we investigated fundamental VFM vitamin A uptake and metabolism.

Methods: Using cadaveric tissue and serum from human donors representing the full continuum of clinical vitamin A status, we established a concentration range and analyzed the impact of biologic and clinical covariates on VFM vitamin A. We additionally conducted immunodetection of vitamin A-associated markers and pharmacokinetic profiling of orally dosed α-retinyl ester (a chylomicron tracer) in rats.

Results: Serum vitamin A was a significant predictor of human VFM concentrations, suggesting that VFM stores may be rapidly metabolized in situ and replenished from the circulatory pool. On a vitamin A-sufficient background, dosed α-vitamin A was detected in rat VFM in both ester and alcohol forms, showing that, in addition to plasma retinol and local stellate cell stores, VFM can access and process postprandial retinyl esters from circulating chylomicra. Both α forms were rapidly depleted, confirming the high metabolic demand for vitamin A within VFM.

Conclusion: This thorough physiological analysis validates VFM as an extrahepatic vitamin A repository and characterizes its unique uptake, storage, and utilization phenotype.

INTRODUCTION

Vitamin A is an essential dietary molecule. It underpins vision (as the retinal chromophore) and is critical for an array of cellular functions, including proliferation, differentiation, and morphogenesis [1,2].
Most vitamin A is stored as retinyl esters in hepatic stellate cells and released to the circulatory pool as retinol-binding protein 4 (RBP4)-bound retinol for transport to extrahepatic tissues [3,4]. Alternatively, retinyl esters can be transported to extrahepatic sites postprandially by RBP4-independent chylomicra [5,6]. The relative contribution of RBP4- and chylomicron-mediated transport to extrahepatic target cells varies as a function of the target organ and total body vitamin A status [5,7-9]. Outside of the liver, vitamin A stores have been identified in the stellate cells of various extrahepatic tissues, such as the pancreas, kidneys, spleen, lungs, and larynx [3,4]. These local tissue repositories provide a readily available source of vitamin A to resident cells with high metabolic needs. In the vocal fold mucosa (VFM) of the larynx, VF stellate cells housed within discrete anatomic niches called the macula flavae (MF) store vitamin A [10,11], whereas nearby VF epithelial cells have no known storage capacity but are highly responsive to vitamin A bioavailability [12-14]. Vitamin A deficiency leads to local depletion from VF stellate cells [12], epithelial hyperkeratosis [13], and, if the deficiency occurs during embryogenesis, profound laryngotracheal malformation [14]. Despite the evidence of vitamin A's importance to VF stellate and epithelial cell biology and its relevance to clinical disorders [15], there are no physiological studies of vitamin A uptake and utilization within the larynx. Such data are needed to better understand how vitamin A reaches VFM target cells, how VFM storage and utilization vary as a function of total body vitamin A stores, and how these parameters change in the context of pathology.
In this study, we addressed these knowledge gaps using cadaveric tissue and serum from human donors representing the full continuum of clinical vitamin A status (deficient through hypervitaminotic), vitamin A-sufficient and -deficient in vivo rat models, and pharmacokinetic profiling of the α form of vitamin A, which cannot bind RBP4. We established a vitamin A concentration range for VFM; explored the impact of biologic and clinical covariates on VFM storage; compared vitamin A-specific uptake, processing, and utilization markers across VF cell subpopulations; and tested the capacity of VFM to uptake postprandial vitamin A directly from chylomicra. This cross-species phenotypic characterization provides a foundation for future research into vitamin A biology and clinical disorders of the larynx.

2.1. Human tissue procurement

Human biospecimens were obtained with approval of the University of Wisconsin Health Sciences Institutional Review Board; specimens intended for vitamin A analyses were procured by the National Disease Research Interchange. Whole larynges, liver biopsies (5 × 5 × 10 cm), and blood sera (5 mL, isolated from whole blood via gel separation and centrifugation) were harvested from 26 cadavers (12 males, 14 females, age 49-101 y; Figure 1B) <16 h postmortem. Two donors (1 male and 1 female) were identified as Hispanic or Latino, White; 24 donors (11 males and 13 females) were identified as non-Hispanic or Latino, White. Samples were snap-frozen in liquid N2, transported to our laboratory on dry ice, and stored at -80 °C until use. An additional 8 human larynges were procured from autopsy cadavers (6 males, 2 females, age 40-68 y) <36 h postmortem and processed for immunoblotting (n = 5) and histology and immunohistochemistry (n = 3). The donors had no history of chemoradiation to the head and neck region, had not undergone prolonged endotracheal intubation or ventilation prior to death, were not septic, and had negative infectious disease serology.
All of the donors had a negative history for laryngeal disease; all of the larynges were considered normal at autopsy and during tissue microdissection. One donor had a positive history of liver cirrhosis. The causes of death were cardiopulmonary events in most cases, esophageal cancer sequelae in one case, granulomatosis with polyangiitis sequelae (but no laryngeal involvement) in one case, dementia sequelae in one case, and unknown in two cases. Liver and serum data from this cohort were included in a previously reported analysis of the relationship between serum retinyl esters and total liver vitamin A reserves [16]; this prior analysis included one additional cadaver from whom we procured liver and serum but no larynx.

2.2. Compound synthesis

α-Retinyl acetate was synthesized using a previously described method for synthesizing 13C-retinyl acetate [17] with the following modifications: α-ionone (Sigma Aldrich) was used in place of β-ionone as the starting reagent, and 13C was not added. The synthesized α-retinyl acetate was purified (>95%) on 8% water-deactivated neutral Al2O3 using hexanes and diethyl ether; purity was confirmed by thin-layer chromatography, ultraviolet (UV)-visible spectroscopy, and high-performance liquid chromatography (HPLC) with photodiode array detection.

2.3. Animals, diet, and compound dosing

Animal experiments were conducted in accordance with the Public Health Service Policy on Humane Care and Use of Laboratory Animals and the Animal Welfare Act (7 U.S.C. et seq.); the protocols were approved by the University of Wisconsin School of Medicine and Public Health Animal Care and Use Committee. We used a rat model as rats have documented vitamin A-storing stellate cells in the anterior and posterior MF (aMF and pMF, respectively) [11] and tolerate oral dosing of α-retinyl acetate [18]. The rats were housed in a temperature- and humidity-controlled environment with a 12-h light-dark cycle.
Aspen bedding was used as it absorbs moisture, eliminates odor, and has low nutritional value (maize cobs, another option, would have interfered with our bioassays due to kernel contamination). The rats were placed on a vitamin A-free purified diet ad libitum to attenuate their preexisting hypersupplemented status and obtain the intended vitamin A sufficiency or deficiency targets. The vitamin A-deficient diet (TD.04175; Harlan-Teklad) contained the following (in g/kg): casein (200); DL-methionine (3); sucrose (280); maize starch (215); maltodextrin (150); cellulose (50); soybean oil (55); mineral mix AIN-93G (TD.94046) (35); calcium phosphate (3.2); vitamin mix without added A, D, E, and choline (TD.83171) (5); vitamin D3 (0.0044); vitamin E (0.242); choline dihydrogen citrate (3.5); and tert-butylhydroquinone (0.01). In our preliminary experiment, 3-week-old male weanling Sprague Dawley rats (n = 75; Charles River) were vitamin A-depleted for 3 weeks to obtain vitamin A deficiency. Next, the rats received a single 1 mg (3.5 μmol) oral dose of α-retinyl acetate in cottonseed oil vehicle (n = 70); the control rats (n = 5) received no dose. The rats were euthanized at 0 (control), 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 24, 48, 96, 168, and 336 h post-dose; blood, liver, and kidneys were collected at each time point. Tissue wet weights were recorded. In our primary experiment, 5-week-old male Fischer 344 rats (n = 65; Charles River) were vitamin A-depleted for 3 weeks to obtain a marginal vitamin A status. Next, the rats received a single 2 mg (7 μmol) oral dose of α-retinyl acetate in cottonseed oil vehicle (n = 55); the control rats were dosed with vehicle only (n = 5) or received no dose (n = 5). Approximately 250 μL of blood was collected from the saphenous vein of 5–6 α-retinyl acetate-dosed rats per time point at 0 (no dose control), 1, 3, 5, 9, 11, and 24 h post-dose.
We sampled different animals at each time point as serial blood draws would have caused unacceptable blood volume loss. The rats were euthanized at 7 (n = 30) or 72 h (n = 25) post-dose; the vehicle control rats were euthanized at 7 h post-dose. Blood, larynx, liver, lungs, kidneys, and spleen were collected at each time point. Tissue wet weights were recorded. Human serum samples were processed as 500 μL aliquots; 1.25× volume of ethanol was added to denature proteins. The internal standard C-23 β-apo-carotenol was added to determine extraction efficiencies. Samples were extracted three times with 0.75 mL hexanes; supernatant fractions were pooled, dried under N2, and reconstituted in 100 μL methanol:dichloroethane (75:25, v/v). Two μL was injected into the ultra-performance liquid chromatography (UPLC) instrument.

Rat
Rat serum samples were processed as previously described for human serum with the following modifications. In the preliminary experiment, extractions were reconstituted in 100 μL methanol:dichloroethane (50:50, v/v) and 50 μL was injected into the HPLC instrument. In the primary experiment, the starting volume of serum from all of the saphenous vein draws was 35–100 μL, extractions were reconstituted in 30 μL methanol:dichloroethane (75:25, v/v), and 2 μL was injected into the UPLC instrument.

2.5. Tissue processing for liquid chromatography
2.5.1. Human
Human larynges and liver samples were transferred to room temperature (RT) for 60 min and then dissected. Bilateral VFM were microdissected from each larynx with retention of both aMF and pMF and then weighed. Each liver sample was dissected and weighed to obtain a 1 g sample for vitamin A analyses; surplus tissue was retained for protein isolation and immunoblotting. Bilateral VFM pairs were immersed in 1 mL of ethanol and 20 μL of C-23 β-apo-carotenol was added.
The tissue was minced with scissors, 500 μL of phosphate-buffered saline (PBS) was added, and the entire solution was homogenized (Tissue-Tearor 985370; Biospec). Each sample was transferred to a glass tube and the homogenization tube was rinsed twice with 1 mL of ethanol and once with 2 mL of hexanes; each rinse solution was added to the sample tube followed by an additional 1 mL of PBS. The non-polar hexane layer was then transferred to a new tube and the extraction was repeated twice, each time using 2 mL of hexanes. Finally, all of the extraction fractions were pooled, dried under N2, and reconstituted in 20 μL methanol:dichloroethane (50:50, v/v). Four μL was injected into the UPLC instrument. Human liver tissue (1 g) was ground with 4–5 g of sodium sulfate in a mortar and pestle; 500 μL of C-23 β-apo-carotenol was added. The samples were extracted repeatedly with dichloromethane through a Whatman #1 filter (GE Healthcare) to a 50 mL volume. A 5 mL aliquot was dried under N2 and reconstituted in 300 μL methanol:dichloroethane (75:25, v/v). One μL was injected into the UPLC instrument.

2.5.2. Rat
Rat larynges were microdissected and each VFM (with retention of both aMF and pMF) was removed and then weighed. Prior studies demonstrated that the accurate quantification of vitamin A in rat VFM requires pooling across animals [11]; based on pilot UPLC data, we pooled bilateral VFM from sets of 5 larynges per biological replicate. Pooled rat VFM samples were processed as previously described for human VFM with the following modifications: the initial ethanol volume was 500 μL and post-homogenization rinses were performed twice with 500 μL ethanol and once with 1 mL hexanes. Rat liver (0.5 g), lung (whole), kidney (1 g), and spleen (whole) samples were processed as previously described for human liver with the following modifications. Liver, lung, and spleen samples were extracted to a 50 mL final volume and kidney samples were extracted to a 25 mL final volume.
For the liver and kidney samples, a 5 mL aliquot was dried under N2; for the lung and spleen, the entire sample was dried with a rotary evaporator (Rotavapor R-114; Buchi) coupled with a circulation chiller (WK230; Lauda-Brinkman), re-dissolved three times in 1 mL of dichloromethane, and then dried under N2. The extractions were reconstituted in 100 μL (lung), 200 μL (liver), or 250 μL (kidney and spleen) of methanol:dichloroethane (75:25, v/v). In the preliminary experiment, 50 μL of liver or 25 μL of kidney reconstituted extract was injected into the HPLC instrument; in the primary experiment, 2 μL (all of the tissues) was injected into the UPLC instrument.

Liquid chromatography
The rat preliminary experimental samples were analyzed as follows. The serum was analyzed using an isocratic HPLC system comprising a guard column, a Waters Symmetry C18 column (3.5 μm, 4.6 × 75 mm), a Waters Resolve C18 column (5 μm, 3.9 × 300 mm), a Rheodyne injector, a Shimadzu SPD-10A UV-visible spectroscopy detector, a Waters Delta 600 pump and controller, and a Shimadzu C-R7A Plus data processor. Tissue was analyzed using the same columns and a Waters 1525 binary HPLC pump, a Waters 717 autosampler, and a Waters 996 photodiode array detector. The mobile phase was acetonitrile:water (87.5:12.5, v/v); 10 mM ammonium acetate was used as a modifier, and the flow rate was 0.7 mL/min. All of the human cadaver and rat primary experimental samples were analyzed using a Waters Acquity UPLC HSS C18 1.8-μm VanGuard pre-column in conjunction with a Waters Acquity UPLC HSS C18 column (1.8 μm, 2.1 × 150 mm). The method utilized two solvent mixes run in a 29-min gradient: solvent A was acetonitrile:water:propanol (70:25:5, v/v/v) with 10 mM ammonium acetate; solvent B was methanol:propanol (75:25, v/v).
The gradient began with 100% solvent A for 7 min, a linear transition to 5% solvent A over 4 min, a further transition to 1% solvent A over 12 min, a reversal to 100% solvent A over 2 min, and maintenance of 100% solvent A for the final 4 min. The column temperature was 32 °C, and the flow rate was 0.4 mL/min. HPLC and UPLC detection was set at 311 nm for α-retinol and α-retinyl ester and 325 nm for retinol and retinyl ester (Figure 3B shows representative spectra; Figures 3D, 4B, and S2A show representative chromatograms). Retinyl oleate and palmitate coeluted in this system (as did α-retinyl oleate and palmitate). Concentrations were calculated using curves generated via analysis of HPLC-purified standards.

Histology and immunohistochemistry
Human larynges (n = 3) were microdissected and each VF (VFM including aMF and pMF, elastic and hyaline regions of the arytenoid cartilage [eAC and hAC, respectively], and thyroarytenoid muscle [TA]) was removed en bloc. Tissue was fixed in 4% paraformaldehyde for 24 h, dehydrated in 70% ethanol, and 5-μm-thick paraffin sections were prepared. Sections intended for morphological assessment were stained with hematoxylin and eosin (H&E). Sections intended for immunostaining were processed for antigen retrieval in a decloaking chamber (BioCare Medical) using 10 mM citrate buffer (pH 6.0), permeabilized using 0.2% Triton X-100 for 10 min, blocked with 10% bovine serum albumin in PBS for 60 min, and incubated with primary antibodies at 4 °C overnight. For horseradish peroxidase (HRP)-based detection, endogenous peroxidase was quenched using 3% H2O2 in PBS. ImmPRESS anti-mouse and anti-rabbit immunoglobulin G (IgG) HRP polymers were used for secondary detection (30 min incubation time) and an ImmPACT DAB kit was used to develop the signal according to the manufacturer's instructions (all of the reagents were obtained from Vector Labs). Sections were counterstained with hematoxylin, dehydrated, cleared, and cover-slipped.
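As a cross-check of the 29-min solvent program described above, the percentage of solvent A at any time point can be sketched as a piecewise-linear function of the stated breakpoints (the function name and breakpoint list are illustrative, not from the original method file):

```python
def solvent_a_percent(t):
    """Percent solvent A at time t (min) in the 29-min gradient:
    100% for 7 min, linear to 5% over 4 min, linear to 1% over 12 min,
    back to 100% over 2 min, then held for the final 4 min."""
    # (time in min, % solvent A) breakpoints taken from the text
    pts = [(0, 100), (7, 100), (11, 5), (23, 1), (25, 100), (29, 100)]
    if t <= 0:
        return 100.0
    for (t0, a0), (t1, a1) in zip(pts, pts[1:]):
        if t <= t1:
            # linear interpolation within the current segment
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0)
    return 100.0  # after the end of the program, hold at 100%
```

For example, halfway through the first ramp (t = 9 min) the mix is 52.5% solvent A, and the column returns to 100% A by t = 25 min for re-equilibration.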
Rat larynges and livers (n = 3 per organ) were harvested en bloc, dehydrated in 20% sucrose, embedded in optimal cutting temperature compound (Tissue-Tek, Sakura Finetek), and snap-frozen in liquid N2. Five-μm-thick cryosections were prepared (for larynges, in the axial plane; for livers, in any orientation), fixed using either 4% paraformaldehyde at RT, methanol at 4 °C, or acetone at −20 °C, and air dried at RT. The sections were washed with PBS and blocked with 5% goat serum and 0.02% Tween 20 in PBS at RT for 30 min. The sections were sequentially incubated with primary antibodies at 4 °C overnight, relevant secondary antibodies (Alexa Fluor 488- or 594-conjugated, goat anti-mouse, and goat anti-rabbit IgG [1:400; A-11001, A-11008, A-11005, and A-11012, Invitrogen]) at RT for 1 h, and nuclear dye DAPI (1 μg/mL; Sigma–Aldrich) at RT for 5 min. A subset of slides were additionally incubated with lipophilic dye BODIPY 505/515 (5 μM; D-3921, Thermo Fisher) prior to DAPI. The slides were covered with antifade mounting medium (Sigma–Aldrich) and cover-slipped. Imaging was performed using a Nikon Ti-S/L100 inverted microscope connected to DS-Qi2 (Nikon) and Infinity 1-1 (Lumenera) digital cameras; images were captured with consistent exposure settings. Negative control sections stained without either the primary or secondary antibody showed no immunoreactivity. The primary antibodies

Immunoblotting
Human larynges (n = 5) were microdissected and the following VF subsites were harvested and processed separately: mid-membranous lamina propria and epithelium (LP + Epi, not including MF); aMF and pMF (pooled within each larynx); TA; and hAC. Liver tissue (n = 5) was obtained from the primary cadaver cohort. The samples (40–80 mg) were processed for protein isolation using a Qproteome Mammalian Protein Prep Kit (Qiagen) according to the manufacturer's instructions.
Homogenization in lysis buffer was performed using a rotor-stator unit (TissueRuptor; Qiagen) for 30 s on ice followed by a probe ultrasonicator (300 V/T; Biologics) for 3 min (20 s on-off cycle) on ice. The protein concentration was measured using a DC Protein Assay (Bio-Rad) according to the manufacturer's instructions. Protein isolates were incubated in Laemmli sample buffer containing 2-mercaptoethanol (Bio-Rad) at 95 °C for 5 min. Polyacrylamide gel electrophoresis was performed using 4–15% precast gels (mini-PROTEAN TGX, Bio-Rad) with 10 μg of total protein load per lane. Following transfer, polyvinylidene fluoride membranes were cut at 25 kDa and treated individually with 5% non-fat dry milk in Tris-buffered saline containing 0.05% Tween 20 at RT for 1 h, then incubated with primary antibodies at 4 °C overnight. Blots were incubated with relevant HRP-conjugated secondary antibodies at RT for 1 h and detected using enhanced chemiluminescence substrate (Pierce). Imaging was performed using a BioSpectrum 815 system (Ultraviolet Products).

Statistics
The total vitamin A concentration was defined as the sum of all of the (β form) retinol and retinyl ester concentrations; the total α-vitamin A concentration was defined as the sum of all of the α-retinol and α-retinyl ester concentrations. Esterification was calculated by dividing the retinyl ester (or α-retinyl ester) concentration by the total vitamin A (or total α-vitamin A) concentration and converting to a percentage. Thresholds for vitamin A deficiency and hypervitaminosis A were defined as 0.1 and 1.0 μmol/g total liver vitamin A, respectively, based on recent guidelines [20]. Data intended for statistical testing were first evaluated for normality and equality of variance using visual inspection of raw data plots, Levene's test, and the folded F test.
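The derived quantities defined in the Statistics section (total vitamin A, esterification percentage, and the guideline liver-reserve classification) can be expressed as a short sketch; function names are illustrative, and the 0.1/1.0 cut-offs follow the guideline thresholds cited above:

```python
def total_vitamin_a(retinol, retinyl_esters):
    """Total vitamin A = retinol + sum of all retinyl ester species
    (all inputs in the same concentration units)."""
    return retinol + sum(retinyl_esters)

def esterification_percent(retinol, retinyl_esters):
    """Retinyl ester fraction of total vitamin A, as a percentage."""
    total = total_vitamin_a(retinol, retinyl_esters)
    return 100.0 * sum(retinyl_esters) / total

def liver_status(total_umol_per_g):
    """Classify liver vitamin A status by total reserve (umol/g liver),
    using the guideline thresholds of 0.1 and 1.0."""
    if total_umol_per_g < 0.1:
        return "deficient"
    if total_umol_per_g > 1.0:
        return "hypervitaminotic"
    return "sufficient"
```

For example, a tissue with 0.3 units of retinol and retinyl ester species of 0.5 and 0.2 units is 70% esterified, and a liver reserve of 2.0 μmol/g would be classified as hypervitaminotic.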
Human vitamin A data were analyzed using analysis of covariance (ANCOVA), with the VFM concentration as the dependent variable; age, sex, body mass index, serum concentration, and liver concentration were used as covariates. Additional relationships between VFM, serum, and liver concentrations were analyzed using Pearson's r. Western blotting densitometric data were analyzed using one-way analysis of variance (ANOVA). Rat vitamin A data were analyzed using a t test in cases of two experimental groups or one-way ANOVA in cases of more than two experimental groups. All of the data analyzed using ANCOVA and ANOVA met normality and equality of variance assumptions; where indicated by the data, t tests were conducted under an unequal variance assumption (the Satterthwaite method). In all of the ANOVA models, if the F test revealed a significant difference, planned pairwise comparisons were performed using Fisher's protected least significant difference method. A type I error rate of 0.01 was used for all of the statistical testing; all of the P values were two-sided.

Vitamin A storage in human VFM
The current understanding of vitamin A storage in human VFM is limited to intracellular staining of VF stellate cells with gold chloride [10,11], detection of vitamin A autofluorescence [10], and a single report on retinol and retinyl ester concentrations in three donors [11]. Despite its importance to VF biology, it is unknown how vitamin A storage in VFM corresponds to that of circulating plasma or liver, where most body reserves are housed [3]. To obtain baseline data, we procured VFM, liver, and serum from 26 human cadavers (Figure 1A) and assayed vitamin A forms and concentrations using UPLC. We used cadaveric donors because elective VFM biopsy risks iatrogenic damage in healthy individuals and analysis of liver tissue is the gold standard for assessing vitamin A status.
The samples were obtained <16 h postmortem; the donors were mid- to late-life adult males (n = 12) and females (n = 14) with a broad range of body mass indices (Figure 1B). Rather than isolating the MF, we analyzed intact bilateral VFM pairs (with retention of aMF and pMF) to ensure that we captured all of the vitamin A-associated cells and allow comparison with prior data from humans and other species [11]. The donors exhibited a wide range of total liver vitamin A reserves (0.001–3.38 μmol/g; Figure 1C); the total vitamin A concentrations in the VFM (0.086–0.821 nmol/g) and serum (0.065–3.15 μmol/L) were more tightly clustered across individuals. Based on recent guidelines [20], 12 donors were classified as vitamin A sufficient (0.1–1.0 μmol/g liver), 6 were vitamin A deficient (<0.1 μmol/g liver), and 8 were hypervitaminotic (>1.0 μmol/g liver). The mean concentration in the VFM was 0.06% of that in the liver (0.416 ± 0.218 nmol/g vs 0.743 ± 0.971 μmol/g [mean ± SD], respectively) and higher than in previously reported data collected using HPLC [11]. Liver vitamin A was primarily detected as retinyl esters (consistent with cytoplasmic storage in hepatic stellate cells) [3], serum vitamin A was primarily detected as retinol (consistent with RBP4-mediated transport) [1], and VFM vitamin A was variably esterified (0–70.9%; Figure 1C). We used ANCOVA to build a statistical model of the total vitamin A concentration in the VFM (omnibus P = 0.0010; Figure 1D). The serum concentration was the only significant predictor of the VFM concentration (P = 0.0008); the donor age, sex, body mass index, and liver concentrations were non-significant variables (P = 0.017–0.907). Regression analyses of the retinol and retinyl ester concentrations showed the strongest linear relationships between VFM and serum retinol (r = 0.763; P < 0.0001) and VFM and liver retinyl esters (r = 0.726; P < 0.0001).
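The reported VFM-to-liver ratio follows directly from a unit conversion on the two quoted means (VFM in nmol/g, liver in μmol/g):

```python
vfm_nmol_per_g = 0.416     # mean total vitamin A in VFM (nmol/g)
liver_umol_per_g = 0.743   # mean total vitamin A in liver (umol/g)

# Convert liver to nmol/g (1 umol = 1000 nmol), then take the percentage ratio.
ratio_percent = 100.0 * vfm_nmol_per_g / (liver_umol_per_g * 1000.0)
print(round(ratio_percent, 2))  # prints 0.06, matching the value quoted above
```

The roughly three-orders-of-magnitude gap reflects the liver's role as the primary storage depot, with VFM holding a small, locally managed reserve.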
These findings are consistent with a physiologic relationship between vitamin A uptake and utilization in VFM and that available from circulating plasma and liver reserves. To corroborate our UPLC data, we immunoassayed human VF for stellate cell, vitamin A uptake, and vitamin A utilization markers (Figure 2 and Fig. S1). The stellate cell marker glial fibrillary acidic protein (GFAP) [21,22] was predominantly expressed by cells in the aMF and pMF [11,23]; posteriorly, it exhibited a gradient of reduced immunosignal as the pMF transitioned into the eAC and then hAC. Stimulated by retinoic acid 6 (STRA6), the RBP4 cell-surface receptor and retinol transmembrane transporter [24], was strongly expressed by stellate cells, fibroblasts, and epithelial cells; in contrast, cellular retinol-binding protein 1 (RBP1), which accepts retinol from STRA6 and donates it as a substrate for esterification or oxidation [1,25], was weakly expressed. Lipoprotein lipase (LPL), a multifunctional enzyme that facilitates hydrolysis and cellular uptake of retinyl esters from chylomicra [26,27], was consistently expressed across the VF regions and cell types. Retinoic acid receptor-α (RARA) was expressed by stellate cells in the aMF and pMF, chondrocytes in the hAC, and luminal epithelial cells. In sum, these data confirm that human VFM contains distinct cell populations with the machinery to uptake, process, and metabolize retinol and retinyl esters from circulation.

Pharmacokinetics of single-dose α-retinyl ester in vitamin A-deficient rats
To support our human characterization work with physiologic data, we used an in vivo rat model to assess vitamin A transport and pharmacokinetics. As vitamin A trafficking to VFM might involve both RBP4-dependent and -independent mechanisms, we used an α-retinyl ester dosing paradigm to test whether VFM can receive postprandial retinyl esters directly from chylomicra independent of RBP4.
This approach takes advantage of the inability of α-retinol (distinguished from retinol [also known as β-retinol] by a shift of the cyclohexene ring double bond from the 5,6 to the 4,5 position; Figure 3A,B) to bind to RBP4 [28], meaning that while it can accumulate in the liver, it remains sequestered and cannot reenter the circulation for RBP4-mediated transport to extrahepatic organs [7,29]. Therefore, detection of α-vitamin A in extrahepatic tissue provides evidence of postprandial trafficking by chylomicra. Before assaying VFM uptake, we piloted the approach in serum, liver, and kidneys after dosing vitamin A-deficient Sprague Dawley rats with a single 1 mg (3.5 μmol) α-retinyl acetate bolus (Figure 3C). We selected the rat strain and deficiency model based on precedence in the vitamin A literature [18,30,31] and because we reasoned that initial detection of α forms would be more straightforward in animals with reduced total body vitamin A; we selected the α-retinyl acetate dose based on prior research showing no liver toxicity at 3.5 μmol even after repeated administration [18]. Kidney uptake was comparable for α-retinol and α-retinyl esters, with both concentrations peaking at 8 h post-dose. Finally, retinol and retinyl ester concentrations in the serum and liver varied across the monitoring period (Figure 3E and Fig. S2B), suggesting that, at least in the vitamin A-deficient condition, a single α-retinyl ester dose might impact the utilization of existing vitamin A stores.

Uptake and utilization of α-retinyl esters in vitamin A-sufficient rat VFM
Having successfully profiled the uptake and utilization of orally dosed α-retinyl ester in the serum, liver, and kidneys, we next evaluated single-dose pharmacokinetics in VFM (Figure 4A).
We used Fischer 344 strain rats to allow comparison with prior VFM data [11]; we kept the rats in vitamin A-sufficient status to better evaluate the impact of postprandial α-vitamin A delivery on utilization of existing vitamin A stores. We added a vehicle control, expanded the extrahepatic comparison tissues to include the lungs and spleen, and narrowed the monitoring period to 72 h post-dose. Finally, as rat VFM is a relatively small tissue (>0.2 mm³ [32,33]; ∼0.8 g ww in our dataset) with stellate cells restricted to the aMF and pMF, we optimized detection and quantification by doubling the α-retinyl acetate dose to 2 mg (7 μmol), using UPLC in place of HPLC, and pooling bilateral VFM from 5 larynges per biological replicate. Preliminary analysis confirmed the detection of all of the α forms of interest (Figure 4B). The serum α-retinyl ester concentration peaked at 3 h, approached baseline by 11 h, and was completely cleared from the circulation by 24 h post-dose (Figure 4C). We noted parallel but smaller spikes in the serum retinol and α-retinol concentrations, presumably due to the dose overwhelming the enterocyte esterification apparatus, leading to co-packaging of intestinal retinol and α-retinol with α-retinyl esters in chylomicra. The total α-vitamin A concentrations at 7 and 72 h were highest in the liver, followed by the lungs, kidneys, spleen, and VFM (Figure 4D); the concentrations in all of the extrahepatic tissues decreased between 7 and 72 h. VFM α-retinol and α-retinyl esters (0.931 ± 0.048 and 2.00 ± 0.284 nmol/g at 7 h, respectively) were nearly completely metabolized by 72 h (Figure 4E). The spleen and kidneys showed the most similar concentration profiles to that of VFM with the exception that the initial uptake (7 h) in both tissues favored α-retinol over α-retinyl esters.
In contrast to these fast-metabolizing tissues, the lungs retained the majority of absorbed α-vitamin A and the liver accumulated additional α-vitamin A through 72 h post-dose. Finally, we observed extrahepatic retinol uptake (VFM, kidneys, and lungs) and utilization (spleen and kidneys) at 7 and 72 h (Fig. S3), consistent with these tissues absorbing the α-retinyl ester dose alongside chylomicron-co-packaged or RBP4-bound retinol from the circulation.

Characterization of stellate cell- and vitamin A-associated markers in vitamin A-sufficient rat VF
We immunostained vitamin A-sufficient rat VF for stellate cell and vitamin A uptake, storage, and utilization markers. Consistent with human VF (Figure 2 and Fig. S1), GFAP was expressed by stellate cells in the aMF and pMF as well as chondrocytes in the eAC (Figure 5A). Most of the GFAP+ VF stellate cells (and neighboring GFAP+ eAC cells) strongly co-expressed STRA6 and RARA; a subset was GFAP+LPL+ (Figure 5B). As vitamin A reserves are stored as retinyl esters within cytoplasmic lipid droplets [1], we additionally probed tissues with the lipophilic dye boron-dipyrromethene (BODIPY) and assessed the distribution of the lipid droplet coating protein perilipin 1 (PLIN1). Most of the VF stellate cells were PLIN1+; a subset was BODIPY+. Assessment of additional VF regions showed STRA6 and RARA expression patterns (Figs. S4 and S5) that were comparable to those observed in human tissue (Fig. S1); LPL was weakly expressed in the TA and by a small subset of epithelial cells (Fig. S6); BODIPY was detected in the hAC and PLIN1 was weakly expressed by basal epithelial cells (Fig. S7). Overall, these immunostaining data corroborate our α-retinyl ester dosing results by confirming that, as in humans, rat VFM cells have the requisite machinery to uptake, process, store, and metabolize vitamin A irrespective of the (RBP4-dependent or -independent) delivery mechanism.
DISCUSSION
Using human cadavers and an in vivo rat system, we herein provide the first physiologic analysis of vitamin A uptake, storage, and utilization within VFM benchmarked to liver, serum, and other extrahepatic tissues. Our data confirm VFM as a bona fide extrahepatic vitamin A repository in the larynx, advance characterization of the VF stellate cell phenotype, and show general biologic concordance between humans and rats. Notably, approximately half (n = 14; 54%) of the humans in our cadaver cohort met the gold standard (total liver reserve) criteria for vitamin A deficiency or hypervitaminosis A [20]. While both conditions are assumed to exist in the US adult population [34], definitive liver biopsy-based assessment is rare in humans. The high prevalence estimates in our cohort may reflect inadequate micronutrient intake in certain individuals [35–37], overconsumption of fortified foods and supplements in others [38,39], and medical comorbidities and end-of-life interventions in some. Analysis of the complete dataset allowed us to build a statistical model of vitamin A utilization based on a wide range of biologically and clinically relevant concentrations. Serum vitamin A was the only significant predictor of the VFM concentration in the full model, suggesting that VFM stores may be rapidly metabolized in situ and replenished from the circulatory pool. Further, VFM vitamin A favored the retinol form but exhibited a range of esterification across individuals, consistent with the dynamic management of metabolic and storage needs. We corroborated these observations using immunodetection of vitamin A-associated markers and pharmacokinetic profiling of orally dosed α-retinyl ester in rats. Prior research showed that α-retinol has 40–50% of the biopotency of retinol [18]; in VFM, these molecules presumably support the same biologic functions.
On a vitamin A-sufficient background, dosed α-vitamin A was detected in rat VFM in both ester and alcohol forms, indicating postprandial trafficking by chylomicra and initial hydrolysis followed by some esterification. Both α forms were nearly fully depleted by 72 h, confirming the high metabolic demand for vitamin A within VFM. It is well-established that VFM vitamin A is stored locally by stellate cells in the aMF and pMF [10,11]. Our observation of cytoplasmic RARA in human and rat VF stellate cells was consistent with a prior report [12] and suggests that these vitamin A-storing cells may additionally metabolize retinoic acid. Importantly, while classically known as a nuclear receptor and transcription factor, RARA can localize to the cytoplasm in quiescent cells and undergo nuclear translocation when retinoic acid is present [40,41]. In fact, RARs appear to mediate the pleiotropic functions of retinoids in part by operating in multiple subcellular compartments [42–45]. The VF stellate cell phenotype is distinct from that of VF epithelial cells, which preferentially express nuclear RARB [13], do not retain retinyl esters for storage [11], and are highly sensitive to vitamin A bioavailability [13,14]. While our data further show that VF epithelial cells strongly and uniformly express STRA6, it is unclear whether vitamin A is supplied to the VF epithelium by neighboring stellate cells or the circulatory pool. By demonstrating uptake of dosed α-vitamin A from chylomicra, we show that, in addition to plasma retinol and the local VF stellate cell repository, VFM can directly access postprandial retinyl esters to meet metabolic demands.
The total VFM uptake under vitamin A-sufficient conditions presumably involves a combination of RBP4-dependent and -independent transport: future research could quantify the relative dependence on each mechanism by comparing α-retinyl ester kinetics with that of a retinoid tracer capable of binding RBP4 (for example, 3,4-didehydroretinyl ester [also known as vitamin A2]) [8,9,18]. Future research should also examine whether subpopulations of VFM cells uptake vitamin A via the same or different mechanisms, if they do so in a coordinated or independent fashion, and whether they use established or novel metabolic pathways for intracellular vitamin A processing.

[Figure legend fragment: single- and merged-channel images; the yellow arrowheads indicate GFAP+STRA6+, GFAP+LPL+, GFAP+RARA+, and BODIPY+PLIN+ stellate cells. Scale bar, 50 μm (low-magnification images) and 10 μm (high-magnification images).]

microdissections. C.R.D. performed HPLC of human tissue and serum. S.A.T. synthesized the α-retinyl acetate. K.N. and C.R.D. conducted the primary α-retinyl acetate experiment and C.R.D. performed UPLC. K.N. performed all of the histology, immunodetection assays, and microscopy. K.N. and N.V.W. analyzed the data and wrote the manuscript. All of the authors reviewed and approved the final version.
Serotonin Mediates Depression of Aggression After Acute and Chronic Social Defeat Stress in a Model Insect

In all animals, losers of a conflict against a conspecific exhibit reduced aggressiveness, often coupled with depression-like symptoms, particularly after multiple defeats. While serotonin (5HT) is involved, discovering its natural role in aggression and depression has proven elusive. We show how 5HT influences aggression in male crickets, before and after single and multiple defeats, using serotonergic drugs at dosages that had no obvious deleterious effect on general motility: the 5HT synthesis inhibitor alpha-methyltryptophan (AMTP); the 5HT2 receptor blocker ketanserin; methiothepin, which blocks 5HT receptor subtypes other than 5HT2; 5HT's precursor 5-hydroxytryptophan (5HTP); and the re-uptake inhibitor fluoxetine. Contrasting reports for other invertebrates, none of the drugs influenced aggression at the first encounter. However, the recovery of aggression after single defeat, which normally requires 3 h in crickets, was severely affected. Losers that received ketanserin or AMTP regained their aggressiveness sooner, whereas those that received fluoxetine, 5HTP, or methiothepin failed to recover within 3 h. Furthermore, compared to controls, which show long-term aggressive depression 24 h after 6 defeats at 1 h intervals, crickets that received AMTP or ketanserin regained their full aggressiveness and were thus more resilient to chronic defeat stress. In contrast, 5HTP- and fluoxetine-treated crickets showed long-term aggressive depression 24 h after only 2 defeats, and were thus more susceptible to defeat stress. We conclude that 5HT acts after social defeat via a 5HT2-like receptor to maintain depressed aggressiveness after defeat, and to promote the susceptibility to and establishment of long-term depression after chronic social defeat.
It is known that the decision to flee and the establishment of loser depression in crickets is controlled by nitric oxide (NO), whereas dopamine (DA), but not octopamine (OA), is necessary for recovery after defeat. Here we show that blocking NO synthesis, just like ketanserin, affords resilience to multiple defeat stress, whereas blocking DA receptors, but not OA receptors, increases susceptibility, just like fluoxetine. We discuss the possible interplay between 5HT, NO, DA, and OA in controlling aggression after defeat, as well as similarities and differences to findings in mammals and other invertebrate model systems.

INTRODUCTION
Aggression toward a conspecific is a widespread behavioral strategy in the Animal Kingdom adapted to secure resources and ensure survival at minimal cost (Stevenson, 2018). In addition to the physical dangers, losing a conflict (social defeat) can have enduring adverse behavioral costs, including suppressed aggressiveness (De Boer et al., 2016), often coupled with general depression-like symptoms, particularly after chronic social defeat (rodents: Hammels et al., 2015; Koolhaas et al., 2017; fish: Backstrom and Winberg, 2017; insects: Rose et al., 2017; Trannoy and Kravitz, 2017; crayfish: Bacque-Cazenave et al., 2017). Social defeat is thus currently viewed as a model for gaining insights into depression in humans (Laman-Maharg and Trainor, 2017), improved animal welfare (Toyoda, 2017) and behavioral syndromes underlying animal "personality" (Briffa et al., 2015). The proximate mechanisms underlying defeat-associated depression are not fully understood. Aggressive experience modifies numerous neurotransmitter systems (De Boer et al., 2016), and drugs that influence them have manifold effects on aggression. Among them, serotonin (5HT) has a complex relationship to aggression that depends on age, sex and social status, which reflects the intricacy of the 5HT system with its many receptor subtypes and widespread innervation (Carhart-Harris and Nutt, 2017).
Generally, however, 5HT precursors, re-uptake inhibitors, and 5HT receptor agonists typically reduce overt aggression in vertebrates including man (De Boer et al., 2016; Carhart-Harris and Nutt, 2017; Trainor et al., 2017). Serotonin is thus thought to dampen aggression by promoting withdrawal or terminating aggression (Olivier, 2015). However, 5HT drugs can also increase aggression, by decreasing submissiveness after social defeat (Morrison and Cooper, 2012; Bauer, 2015; Clinard et al., 2015; Olivier, 2015). Thus, 5HT is a potential mediator of loser depression, chronic defeat stress, and general stress (Hammels et al., 2015; Koolhaas et al., 2017). However, the extent to which this occurs normally, or only under extreme, pathological conditions, is questioned (Olivier, 2015). Notwithstanding remarkable similarities in the mechanisms underlying aggression in insects, mice, and man (Thomas et al., 2015), the role proposed for 5HT seems to differ. In invertebrates, 5HT, its precursor, 5HT1A agonists, and genetic activation of 5HT neurons can increase aggression and win chances while reducing the tendency to flee in crustaceans (Kravitz, 2000), fruit flies (Dierick and Greenspan, 2007; Johnson et al., 2009; Alekseyenko et al., 2010), and stalk-eyed flies (Bubak et al., 2014). Recently, though, blockade of 5HT receptors was reported to prohibit the acquisition of anxiety-like behavior after defeat in crayfish (Bacque-Cazenave et al., 2017). Furthermore, 5HT neurons in insects (Drosophila) also have inhibitory effects on behavior (Pooryasin and Fiala, 2015) and mediate stress-induced behavioral depression (Ries et al., 2017), but it is not known if 5HT influences post-defeat depression. Here we investigate how 5HT drugs affect aggression in crickets after single and multiple defeats and compare this with their action before losing in socially naive crickets. At present, the role of 5HT in the aggressive behavior of crickets is unclear (reviewed in Stevenson and Rillich, 2017). 
Inhibition of 5HT synthesis is claimed to reduce win chances (Dyakonova et al., 1999), but to have no clear effect on aggressiveness per se (Stevenson et al., 2000, 2005), whereas 5HT's precursor enhances some elements of cricket aggression (e.g., fight duration), but reduces others (e.g., attack frequency), without altering win chances (Dyakonova and Krushinsky, 2013). It has thus been suggested that "behavioral features of dominant male crickets are likely to be connected with the activation of the serotonergic system" whereas "a decrease in serotonergic activity may be functionally important for the control of loser behavior" (Dyakonova and Krushinsky, 2013). More recently, it was found in crickets that nitric oxide (NO) triggers the actual decision to flee and establishes subsequent loser depression (Rillich and Stevenson, 2017), whereas octopamine and dopamine (OA and DA; Stevenson et al., 2005; Rillich and Stevenson, 2014) promote recovery. In view of this, we also test how drugs that influence these neuromodulators influence aggression after chronic social defeat in comparison to serotonergic drugs. Our experiments provided evidence that 5HT acts primarily in crickets to maintain depressed aggressiveness in losers after defeat, and particularly so after multiple defeat, most likely as the result of interactions with NO and DA. Our work thus provokes new thought on the roles of 5HT and NO in controlling aggression in insects and mammals. Experimental Animals Mature, 2-3 week-old, adult male crickets, Gryllus bimaculatus, were taken from a breeding stock kept under standard conditions at Leipzig University (22-24 °C, relative humidity 40-60%, 12 h:12 h light:dark regime, daily feeding on bran and vegetables). Prior to experimentation, they were isolated in glass jars with ample food and water for 48 h. All experiments were performed during daytime. 
All treatments complied with the Principles of Laboratory Animal Care and the German Law on the Protection of Animals. Evaluation of Aggression The aggressiveness of test crickets was evaluated by matching them against equally sized males (<5% weight difference) that were made hyper-aggressive by flying them in a wind stream before the match (as in Stevenson and Rillich, 2015) and which always won the contests. Contests were staged in a Perspex glass arena (16 × 9 × 7 cm) and followed a stereotyped sequence, which we scored 0-6 to denote the level of aggressive escalation (Stevenson et al., 2000): Level 0: mutual avoidance. Level 1: one cricket attacks, the other retreats. Level 2: antennal fencing. Level 3: mandible threat by one cricket. Level 4: mandible threat by both. Level 5: mandible engagement. Level 6: grappling, an all-out fight. Fight duration, from initial contact to retreat of the loser, was recorded with a stopwatch, deducting any pauses that occasionally occurred. Pharmacological Treatments We tested the following drugs (Sigma Aldrich, Deisenhofen, Germany), which were injected into the haemocoel via the pronotal shield using a microsyringe: the 5HT-receptor antagonists ketanserin (+)-tartrate and methiothepin mesylate, which have differing receptor-subtype affinities (Vleugels et al., 2015); the competitive serotonin synthesis inhibitor alpha-methyltryptophan (AMTP); serotonin's precursor 5-hydroxytryptophan (5HTP) and the re-uptake inhibitor fluoxetine hydrochloride; the selective octopamine-receptor blocker epinastine hydrochloride (Roeder et al., 1998); the insect dopamine-receptor blocker fluphenazine dihydrochloride (Degen et al., 2000); the inhibitor of nitric oxide (NO) production Nω-nitro-L-arginine methyl ester hydrochloride (LNAME) and its inactive enantiomer DNAME as control. 
Drugs were dissolved in either insect saline (contents in mM: NaCl 140, KCl 10, CaCl2 7, NaHCO3 8, MgCl2 1, N-trismethyl-2-aminoethanesulfonic acid 5, d-trehalose dihydrate, pH 7.4) or first in dimethylsulfoxide (DMSO) and diluted in Ringer's solution. The drug dosages used here are given in Table 1. That used for AMTP has been shown to be the minimum required to achieve almost complete depletion of serotonin as determined by immunocytochemistry, but above that shown to achieve complete depletion as determined by HPLC (see Stevenson et al., 2000). The dosages for all other drugs were selected as the minimum that induced clear effects on aggression, without obvious detrimental effect on general motility as judged by eye, and were established in previous investigations (Rillich and Stevenson, 2014, 2017; Stevenson and Rillich, 2015), or in pilot experiments for the present study (Figure S1). Procedure To avoid possible temporal variations, we performed separate controls for each drug in parallel at approximately the same times. We ran 3 different protocols. (1) Separate cohorts of test crickets were pretreated with drug and then matched at a first fight against a hyper-aggressive opponent 60 min later (exception: AMTP 24 h later) and then once more against the previous opponent either 15, 30, 60, or 180 min later to evaluate loser recovery (Figure 1A). (2) Separate cohorts of crickets were matched at a first fight against a hyper-aggressive opponent and then again either 6 or 2 times in succession at 1 h intervals (multiple defeat), and then finally once more after 24 h (recovery test; Figures 2A, 3A). The hyper-aggressive opponents were swapped at each match to preclude defeated crickets adapting their behavior toward familiar opponents (Trannoy and Kravitz, 2017). 
(3) Here, untreated crickets were first matched 6 times in succession at 1 h intervals (multiple defeat) and subsequently treated with drug, either 1 or 23 h after the last defeat, and then tested once more 24 h after the last defeat (recovery test; Figure 4A). Data Analysis Our analysis is based on 1991 test crickets. Each was used for only one experiment. Statistical tests were performed using Prism 6 (GraphPad, La Jolla, CA, USA). The median and the interquartile range (IQR) were calculated for non-parametric data sets, and non-parametric tests were also performed on fight duration, since the data failed D'Agostino and Pearson omnibus normality tests, even after log transformations. The Mann-Whitney U-test was used to test for significant differences in the distributions between two unpaired data sets. An alpha value of P < 0.05 was considered significant (*, **, ***: p < 0.05, 0.01, 0.001, respectively). In the multiple defeat experiments we tested whether 6 or 2 previous defeats lead to long-term aggressive depression, and how this is influenced by drugs. For completeness, we also show the data for the 6 previous fights, and since these are repeated comparisons of the same animal groups we applied the Bonferroni correction to alpha for 5 multiple comparisons (*p < 0.01). In one experiment (Figure S2), 3 treatments were compared to control, and we applied the Kruskal-Wallis test with Dunn's multiple comparisons. 5HT, Initial Fights, and Loser Depression In our first experiment, we pre-treated crickets with drug and evaluated their first fights, and then a second fight at different times after defeat against hyper-aggressive opponents (Figure 1A). At the first fight, controls typically escalated to physical interactions that lasted several seconds (e.g., AMTP-control, level: median 5, IQR 3-5; duration: median 6 s, IQR 3-9.75, n = 80). 
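The non-parametric comparison used throughout these results (Mann-Whitney U-tests on two unpaired groups, with a Bonferroni-corrected alpha for the five repeated comparisons, as described in the Data Analysis section above) can be sketched in Python with SciPy; the score values below are hypothetical illustrations, not data from this study.

```python
from scipy.stats import mannwhitneyu

# Hypothetical aggression scores (levels 0-6) for two unpaired groups
control = [5, 4, 5, 3, 5, 6, 4, 5]
treated = [1, 2, 1, 1, 3, 2, 1, 2]

# Two-sided Mann-Whitney U-test comparing the two score distributions
stat, p = mannwhitneyu(control, treated, alternative="two-sided")

# Bonferroni correction: alpha 0.05 divided by 5 repeated comparisons
alpha_corrected = 0.05 / 5  # 0.01
significant = p < alpha_corrected
```

As in the paper, the correction is applied by shrinking alpha for the repeated comparisons rather than by inflating the individual p-values.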
Compared to controls, the level and duration of aggression at the first fight was not significantly different for crickets pretreated with AMTP (Figure 1B, U-test: p-level = 0.72; p-duration = 0.76), the receptor blockers ketanserin (Figure 1C, U-test: p-level = 0.33; p-duration = 0.94) or methiothepin (Figure 1D, U-test: p-level = 0.23; p-duration = 0.18), the precursor 5HTP (Figure 1E, U-test: p-level = 0.66; p-duration = 0.64), or the re-uptake inhibitor fluoxetine (Figure 1F, U-test: p-level = 0.26; p-duration = 0.40). Subsequently, 15 min after defeat, all groups of vehicle-treated losers were non-aggressive and tended to retreat from the hyper-aggressive opponent (e.g., AMTP-controls: median level 1, IQR 1-2, n = 20; see also Stevenson et al., 2005). The aggressiveness of drug-treated crickets 15 min after defeat was not significantly different to their respective controls, except for those that received ketanserin, which were significantly more aggressive at the 15 min trial (U-test: p-level = 0.036; p-duration = 0.019). This fitted the trend that ketanserin and AMTP induced earlier recovery from social defeat, whereas recovery was suppressed by 5HTP and fluoxetine. For example, 30 min after defeat, ketanserin-treated crickets were significantly more aggressive than controls (U-test: p-level = 0.003; p-duration = 0.0011). On the other hand, fluoxetine-treated crickets still showed significantly depressed aggression (median level 1, IQR 1-2, n = 31) compared to controls 180 min after defeat (median level 5, IQR 2-5, n = 31, U-test: p-level < 0.001; p-duration < 0.001). A similar, but less pronounced, trend was also evident for 5HTP-treated crickets (Figure 1E). (Table 1 note: drugs were injected 60 min prior to testing, except AMTP, which was given on days 3, 2, and 1 prior to testing, as in Stevenson et al., 2000; controls received vehicle or DNAME.) Contrasting this, methiothepin-treated crickets at 
the 180 min trial tended to be less aggressive than controls (U-test: p-level = 0.025; p-duration = 0.027), indicating that this 5HT receptor blocker may act to dampen loser recovery. 5HT and Chronic Social Defeat We next analyzed the influence of pre-treatment with serotonergic drugs on the acquisition of longer-term depression of aggression after chronic social defeat (Figure 2). Whereas controls typically regained their aggressiveness within 3 h of a single defeat (Figure 1), or 24 h after two defeats, after 6 defeats they typically retreated at the 24 h trial (e.g., AMTP-controls: median level 1, IQR 1-1, n = 20, Figure 2B). Drug treatment again indicated that 5HT suppresses aggression specifically after social defeat. For example, while controls became progressively less aggressive with each encounter, ketanserin-treated crickets remained aggressive, and showed significantly higher aggression and fight duration at the 4th, 5th, and 6th encounter (Figure 2C, e.g., 6th fight: median level 5, IQR 1.75-5, median duration 6 s, IQR 0.5-9.75, n = 20, U-test: p-level < 0.001; p-duration < 0.001). Furthermore, ketanserin significantly blocked the acquisition of longer-term aggressive depression, so that 24 h after 6 defeats the test crickets were significantly more aggressive than controls (ketanserin: median level 5, IQR 1.25-5, n = 20; control: median level 1, IQR 1-2, n = 19, U-test: p-level = 0.0026; p-duration < 0.001). Compared to this, there was no difference between ketanserin and control 24 h after 2 defeats (U-test: p-level = 0.435; p-duration = 0.625). AMTP had essentially the same effect as ketanserin, though less pronounced (significant differences indicated in Figure 2B). The 5HT receptor blocker methiothepin, however, had no significant effect on the level of aggression or duration at any trial compared to control (Figure 2D). 
For example, 24 h after 6 defeats methiothepin-treated crickets exhibited depressed aggression, and showed no significant difference to controls (U-test: p-level = 0.907; p-duration = 0.791). The precursor of 5HT and its re-uptake inhibitor, in contrast, increased susceptibility to social defeat stress. First, and as also indicated in Figures 1E,F, crickets that received 5HTP or fluoxetine were significantly less aggressive than controls at the second fight, 1 h after the first defeat (e.g., fluoxetine compared to vehicle, U-test: p-level < 0.001; p-duration < 0.001). Secondly, whereas 24 h after 2 defeats the controls recovered from defeat (e.g., fluoxetine control: median level 4, IQR 1-5), those that received 5HTP or fluoxetine showed significantly depressed aggression (e.g., fluoxetine, U-test: p-level < 0.001; p-duration < 0.001). Nitric Oxide and Other Amines In contrast to the 5HT inhibitors AMTP, ketanserin, and methiothepin, and confirming our earlier study, pre-treatment with the NO-synthesis inhibitor LNAME led to a significant increase in aggression at the first fight (e.g., U-test compared to DNAME: p-duration = 0.0035; Figure 3B). Otherwise, LNAME's effect matched that of ketanserin and AMTP, but was even more pronounced. For example, whereas the alleviating effect of ketanserin on loser depression first became evident after 4 successive defeats (Figure 2C), LNAME-treated crickets showed recovery 1 h after the first defeat (median level 5, IQR 3.25-6, U-test compared to DNAME: p-level < 0.001; p-duration < 0.001), and no signs of loser depression with subsequent defeats (Figure 3B). Furthermore, compared to control, and as found for ketanserin, LNAME-treated crickets showed no sign of long-term aggressive depression 24 h after 6 defeats (median level 5, IQR 4-5.75, n = 16, U-test compared to DNAME: p-level < 0.001; p-duration < 0.001). 
Interestingly, the insect dopamine (DA) receptor antagonist fluphenazine had a similar effect to 5HT agonists, in that it increased susceptibility to social defeat stress (Figure 3C). As for all serotonergic drugs tested, fluphenazine had no significant effect on aggression at the first fight (U-test: p-level = 0.985). (Figure 4 legend, continued: (C) As in B, but 24 h later, for crickets that received LNAME (dark blue bars), ketanserin (yellow bars), or methiothepin (pale blue bars) 1 h after multiple defeat. (D) As for C, but 23 h after multiple defeat. Controls received vehicle or DNAME to control for LNAME (gray bars).) Effect of Drugs Applied After Multiple Defeat To check whether ketanserin and LNAME block long-term aggressive depression after multiple defeat (Figures 2, 3) by simply facilitating earlier recovery from each defeat, we tested their effects when given 1 and 23 h after multiple defeat (Figure 4). Before drug, crickets again showed progressively declining aggressiveness with each defeat, so that at the 6th fight they typically retreated (median level 1, IQR 1-1, median duration 0 s, IQR 0-0). When tested 24 h later, controls (DMSO and DNAME) still showed reduced aggression, as did those that received methiothepin (Figures 4C,D). Contrasting this, both ketanserin and LNAME prohibited acquisition of long-term depression, regardless of whether the drugs were given 1 h (Figure 4C) or 23 h (Figure 4D) after multiple defeat (U-tests compared to respective control, ketanserin at 1 h: p-level = 0.0031; p-duration = 0.0016; ketanserin at 23 h: p-level = 0.0037; p-duration = 0.0034; LNAME at 1 h: p-level = 0.0015; p-duration = 0.0013; LNAME at 23 h: p-level = 0.0048; p-duration = 0.0034). DISCUSSION This study provides novel insight into the natural behavioral role of 5HT in insect aggression, with potential parallels to vertebrates. 
We propose that 5HT is released specifically after social defeat to maintain depressed aggressive behavior in losers for a progressively longer period with successive defeats, resulting in long-term behavioral depression, analogous to the chronic-defeat stress syndrome in mammals (Hammels et al., 2015; De Boer et al., 2016; Trainor et al., 2017). To evaluate the full aggressive potential of each individual test cricket, we recorded how they escalate (level of aggression) and persist (fight duration) against standard hyper-aggressive opponents that always won. This is in essence similar to the intruder-resident paradigm in rodents, where small intruders are matched against more aggressive residents. Drugs were applied at relatively high concentrations (Table 1) in order to overcome the brain's sheath (see Stevenson et al., 2005). Despite this, the effective dosage in nervous tissue can be expected to be in the physiological range, since each drug had selective effects on aggression, without adversely affecting general motility, and we were able to discriminate the specific actions of different and even closely related transmitters (e.g., Figures 2, 3; see also Rillich and Stevenson, 2014) and, in some instances, even receptor subtypes (below). None of the tested serotonergics administered before the tournament influenced aggression at the first fight (Table 1; Figures 1, 2). This conflicts with reports that 5HT typically promotes aggression in invertebrates (Kravitz, 2000; Dierick and Greenspan, 2007; Johnson et al., 2009; Alekseyenko et al., 2010; Bubak et al., 2014). We suspect that this discrepancy may at least partly reflect differences in drug concentration and application (acute vs. chronic), which can differentially affect different 5HT receptor subtypes, as shown for 5HTP (Pranzatelli, 1988). 
In our hands, a single dose of 5HTP (20 µl/5 mM) failed to influence a cricket's initial fighting behavior (Figure S1), whereas the 100-fold dose (100 µl/100 mM) increased some elements of aggression (fight duration), but reduced others (threat behavior, attack frequency), without affecting win chances (Dyakonova and Krushinsky, 2013). On the other hand, chronic treatment by feeding on 5HTP for 4 days led only to increased aggression in fruit flies (20 mM, see Dierick and Greenspan, 2007) and stalk-eyed flies (3% ≈ 135 mM, see Bubak et al., 2014). However, feeding Drosophila for 3-4 days with comparatively low drug concentrations (3 mM) confirmed that 5HT can promote aggression in socially naive Drosophila, and it was revealed that 5HT elevates aggression specifically via a 5HT1A-like receptor (Johnson et al., 2009). Furthermore, acute genetic activation of a subset of 5HT neurons in Drosophila heightens aggression, and one pair seems necessary for aggressive escalation, which seems to act via 5HT1A receptors to inhibit aggression-suppressing follower neurons (Alekseyenko et al., 2010, 2014). In view of this, 5HT might, under circumstances that remain to be revealed, also promote aggression in socially naive crickets. However, as outlined below, our data suggest that its main action is to dampen the normal recovery of aggression after social defeat. Acute treatment with fluoxetine prohibited the normal recovery of aggression after a single defeat (Figure 1F), and increased susceptibility to chronic social defeat, in that only 2 defeats sufficed to induce longer-term aggressive depression (Figure 2F). The effects of acute fluoxetine treatment are generally thought to result from blocking the transporter for removing 5HT after release in mammals (Morrison and Melloni, 2014) and also insects, though less effectively (Corey et al., 1994). 
Fluoxetine can also, however, increase catecholamine levels, particularly after chronic administration (Bymaster et al., 2002), and then have antidepressant effects, including reduced defeat-induced pathophysiology in rodents (Bauer, 2015; Hammels et al., 2015). Even so, since 5HTP had the same effect as fluoxetine on cricket aggression (Figures 1E, 2E), and the latter's action was blocked by the 5HT receptor antagonist ketanserin (Figure S2), fluoxetine probably also increases endogenous 5HT in crickets. Supporting this, aggressive depression in losers was reduced after AMTP (Figure 1B), and even more effectively by the 5HT receptor blocker ketanserin (Figure 1C). These inhibitors also prevented longer-term aggressive depression after chronic defeat (Figures 2B,C). This is not simply due to the drugs, which were applied before the tournament, preventing loser depression from occurring in the first place, since ketanserin also prohibited long-term depression even when given 23 h after multiple defeat (Figure 4D). This indicates that 5HT levels are elevated for at least a day after experiencing multiple defeat. Contrasting ketanserin, methiothepin tended to prolong loser depression (Figure 1D), which we think is due to effects on a different 5HT receptor subtype. Insect and vertebrate 5HT receptors are phylogenetically and functionally related, but have different pharmacological profiles (Vleugels et al., 2015). Crickets express two 5HT1, two 5HT2, and one 5HT7 receptor (Watanabe et al., 2011; Watanabe and Aonuma, 2012), but these are not yet fully pharmacologically characterized. Ketanserin is regarded as selective for insect 5HT2 receptors (Johnson et al., 2009), whereas methiothepin is less selective (Vleugels et al., 2015), but is considered to block all subtypes in combination with ketanserin. In honeybees, for example, methiothepin blocks all 5HT receptors except 5HT2B, which is selectively blocked by ketanserin (Thamm et al., 2013; Tedjakumala et al., 2014). 
We therefore propose that 5HT decreases aggression after defeat in crickets via a 5HT2-type receptor. Notably, and in agreement with our observations on crickets, ketanserin had no effect on aggression in socially naive fruit flies (Johnson et al., 2009). Since the aggression-depressing effect of 5HT was only evident in losers, it seems to depend on the animal's subordinate social status. In crayfish, changes in social status entail changes in neuronal circuits and possibly 5HT receptors (Issa et al., 2012), but this need not be the case in crickets. An alternative, or at least complementary, possibility is that the action of 5HT in losers depends on prior activation of the neurotransmitter pathway that controls the initial decision to retreat. In crickets, the decision to flee and subsequent loser depression is initiated by nitric oxide (NO) and does not require 5HT (Rillich and Stevenson, 2017). Here we showed that blocking NO synthesis with LNAME either before (Figure 3B) or after (Figure 4) multiple defeats prohibited long-term aggressive depression even more effectively than ketanserin. Notably, and in contrast to blocking 5HT, blocking NO increases aggression at the first fight (Figure 3B). Since 5HT's dampening effect on cricket aggression requires prior social defeat, it depends on NO, but it needs to be tested whether or not NO acts directly on serotonergic neurons. In mammals, disruption of NO production also leads to substantially increased aggression, possibly by interacting with 5HT, but the relationship is unclear (Bedrosian and Nelson, 2014). To our knowledge it is not known if NO influences defeat-induced depression in mammals. We have previously shown that both OA and DA restore aggression in losers after a single defeat, whereby DA, but not OA, is actually necessary for natural recovery (Rillich and Stevenson, 2014). 
Here we investigated how the DA receptor antagonist fluphenazine influenced aggression after multiple defeats and found that it had the same effect as elevating 5HT. Thus, as for 5HTP and fluoxetine, fluphenazine had no effect on aggression at the initial fight, but increased the susceptibility to chronic defeat in that only 2 successive defeats established long-term aggressive depression (Figure 3C). In contrast, blocking receptors for OA, which is recognized as an insect stress hormone (Adamo and Baker, 2011) and has promoting effects on cricket aggression (reviews: Stevenson and Rillich, 2016, 2017), had no effect on post-defeat aggression (Figure 3D). We suspect, therefore, that 5HT's dampening effect on loser aggression may result from inhibition of DA, but this needs to be specifically tested. In mammals, 5HT suppresses aggression mainly by inhibiting neurons that release and/or respond to arginine-vasopressin (Morrison and Melloni, 2014). However, DA may also be involved. Social defeat increases activity of DA neurons in the mesolimbic system (Laman-Maharg and Trainor, 2017), where DA plays a central role in reward (O'Connell and Hofmann, 2011) and in mediating anhedonia due to social defeat (Hammels et al., 2015). In summary, our data call for a re-evaluation of the role of 5HT in invertebrate aggression. In contrast to most invertebrate studies, we found no evidence that 5HT acts to increase aggression in socially naïve crickets. However, in view of our finding that methiothepin tended to prolong depressed aggression in losers (Figure 1D), we still think that 5HT may, under behavioral circumstances that remain to be discovered, have a natural aggression-promoting effect, as suggested by the work of Dyakonova and Krushinsky (2013), for example via 5HT1-like receptors as in Drosophila (Johnson et al., 2009; Alekseyenko et al., 2014), but this needs to be tested in crickets with more selective drugs. 
Nonetheless, our experiments, particularly those with fluoxetine and ketanserin, indicate that 5HT is released after social defeat and acts via a 5HT2-type receptor to maintain the state of depressed aggressiveness characteristic of subordinates. This contrasts with the earlier suggestion that "a decrease in serotonergic activity may be functionally important for the control of loser behavior" (Dyakonova and Krushinsky, 2013), but complies with the finding that the brain content of 5HT is reduced after defeat (Murakami and Itoh, 2001), when this is considered as a consequence, rather than a cause, of defeat. Taken together, the control of post-defeat aggression is surprisingly similar to that in mammals (Bauer, 2015; Clinard et al., 2015) and possibly also crustaceans (Bacque-Cazenave et al., 2017). Contrary to current understanding in mammals (Bedrosian and Nelson, 2014), however, it seems that in crickets NO release is a pre-requisite for 5HT's inhibitory action on the recovery from social defeat stress, which in turn may result from inhibition of DA. This, however, remains to be experimentally verified. ETHICS STATEMENT The animals used in this study (invertebrates, insects, crickets: Gryllus bimaculatus) are exempt from any need to obtain special permissions; all animals were from a breeding stock, and none were removed from their natural environment. All treatments complied with the Principles of Laboratory Animal Care and the German Law on the Protection of Animals. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS JR and PS conceived and designed the experiments. JR performed the experiments. JR and PS analyzed the data. PS contributed reagents, materials, and analysis tools. PS and JR wrote the paper. FUNDING Support by the German Research Council (DFG) is greatly appreciated (grants: STE 714/5-1; RI 2728/2-1). 
ACKNOWLEDGMENTS We thank Dr. Stefan Schöneich and the referees for constructive comments on our manuscript. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnbeh.2018.00233/full#supplementary-material Figure S1 | Dose-dependent effects of fluoxetine (A) and 5HTP (B). Plots of the level of aggression exhibited by crickets at their first fight against hyper-aggressive opponents and 3 h after defeat (symbols: median; bars: interquartile range; n > 16 for each). Figure S2 | Ketanserin blocks the effect of fluoxetine. (A) Procedure: test crickets received vehicle or drug 1 h before their first fight against hyper-aggressive opponents, which they lost, and once more 1 h after defeat. (B) Level of aggression. (C) Fight duration. Significant differences are given as p-values from Kruskal-Wallis tests, and differences between groups from Dunn's multiple comparisons are indicated by asterisks: **p < 0.01, n.s. not significant. Note that fluoxetine no longer prohibits post-defeat recovery (red bars) when given together with the 5HT receptor blocker ketanserin (red-hatched bars).
Prediction of TERTp-mutation status in IDH-wildtype high-grade gliomas using pre-treatment dynamic [18F]FET PET radiomics Purpose To evaluate radiomic features extracted from standard static images (20–40 min p.i.), early summation images (5–15 min p.i.), and dynamic [18F]FET PET images for the prediction of TERTp-mutation status in patients with IDH-wildtype high-grade glioma. Methods A total of 159 patients (median age 60.2 years, range 19–82 years) with newly diagnosed IDH-wildtype diffuse astrocytic glioma (WHO grade III or IV) and dynamic [18F]FET PET prior to surgical intervention were enrolled and randomly divided into a training (n = 112) and a testing cohort (n = 47). First-order, shape, and texture radiomic features were extracted from standard static (20–40 min summation images; TBR20–40), early static (5–15 min summation images; TBR5–15), and dynamic (time-to-peak; TTP) images, respectively. Recursive feature elimination was used for feature selection by 10-fold cross-validation in the training cohort after normalization, and logistic regression models were generated using the radiomic features extracted from each image type to differentiate TERTp-mutation status. The area under the ROC curve (AUC), accuracy, sensitivity, specificity, and positive and negative predictive values were calculated to illustrate diagnostic power in both the training and testing cohorts. Results The TTP model comprised nine selected features and achieved the highest predictability of TERTp-mutation status, with an AUC of 0.82 (95% confidence interval 0.71–0.92) and a sensitivity of 92.1% in the independent testing cohort. Weak predictive capability was obtained in the TBR5–15 model, with an AUC of 0.61 (95% CI 0.42–0.80) in the testing cohort, while no predictive power was observed in the TBR20–40 model. 
Conclusions Radiomics based on TTP images extracted from dynamic [18F]FET PET can predict the TERTp-mutation status of IDH-wildtype diffuse astrocytic high-grade gliomas with high accuracy preoperatively. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05526-6. Introduction Mutations in the telomerase reverse transcriptase promoter (TERTp), leading to telomerase activation and lengthened telomeres, play an important role in the formation of brain cancer and in individual prognosis [1][2][3]. In diffuse astrocytic high-grade gliomas without mutation of the isocitrate dehydrogenase gene (IDH-wildtype), TERTp mutations are reported to be associated with poor overall survival [4][5][6]. Molecular genetic analysis of the TERTp-mutation status has therefore gained increasing attention in the clinical routine diagnosis of IDH-wildtype diffuse astrocytic gliomas and will be included in the upcoming glioma WHO classification [7][8][9]. Molecular imaging using positron emission tomography (PET) with radiolabelled amino acids such as O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) is a useful tool for the characterization and evaluation of primary brain neoplasms [10][11][12], and its application in the clinical management of brain tumour patients has been recommended by the Response Assessment in Neuro-Oncology (RANO) Working Group [13][14][15][16][17]. While static image data (standard 20-40 min summation images) are particularly used for the delineation of the tumour extent, the assessment of dynamic [18F]FET PET data has been shown to provide additional information about tumour biology [18]. More aggressive gliomas (i.e. high-grade gliomas and/or IDH-wildtype gliomas) were shown to be characterized by a high tracer uptake within the first 5-15 min post injection (p.i.) with a subsequent curve decrease, while less aggressive gliomas (i.e. 
low-grade gliomas and/or IDH-mutant gliomas) typically show a slowly increasing [18F]FET uptake with the highest values in the later time frames [12,19,20]. As the early peak uptake in aggressive gliomas is missed in the standard 20–40 min p.i. summation images, it is not surprising that the maximal tumour-to-background ratio (TBRmax) obtained in early summation images (5–15 min p.i.) was reported to perform better than the standard static TBRmax values (20–40 min p.i.) for the differentiation between low-grade and high-grade gliomas [17], which led to the suggestion to include these early summation images for a better glioma characterization. Another interesting parameter derived from dynamic [18F]FET PET is the minimal time-to-peak (TTPmin), which is extracted from the time-activity curves (TACs) and was reported to provide prognostic information [21]. Interestingly, an early TTPmin was associated with an aggressive disease course in newly diagnosed gliomas and was able to predict an IDH-wildtype status [22,23]. Yet, in our recently published study investigating [18F]FET uptake characteristics in TERTp-mutant and TERTp-wildtype glioblastomas, neither the standard TBRmax as static parameter nor TTPmin as dynamic parameter was associated with the TERTp-mutation status [24]. In recent years, radiomics has been increasingly investigated as a promising non-invasive tool for accurate diagnosis and prognosis assessment by converting medical images into high-dimensional quantitative image features and establishing predictive models [25][26][27][28][29][30][31][32]. However, radiomics has not been applied for the detection of TERTp mutations on [18F]FET PET images so far. Therefore, the aim of this study was to evaluate radiomic features extracted from standard static images (20–40 min p.i.), early summation images (5–15 min p.i.)
as well as dynamic [18F]FET PET images for the prediction of the TERTp-mutation status in patients with newly diagnosed IDH-wildtype diffuse astrocytic high-grade glioma. [18F]FET-negative gliomas (tumour-to-background ratio, TBR < 1.6) were excluded. All patients had given written informed consent prior to the PET scan as part of the clinical routine. The retrospective analysis of PET imaging data was approved by the institutional ethics committee (604-16). A total of 61% of the investigated patients (97/159) had been evaluated in a previous study [24]. Histopathology and molecular genetic analysis Histopathology and molecular genetic analyses were performed at the Institute of Neuropathology, LMU Munich, Germany. All patients initially classified according to the 2007 WHO brain tumour classification [34] were reclassified according to the 2016 WHO classification [33]. The IDH-mutation status and TERTp-mutation status were evaluated according to clinical standard protocols [35,36]. [18F]FET PET scans were performed at the Department of Nuclear Medicine, LMU Munich, Germany. Images were acquired on an ECAT EXACT HR+ PET scanner (Siemens Healthineers, Inc., Erlangen, Germany) using the standard protocol [11,37]. Exactly 180 MBq of [18F]FET were injected after a 15-min transmission scan with a 68Ge rotating rod source. Dynamic emission recording was performed in 3-D mode from tracer injection up to 40 min post injection, consisting of 16 frames (7 × 10 s, 3 × 30 s, 1 × 2 min, 3 × 5 min, and 2 × 10 min) with a reconstructed voxel size of 2.03 × 2.03 × 2.43 mm3 and a matrix size of 128 × 128 × 63. Images were reconstructed with a two-dimensional filtered back-projection algorithm using a 4.9-mm Hann filter and corrected for attenuation, decay, dead time, and random and scattered coincidences.
Dynamic PET data were checked frame-wise for motion; when relevant motion was visible, a frame-wise correction was performed using the PMOD fusion tool (version 3.5, PMOD Technologies, Zurich, Switzerland). Segmentation of tumour volumes and brain background First, the background activity was extracted from a large crescent-shaped volume of interest (VOI) in the contralateral healthy hemisphere as published previously [38]. For tumour segmentation, a VOI was drawn using a TBR threshold of 1.6 in the static 20–40 min p.i. summation images as suggested by Pauleit et al. [39]. All segmentations were processed within the PMOD View tool (version 3.5, PMOD Technologies, Zurich, Switzerland). Feature selection Before feature extraction, a stratified random split was used to assign 70% of the patients to the training cohort (n = 112) and the remaining 30% to the testing cohort (n = 47), with a balanced distribution of TERTp-wildtype and TERTp-mutation. Features were standardized as follows: for each feature, we calculated the mean value and the standard deviation; the mean value was subtracted from each individual value, which was then divided by the standard deviation. Feature normalization was computed only on the training cohort and then applied to the testing cohort. Since the number of features was large, we compared the similarity of each feature pair: if the Pearson correlation coefficient (PCC) of a feature pair was larger than 0.99, we removed one of the two features. After this process, the number of features was reduced and the remaining features were independent of each other. Recursive feature elimination (RFE) based on a logistic regression classifier was performed to reduce redundant features and select potential TERTp-mutation-related features [45].
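The selection pipeline described above (standardization on the training cohort only, removal of one feature of every pair with PCC > 0.99, and logistic-regression-based RFE with 10-fold cross-validation) can be sketched with scikit-learn. This is a minimal illustration on random stand-in features, not the study data; the thresholds and fold counts follow the text.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy stand-ins for the radiomic features of the 112 training patients
# (the real features come from the PET images; these are random).
X_train = rng.normal(size=(112, 20))
X_train[:, 1] = X_train[:, 0]           # a near-duplicate feature pair
y_train = rng.integers(0, 2, size=112)  # 0 = TERTp-wildtype, 1 = TERTp-mutant

# 1) Standardize using training-cohort statistics only.
scaler = StandardScaler().fit(X_train)
X_std = scaler.transform(X_train)

# 2) Drop one feature of every pair with |Pearson r| > 0.99.
corr = np.corrcoef(X_std, rowvar=False)
keep = []
for j in range(corr.shape[0]):
    if all(abs(corr[j, k]) <= 0.99 for k in keep):
        keep.append(j)
X_red = X_std[:, keep]

# 3) RFE with stratified 10-fold CV, scored by AUC, wrapped around a
#    class-weighted ("balanced") logistic regression as in the paper.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
selector = RFECV(clf, step=1,
                 cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                 scoring="roc_auc").fit(X_red, y_train)
print(selector.n_features_)  # optimal number of features found by CV
```

On real data the same `scaler` and the selected feature indices would then be applied unchanged to the testing cohort.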
Considering the imbalance of the comparison groups, we performed weighted logistic regression in the 'balanced' mode, which gives a higher weight to the minority class and a lower weight to the majority class and therefore automatically adjusts weights inversely proportional to the class frequencies in the input data [46]. Each iteration removes the feature that is considered least important. After stratified-split-based 10-fold cross-validation, the area under the receiver operating characteristic curve (AUC) of the model in the training cohort was used to determine the optimal number of features. Model construction and testing Logistic regression (LR) models were built to predict the TERTp-mutation status by fitting the selected radiomic features. Each model was generated using only the radiomic features extracted from one image type (i.e. TBR5–15, TBR20–40, and TTP images) separately. Using the coefficients of the selected features generated by the LR models [47], the risk probability of TERTp-mutation was calculated by the following formula:

P(y = 1 | x; β) = 1 / (1 + e^{−(β₀ + βᵀx)})

where x is the vector of selected feature values, β contains the coefficients of the selected features, and β₀ represents the intercept. In case of P > 0.5, the TERTp-mutation status was considered positive by the LR model. Model testing was applied to the independent testing cohort, which was not involved in the process of model training. The workflow of the process is presented in Fig. 1. Statistical analysis To evaluate model performance, receiver operating characteristic (ROC) curve analysis was performed in the training and testing cohort. The AUC was calculated as a quantitative measure of diagnostic power. The accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. 95% confidence intervals (CI) were calculated using a non-parametric bootstrap method, which was repeated 1000 times to obtain a bootstrap distribution of the results. Categorical and continuous variables were reported as numbers and percentages or as mean and standard deviation, respectively. Categorical variables were compared by the χ2 test, and continuous variables were compared by the Mann–Whitney U test. P < 0.05 was considered statistically significant. Statistical analyses were programmed in Python (v. 3.8.5; https://www.python.org/). Patient characteristics A total of 159 patients (median age, 60.2 years; range, 19–82 years) were enrolled in this study. Exactly 31 patients (19.50%) were diagnosed with TERTp-wildtype, and 128 patients had a TERTp mutation. The clinical characteristics are presented in Table 1. There were no significant differences between the training and testing cohorts with regard to age, sex, WHO grade, and TERTp-mutation status, with TERTp-wildtype rates of 19.64% and 19.15%, respectively. Radiomic feature extraction and selection In this study, 107 candidate radiomic features were generated from each image type. Fig. 2 The feature selection process of the RFE method. Each iteration removes a feature that is considered least important and corresponds to a 10-fold cross-validation. After 10-fold cross-validation, the AUC of the model in the training cohort was used to determine the optimal number of features. a TBR5–15 model, b TBR20–40 model, and c TTP model; 9, 14, and 10 features were selected, respectively. RFE recursive feature elimination, AUC area under the receiver operating characteristic curve Diagnostic validation of the TBR20–40 model, TBR5–15 model, and TTP model According to the above-mentioned formula, the risk probabilities of TERTp-mutation were calculated. The coefficients of the selected features in the TBR20–40 model and TBR5–15 model are shown in Table S1. The coefficients of the selected features in the TTP model are shown in Table 2. Detailed information about the performance of each model is shown in Table 3.
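The non-parametric bootstrap used above for the 95% confidence intervals (1000 resamples, percentile method) can be sketched as follows. The labels and risk scores are toy stand-ins, and scikit-learn's `roc_auc_score` is assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC, as in the paper's 1000-resample scheme."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # a resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), lo, hi

# Toy example: 47 "test patients" with made-up risk probabilities that
# are only partially separated between the two classes.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=47)
p = np.clip(0.3 * y + 0.7 * rng.random(47), 0, 1)
auc, lo, hi = bootstrap_auc_ci(y, p)
print(round(auc, 2), round(lo, 2), round(hi, 2))
```

The same resampling loop applies to accuracy, sensitivity, specificity, PPV, and NPV by swapping the metric function.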
Discussion Our study showed that radiomics based on dynamic [18F]FET PET data can reliably predict the TERTp-mutation status of IDH-wildtype diffuse astrocytic high-grade gliomas. The best predictability was reached using the TTP model derived from dynamic PET, and weak predictive capability was obtained with radiomics based on early summation images (5–15 min p.i.), while no reliable information about the TERTp-mutation status could be obtained from the standard summation images (20–40 min p.i.). Previous studies have shown that patients with IDH-wildtype TERTp-mutant glioblastoma have a significantly shorter progression-free and overall survival compared to those with TERTp-wildtype status. Therefore, the TERTp-mutation status is now considered an important diagnostic and prognostic factor in primary glioblastomas and especially in patients with IDH-wildtype glioma [3,5,8,9,48]. TERTp-mutations indicate tumours that require aggressive and immediate treatment [3]. Hence, a preoperative tool for the prediction of a TERTp-mutation would be useful for early decision making and the clinical management of patients with suspected glioma. Several studies have analyzed the value of MRI-based radiomics to predict the TERTp-mutation status in brain tumour patients [49][50][51]. Although these studies reported accuracy values in the range of 79.88–93.80%, only WHO grade II and/or III gliomas were considered and a limited number of patients was investigated [49][50][51]. Besides, Tian et al. established a multiparametric MRI-based radiomics model for the prediction of the TERTp-mutation status in patients with high-grade glioma [52], but ignored that TERTp-mutations play different roles in different IDH phenotypes [48].
Compared with conventional MRI, amino acid PET has been shown to be more sensitive in defining brain tumour extent [39], and dynamic [18F]FET uptake parameters extracted from the TAC have been shown to be an independent biomarker for prognosis [53,54]. Several studies have reported the informative value of [18F]FET PET-based radiomics for personalized clinical decisions and individualized treatment selection [27][28][29]55]. Lohmann et al. found that textural feature analysis in combination with TBRs differentiates brain metastasis recurrence from radiation injury better than TBRs alone, and [18F]FET PET radiomics achieved a higher accuracy than the best standard FET PET parameter (TBRmax) for diagnosing patients with pseudoprogression [27,55]. Haubold et al. utilized multiparametric [18F]FET PET/MRI and MR fingerprinting to decode and phenotype cerebral gliomas, which may serve as an alternative to invasive tissue characterization [28]. In addition, Carles et al. evaluated the prognostic value of [18F]FET PET radiomics after re-irradiation and found it could contribute to the selection of recurrent glioblastoma patients benefiting from re-irradiation [29]. However, all of these studies included radiomics based on standard static images (20–40 min p.i.) only and did not extract radiomic features derived from dynamic [18F]FET PET or early summation images (5–15 min p.i.), even though two studies have shown the impact of dynamic parameters on radiomics [32,56]. Furthermore, no study has evaluated the potential to predict the TERTp-mutation status by [18F]FET PET radiomics so far. This study included standard static images (20–40 min p.i.), early summation images (5–15 min p.i.), and dynamic [18F]FET PET images to develop the radiomic models. A total of 107 features were extracted from each image [24].
Interestingly, radiomics based on the standard TBR20–40 model showed a low performance for the prediction of the TERTp-mutation status, and even the TBR5–15 model, generated from nine early-summation [18F]FET PET features, had an accuracy of only 66% and an AUC of 0.61 in the testing cohort. With a high prediction accuracy of 83% in the TTP model, our study demonstrates that radiomic features extracted from dynamic PET data can achieve a higher performance level than models based on static PET data. Remarkably, the sensitivity of the TTP model reached 92.1% in the testing cohort, so that patients with aggressive TERTp-mutant glioma can be identified non-invasively with high probability [3]. With the generated multivariate LR-based formula, health practitioners will be able to calculate a patient's individual risk probability of bearing a TERTp-mutation before neurosurgical intervention. Our study shows that even sophisticated radiomic analysis of static [18F]FET PET imaging cannot replace dynamic acquisitions, at least with regard to the prediction of the TERTp-mutation status. Traditional dynamic [18F]FET PET parameters such as the classification of the time-activity curve (increasing vs. decreasing, or increasing vs. plateau vs. decreasing), the slope, or the TTPmin were most frequently calculated from a mean VOI-TAC of the tumour or from the hot spot of the tumour with a 90% isocontour [10,12,19]. Considering the heterogeneity of gliomas, the hot spot in standard summation images may not correspond to the most aggressive part of the tumour when only TTPmin and the TAC are considered, so that the most aggressive areas are inadvertently not evaluated. In contrast, we extracted the dynamic [18F]FET uptake information in every voxel within the tumour VOI and generated TTP images. This approach, which was first introduced by Kaiser et al.
[40,42], ensures that the dynamic information, including the heterogeneity of the uptake kinetics, is extracted and that radiomics can be performed on the prognostically valuable dynamic data. The correlation between tumour heterogeneity and TERTp-mutation status is captured by the GreyLevelNonUniformityNormalized (GLNN) feature, which was used in the TTP model (see Table 2). GLNN belongs to the Gray Level Dependence Matrix (GLDM) feature class, is mathematically equal to first-order uniformity, and is a measure of the homogeneity of the image array. A low value implies greater heterogeneity, which was correlated with TERTp-mutation, indicating that tumours with more heterogeneous TTP images are more likely to be classified as TERTp-mutant glioma. Several limitations of this study should be discussed. First, the number of investigated patients is relatively small. However, it needs to be considered that we analyzed a very homogeneous group of patients with newly diagnosed and untreated IDH-wildtype diffuse astrocytic high-grade glioma. To exclude any influence of the scanner type, all images in this study were derived from the same PET scanner, which limited the number of patients as well. In order to increase the number of patients, multi-centre validation studies are needed, which, however, require phantom studies and harmonization of reconstruction parameters to make images from different PET scanners comparable. Another approach to directly harmonize features extracted from different devices may be to use the ComBat method [57].
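The voxel-wise TTP image described above (the acquisition time at which each voxel's time-activity curve peaks) can be sketched in a few lines. The frame mid-times below are derived from the 16-frame protocol quoted in the Methods; the dynamic volume itself is a two-voxel toy example, not patient data.

```python
import numpy as np

# Frame durations (minutes) of the 16-frame protocol:
# 7 x 10 s, 3 x 30 s, 1 x 2 min, 3 x 5 min, 2 x 10 min.
durs = np.array([10 / 60] * 7 + [0.5] * 3 + [2] + [5] * 3 + [10] * 2)
starts = np.concatenate(([0.0], np.cumsum(durs)[:-1]))
mid_times = starts + durs / 2  # assumed representative time of each frame

def ttp_image(dynamic, times=mid_times):
    """Voxel-wise time-to-peak: for each voxel, the frame time at which
    its time-activity curve reaches its maximum. dynamic: (frames, z, y, x)."""
    peak_frame = np.argmax(dynamic, axis=0)
    return times[peak_frame]

# Toy dynamic volume: one "aggressive" voxel peaking early,
# one voxel with steadily increasing uptake (late peak).
dyn = np.zeros((16, 1, 1, 2))
dyn[:, 0, 0, 0] = np.exp(-(mid_times - 7.0) ** 2)  # early peak near 7 min
dyn[:, 0, 0, 1] = mid_times                        # slowly increasing uptake
ttp = ttp_image(dyn)
print(ttp[0, 0, 0] < ttp[0, 0, 1])  # early-peaking voxel gets the smaller TTP
```

Radiomic features (first-order, shape, texture) are then computed on this TTP map within the tumour VOI instead of on a static uptake image.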
In addition, our results are difficult to extrapolate to other centres, as the PET images analyzed in this study were acquired with our older PET scanner with fixed time frames, resulting in relatively long time frames (predominantly 5 and 10 min) in the dynamic analysis, which could not be changed retrospectively, and were reconstructed using filtered back-projection, while most PET centres now use other reconstruction methods such as ordered subset expectation maximization (OSEM). Furthermore, radiomic features were only extracted from the [18F]FET-positive tumour VOI to construct the model. Besides the tumour VOI, the remaining image (with normal-appearing tissue) may still contain invisible but useful information. To analyze the entire images, deep learning methods will be necessary. Furthermore, our study focused on PET-based radiomics only. A combination with MRI may improve the performance of the prediction model and should be evaluated in future studies. Conclusion While conventional [18F]FET PET parameters assessed by standard analyses have previously shown no association with the TERTp-mutation status, radiomic models can predict the TERTp-mutation status of IDH-wildtype diffuse astrocytic high-grade gliomas with high accuracy preoperatively. Notably, this is only the case for radiomics based on dynamic image data (TTP model) and not for standard summation images (20–40 min). Further external validation in multi-centre studies with a larger number of patients is needed to evaluate the potential for clinical applications.
Deep‐Learning Assisted Polarization Holograms Multiplexing holography with metasurfaces using different degrees of freedom of light has enabled recent applications in display and information processing. In terms of polarization‐multiplexed holograms, the most general form is an arbitrary Jones matrix profile in storing the maximum amount of information. It requires a relaxation to bianisotropic metasurfaces from a conventional single‐layer implementation of nanostructures, but it will also complicate both the inverse design of the nanostructures and the hologram generation algorithm. Here, an integrated neural network approach, being extended from the recent DeepCGH algorithm, is developed to obtain metasurface structural profiles directly from independent holograms from an arbitrary set of polarizations to another, with maximally four different co‐ and cross‐polarization conversion channels. Such an information‐driven approach enables designing complex polarization holograms directly from an existing metamaterial library without detailed physical knowledge on the constraints, and can be extended to other multiplexing holograms to further facilitate an efficient usage of the information stored on a metasurface. Introduction Recently, computer generated holograms (CGH) have found practical applications with metasurfaces due to their superior capabilities in storing huge amount of information.By shining laser light on these metasurfaces, the stored information can then be revealed as optical holograms with designed amplitude, phase, or both amplitude and phase profiles. 
[1,2] Furthermore, there has been a series of developments in polarization-multiplexed metasurfaces, which is expected to add cross-polarization flexibility and further increase the information capacity. Nevertheless, to design these polarization holograms of multiple channels, CGH techniques can be adopted by modifying the Gerchberg–Saxton (GS) algorithm [23][24][25][26][27][28][29]. However, such an approach is not easily scalable to more complex geometries (to break t_xy = t_yx) or to a wider variety of nanostructures. To overcome this issue and to further enhance our design capability for polarization holograms, it is beneficial to adopt a machine learning approach, which can be generic enough for more complex information stored on metasurfaces or a larger capability in multiplexing holograms. A machine-learning approach, focusing on the information rather than the physics perspective, does not need a case-by-case extension of the GS algorithm. The automation of inversely designing structures [35][36][37][38][39][40] from required phases has recently been adopted in wavelength-multiplexed metasurface holograms with a hybrid neural-network and evolution-strategy optimization approach [41], and in polarization-multiplexed metasurfaces with an end-to-end framework to facilitate full exploitation of the prescribed design space and push the multifunctional design capacity to its physical limit [42]. On the other hand, the GS algorithm for hologram generation can also be replaced by an unsupervised neural network, called the DeepCGH algorithm [43]. The deep-learning-based algorithm gives a higher accuracy and faster hologram generation than the GS algorithm.
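For reference, the classic GS loop that the deep-learning approach replaces can be sketched in a few lines: a minimal scalar, phase-only version on a toy far-field target (NumPy only; not the modified multi-channel variants cited above).

```python
import numpy as np

def gerchberg_saxton(target, n_iter=100, seed=0):
    """Phase-only CGH via GS: iterate between the hologram plane
    (unit amplitude, free phase) and the far field (target amplitude)."""
    rng = np.random.default_rng(seed)
    target = target / np.linalg.norm(target)
    phase = rng.uniform(0, 2 * np.pi, target.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))       # propagate to far field
        far = target * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)                    # back to hologram plane
        phase = np.angle(near)                      # keep only the phase
    return phase

# Toy 64x64 target: a bright square on a dark background
tgt = np.zeros((64, 64))
tgt[20:30, 20:30] = 1.0
ph = gerchberg_saxton(tgt)

# Check the reconstruction against the target with a correlation score
recon = np.abs(np.fft.fft2(np.exp(1j * ph)))
corr = np.corrcoef(recon.ravel(), tgt.ravel())[0, 1]
print(round(corr, 2))
```

Extending this loop to four coupled polarization channels with realistic nanostructure constraints is exactly the case-by-case work that the learning-based approach below avoids.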
In this work, we develop an integrated deep neural network to implement both hologram generation and inverse design of the metamaterial nanostructures at the same time, as a generic solution for metasurface holograms. Taking bianisotropic metasurfaces as an example for achieving the general form of polarization holograms, the integrated network is able to design the metasurface profile to obtain independent phase holograms for the four combinations of polarization conversion channels. By integrating the hologram generation and inverse design components into one network, our proposed method does not need case-by-case optimization for each target configuration and can generate metasurface designs more automatically and efficiently. As the constraints on the Jones matrix elements are now simply hidden in the accessible geometries of the nanostructures as a library, the approach can be extended to other scenarios for metasurfaces with more DOFs, such as orbital angular momenta and diffraction orders. It can also be applied to situations where the incident or output polarizations are not orthogonal but arbitrarily specified. Jones Matrix Library for a Family of Bianisotropic Metamaterials To have a higher chance of generating very different Jones matrix elements, we start from a family of bianisotropic metasurfaces. We note that bianisotropic metasurfaces have previously been shown to give asymmetric LP conversion for forward and backward incidence, or equivalently t_xy ≠ t_yx for the same side of incidence.
[44,45,48-50] Here, the bianisotropic structures we use as a template are made of silicon fins (green, with permittivity 12) within a glass matrix (cyan, with permittivity 2.25) in a square lattice (with periodicity a = 1000 nm), as shown in the upper panel of Figure 1a. The silicon fin has a three-layer structure, with the top and bottom layers (both with height h = 250 nm) being L-shaped and a square pillar (of tunable width w and thickness t = 125 nm) connecting the two layers. The two L-shaped bars break the mirror symmetry in the z-direction, generating bianisotropy, while the pillar in the middle enhances the polarization cross-coupling between the two layers. The lack of the center pillar would cause lower transmission amplitudes in the cross-polarization channels and a smaller covering range of phase in the co-polarization channels (please see Section S3 and Figure S3, Supporting Information for more details). The size of the silicon structure in the y-direction is l = 637.5 nm. The vertical pillar is shifted by Δx and Δy relative to the two vertical middle planes of the unit cell (dashed line frames). This structure is defined as "right-handed." The "left-handed" structure flips its handedness via a mirror operation about the plane x = y, as shown in the lower panel of Figure 1a.
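The effect of this mirror operation on the Jones matrix can be checked numerically: reflecting about the plane x = y swaps the x- and y-field components, so it conjugates the Jones matrix by the swap matrix. This is a minimal sketch with an arbitrary made-up bianisotropic matrix (t_xy ≠ t_yx), assuming an ideal mirror.

```python
import numpy as np

# Mirror about x = y swaps the x- and y-components of the field.
M = np.array([[0, 1],
              [1, 0]])

# An arbitrary (bianisotropic) right-handed Jones matrix with t_xy != t_yx
J_RH = np.array([[0.8 * np.exp(1j * 0.3), 0.4 * np.exp(1j * 1.1)],
                 [0.4 * np.exp(1j * 2.0), 0.7 * np.exp(1j * 0.9)]])

# Jones matrix of the mirrored ("left-handed") structure
J_LH = M @ J_RH @ M

# The mirror exchanges the roles of x and y: the diagonal elements swap,
# and the two cross-polarization elements are interchanged.
print(np.allclose(J_LH[0, 1], J_RH[1, 0]))  # t_xy of LH equals t_yx of RH
print(np.allclose(J_LH[0, 0], J_RH[1, 1]))  # t_xx of LH equals t_yy of RH
```

This interchange of the two cross-polarization elements is exactly why including both handednesses in the library doubles the accessible phase combinations.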
Next, we perform full-wave simulations (COMSOL Multiphysics) to obtain the Jones matrix elements of the right-handed structure when we scan the geometric parameters. The results are shown in Figure 1b for normal incidence (along the positive z-direction) and a fixed wavelength of 1550 nm. The scanning ranges of the geometric parameters in the full-wave simulations are: w from 187.5 to 237.5 nm (in steps of 5 nm) and both Δx and Δy from −125 to 125 nm (in steps of 12.5 nm). From the interpolated results, we can see that the argument of each Jones matrix element in the LP basis (e.g., t_xy means the transmission coefficient from incident y- to x-polarization) can cover the full range of 2π, and also that t_xy is now significantly different from t_yx, as needed to generate the most general form of polarization holograms. As the argument of t_yy does not change much over the whole family, we add a parameter θ, the orientation of the nanostructure, obtained by rotating the structure in the counter-clockwise direction. In this case, the Jones matrix of the rotated (right-handed) structure is

J^RH(w, Δx, Δy, θ) = R(θ) J^RH(w, Δx, Δy) R(−θ), with R(θ) = [cos θ, −sin θ; sin θ, cos θ],

and the Jones matrix before rotation (with the phases shown in Figure 1b) is

J^RH(w, Δx, Δy) = [t_xx^RH(w, Δx, Δy), t_xy^RH(w, Δx, Δy); t_yx^RH(w, Δx, Δy), t_yy^RH(w, Δx, Δy)].

The superscript "RH" on the Jones matrix elements indicates that they are defined for the right-handed structure. The whole library also consists of the left-handed structures (with superscript "LH"), with their Jones matrix related to that of the right-handed structures as

J^LH(w, Δx, Δy) = [0, 1; 1, 0] J^RH(w, Δx, Δy) [0, 1; 1, 0],

in order to flip the phase relationship between t_xy and t_yx due to the fixed handedness of the structure. Now, both the left-handed and right-handed structures with different geometric parameters w, Δx, Δy, and θ form the whole metamaterial/nanostructure library whose structures can be used to generate holograms in the next stage. We have four continuously tunable geometric parameters, on purpose from the bianisotropic structure, to control the target phase-space: four transmission phases in different
polarization channels. However, whether the library is enough to browse the whole phase-space is not yet clear at this stage; the question can be addressed later with the integrated neural network. For more details, we have also shown the amplitudes and the phases of the Jones matrix at a fixed w = 202.5 nm (and θ = 0°) in Figure S1 (Supporting Information). In fact, the arguments of the Jones matrix elements cover the full 2π range through local resonances, which can also be revealed through the fluctuation of the amplitudes. It is found that, on average, t_xy and t_yx have smaller amplitudes than t_xx and t_yy. Such amplitudes will also be taken into account as a constraint in the whole algorithm for generating holograms. For completeness, the Jones matrix in the circular polarization (CP) basis can be expressed from the one in the LP basis as

J_CP = Λ⁻¹ J_LP Λ, with Λ = (1/√2) [1, 1; i, −i],

in order to generate polarization holograms in the CP basis. Integrated Deep Neural Network for Complex Polarization Holograms With the nanostructure library in place, we move forward to establish an integrated deep neural network to design metasurface holograms. The schematic of the network is shown in Figure 2, which integrates an existing DeepCGH network [46] for designing scalar holograms (with only a profile of transmission phase) with an inverse-design component for the nanostructures. In essence, the whole network turns an input target hologram of amplitude {B_ki} into output geometric parameters {w, Δx, Δy, θ, R/L} of all the nanostructures (a total number of n²) on the metasurface; in the reverse process, the resulting Jones matrix profile is Fourier transformed (F) to the reconstructed hologram {B′_ki}. A primed notation is used for the reconstructed quantities. The whole network, i.e., the weights of the far-field predictor network and those of the encoder network, is trained in an unsupervised fashion, with the loss function being the geometric error (GE) between the target and reconstructed holograms, the same loss function adopted in the existing DeepCGH
network [46], which is now extended by the incorporation of the nanostructure library: the encoder does the inverse design from Jones matrix elements to geometric parameters, and the decoder, a forward deep-learning-based surrogate network, is pre-trained with supervised learning on the full-wave simulation results of Figure 1 before the training of the integrated network. Hereafter, we call this extension with the inverse-design (ID) component (the orange dashed box) the DeepCGH-ID network. The decoder is constructed with four fully connected hidden linear layers, with structure 7-200-200-100-100-8. We note that the cyclic variable θ is replaced with {cos θ, sin θ} and the discrete value R/L by a one-hot vector {1,0}/{0,1} for data representation, to facilitate training. These account for the input dimension of 7, together with w, Δx, and Δy. The four Jones matrix elements, with real and imaginary parts, constitute the output dimension of 8. The activation functions of the first five linear layers and the last layer are the exponential linear unit (ELU) function and the hyperbolic tangent (tanh) function, respectively. 300k sets of data are generated from full-wave simulations (around 41 h), with 234k sets used for training, 26k sets for validation, and the remainder for testing. Mean square error (MSE) is used as the loss function in training, with the Adam optimizer and a learning rate of 0.0005. In the testing phase of the decoder, the Pearson correlation coefficients (PCCs) between the ground-truth and the predicted Jones matrix elements (including real and imaginary parts) are all higher than 0.999, indicating the validity of replacing the full-wave simulations (COMSOL) with the pre-trained decoder for calculating Jones matrix elements, to obtain a higher computational efficiency for the training of the integrated network in the next stage.
From the perspective of deep learning, the autoencoder latent space that couples the encoder and decoder together represents a low-dimensional projection of the training dataset. The latent space constitutes the set of all possible geometric parameters of the metasurfaces. The corresponding training data fed into the encoder and out of the decoder are the required Jones matrix phase profiles. The encoder (four hidden linear layers in our implementation) solves an inverse problem by projecting the training data onto the latent space, i.e., by converting the phase profiles to the geometric parameters. Similarly, the decoder solves a forward problem by expanding the latent space into the training data, i.e., by mapping geometric parameters to Jones matrix elements. For inverse design problems of metasurfaces, there may exist multiple structures that generate nearly the same optical response. Unlike the forward network (decoder), it is hard to train the inverse network (encoder) directly in a supervised learning approach, as the presence of conflicting labeled data makes convergence difficult. By combining an encoder and the pre-trained decoder together in a tandem-like architecture, this non-uniqueness issue can be mitigated.
[51] In our scheme, such an autoencoder is then further embedded into the integrated network (DeepCGH-ID), which evaluates the loss on the hologram quality directly. The training then optimizes the far-field predictor network and the encoder together. The advantage of this further-embedding approach is that the entire integrated network is trained directly towards the goal of obtaining the best required holograms, while the inverse function (encoder) is optimized at the same time. More details about the DeepCGH-ID network are shown in Figure S2 and Table S1 (Supporting Information). On the other hand, unlike the conventional GS algorithm for generating holograms, the current approach applies constraints on both the amplitude and the phase of the metamaterial library, so that the holograms constructed during training are more realistic. Using only the phase profiles in the training process would reduce the quality of the holograms (please see Section S4 and Table S2, Supporting Information for more details).
Performance of the Integrated Deep Neural Network

Here, we generate 500 configurations of polarization holograms of dice patterns, with nine white dots turned on or off randomly and independently. Each set of polarization holograms consists of 4 independent holograms, one for each of the 4 combinations of polarization conversion channels. 450 of them are used as training data and 50 as validation data for the integrated network. In fact, we do not need a very large dataset for the integrated network, as the network has a capability for extrapolation, as shown later. Each hologram of a polarization conversion channel has a size of 64 by 64 pixels. We can then test both the interpolation and extrapolation capability of the network. In the training process, the learning rate is initially 2 × 10−6 and exponentially decays to 2 × 10−7 at the end of the training. Two independent integrated networks are trained separately (but with the same nanostructure library) for generating the polarization holograms in the LP and CP bases. After the network is well trained, we first select two different configurations (dice and numbers patterns), not included in the training data, to test the network, as shown in the first row in Figure 3a,b. By feeding the test data to the whole network, the metasurface designs (geometric parameters) for the two configurations of target holograms are obtained from the encoder as the output. Finally, the generated holograms are calculated via the Fourier transform of the Jones matrix profiles selected from the nanostructure library using the geometric parameters.
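The final step, obtaining a far-field hologram from a complex transmission profile, can be sketched in a few lines of numpy. The amplitude and phase values below are random stand-ins for an actual library lookup, used only to show the Fraunhofer (Fourier-transform) relation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # hologram size used in the paper

# Hypothetical complex transmission profile t_k(x, y) for one polarization
# channel, as would be looked up from the nanostructure library using the
# geometric parameters returned by the encoder.
amplitude = rng.uniform(0.5, 1.0, (n, n))
phase = rng.uniform(0, 2 * np.pi, (n, n))
t = amplitude * np.exp(1j * phase)

# Far-field hologram: intensity of the Fourier transform of the
# near-field transmission profile (Fraunhofer approximation).
far_field = np.fft.fftshift(np.fft.fft2(t))
hologram = np.abs(far_field) ** 2
hologram /= hologram.max()               # normalize for display/PCC

print(hologram.shape)
```

Each of the four polarization channels would repeat this with its own Jones matrix element t_k.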
Figure 3 shows the testing results of the two sets of polarization holograms. The first row lists the target holograms, and the second (third) row the generated holograms in the LP (CP) basis. Each column represents one of the four polarization channels, with the first (second) orange arrow drawn on the same figure denoting the analyzing (incident) polarization. The generated polarization holograms are clear and have low crosstalk between the holograms of different polarization conversion channels (please see Section S6 and Table S3, Supporting Information for more details). The corresponding PCCs between the generated and the target holograms are all larger than 0.74 in the LP (CP) case. The transmission efficiencies for the holograms of each dice pattern (numbers pattern) in the LP case are, from left to right, 39.2%, 7.8%, 14.9%, and 45.0% (36.7%, 8.6%, 19.4%, and 42.0%), respectively. The lower transmission efficiency of the holograms in the cross-polarization channels in the LP basis is due to the smaller transmission amplitudes of the cross-polarization elements (see Figure S1, Supporting Information). The generated metasurface design and the distribution of its geometric parameters are shown in Figure S4 (Supporting Information). For the polarization holograms in the CP basis, the transmission efficiencies for each channel of the dice patterns (numbers patterns) are 22.2%, 31.4%, 29.0%, and 21.5% (21.6%, 32.0%, 29.0%, and 20.8%), respectively, showing less differentiation between co-polarization and cross-polarization channels. On the other hand, we can also observe that the quality of the generated holograms for the numbers patterns is only slightly inferior to that for the dice patterns, with slightly smaller PCCs. These results show the extrapolation capability of the integrated network, as the training data are in a similar style to those of Figure 3a but distinctly different from those of Figure 3b. For more general testing, we have also generated 100 configurations of the
polarization holograms of dice patterns, again with nine white dots turned on or off randomly and independently, as testing data. For LP holograms, the mean PCCs for the four channels (xx, xy, yx, and yy) are 0.87, 0.82, 0.81, and 0.89, respectively. For CP holograms, the mean PCCs for the four channels (LL, LR, RL, and RR) are 0.84, 0.85, 0.89, and 0.82, respectively, confirming the results shown in Figure 3a. This may speed up the whole design process when a large number of metasurface designs are needed. Now, we compare the DeepCGH-ID network to a conventional approach based on the GS algorithm, given the same generic nanostructure library. In this case, the conventional approach uses the GS algorithm to obtain the four independent transmission phase profiles (Jones matrix) required on the metasurface. Then, at each nanostructure location on the metasurface, we choose an optimal structure from the library to fit all four transmission phases as closely as possible. We train separately an encoder-decoder pair as in Figure 2, i.e., training the encoder with the previously pre-trained decoder to minimize the MSE between the input phases (to the encoder) and output phases (from the decoder). The encoder can then be used for this optimization, and we call the overall process the "GS+Encoder" approach. The results of the comparison in terms of PCCs are shown in Figure 4a for the previous 8 holograms of different polarization conversions of the dice patterns. In this case, the PCCs of the generated holograms from the DeepCGH-ID method are higher than those of "GS+Encoder" on average. On the other hand, we now start to turn off some of the available geometric parameters in the designs. Figure 4b shows the results when we turn off w by setting it to a constant 202.5 nm (i.e., Figure S1, Supporting Information), leaving no choice, while Figure 4c shows the results when we further turn off θ by requiring it to be always zero. The PCCs decrease for both methods as fewer geometric parameters
are available for designing the metasurface structures, while the "GS+Encoder" method has PCCs falling off more significantly. Such a trend can be explained from the starting point of our whole design process. We have deliberately taken a "complication" approach, choosing a complex structure (in Figure 1) to guarantee a significant effect of bianisotropy and hence very different values of the Jones matrix elements against the geometric parameters and among the four different elements. The mild difference between DeepCGH-ID and "GS+Encoder" actually reveals that our choice of structure is complicated enough to browse the whole phase-space of the four transmission phases using four geometric parameters, so the global optimization approach (in the integrated network) gives only some advantage there. Our machine-assisted algorithm can cope with the complexity of the design phase-space. When the number of available geometric parameters decreases, we no longer have enough DOFs to browse the phase-space with individual nanostructures, and we need a collaborative effect from different nanostructures to construct the polarization holograms. In these cases, the DeepCGH-ID method shows PCCs much higher than the other approach. We note that the DeepCGH-ID method is generic, as only the library needs to be constrained and we do not necessarily need to know whether the library is rich enough or not. In addition to the "GS+Encoder" method, we also compared DeepCGH-ID with another conventional approach without machine learning. We obtain the required phase profiles from the GS algorithm first, and then select the best-matching structure for each unit cell from the material library, i.e., the one with the lowest error (please see Section S7 and Table S4, Supporting Information for more details). DeepCGH-ID generates the polarization holograms with higher PCCs on average than this conventional approach, and is also 40 times faster than the GS algorithm.
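As a minimal, self-contained sketch of the conventional baseline discussed above (not the paper's code): the Gerchberg-Saxton loop below retrieves a pure-phase profile for one channel under an assumed uniform unit illumination, and the same Pearson correlation coefficient (PCC) used in the comparisons scores the reconstruction. The square target pattern is an illustrative stand-in for a dot pattern.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Retrieve a pure-phase near-field profile whose far-field amplitude
    approximates target_amp (uniform unit illumination assumed)."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)
        field = np.exp(1j * np.angle(near))             # impose unit amplitude
    return np.angle(field)

def pcc(a, b):
    """Pearson correlation coefficient between two images (flattened)."""
    return np.corrcoef(np.ravel(a), np.ravel(b))[0, 1]

# A 64x64 target with one bright square, mimicking a simple dot pattern.
target = np.zeros((64, 64))
target[20:44, 20:44] = 1.0

phi = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phi)))
score = pcc(target, recon / recon.max())
```

In the "GS+Encoder" baseline this phase retrieval is run once per polarization channel, after which the encoder maps the four retrieved phases to library structures; DeepCGH-ID instead optimizes everything through one loss on the final holograms.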
This "complication" route guarantees that the algorithm is able to find suitable designs fulfilling a specific functionality. In the current work, we make this "complication" route possible by developing an integrated deep neural network that works directly with a given generic nanostructure library. In practice, for a more general consideration, we may expect an existing library of metamaterials/nanostructures that can be fabricated with existing facilities without difficulty. The complexity will then lie in the different possible structures instead of the single complex structure used in this work. Our proposed deep learning method is not only a surrogate solver (the decoder in our whole model): the network provides both hologram generation and inverse design capabilities and, as such, requires no prior physical knowledge. It can also be easily extended to other structures and situations. For example, our approach is applicable in such a situation by simply replacing our existing library of bianisotropic nanostructures with another library (please see Figure S3, Supporting Information for more details). Further extensions of the current approach can also generate holograms with more specifications, e.g., going from phase holograms to amplitude-plus-phase holograms, and from polarization multiplexing to orbital-angular-momentum multiplexing. In Figure S5 (Supporting Information), we also demonstrate the generation of vectorial holograms with both the required amplitude and polarization behaviors, by controlling the complex far-field profiles rather than only the far-field amplitude profiles (please see Section S8 and Figure S5, Supporting Information for more details). Compared with conventional physical-principle approaches, the deep learning method requires less physical knowledge to achieve the metasurface design, by just adding the phase consideration in the training process.
Finally, we discuss polarization conversion channels more complicated than the ones demonstrated. For example, we can have polarization conversion channels xx, Rx, yy, and Ly for the four holograms. In this case, we only need to modify the previously pre-trained decoder by adding an additional linear layer for the change of basis. Figure 5 shows the target patterns on the first row and the generated ones on the second row. While we have shown the possibility of generalizing the algorithm to more complicated situations, further investigation can be performed on the optimal or more general choices of the polarizations.

Conclusion

In conclusion, to fully utilize polarization holograms, we start with a possible metamaterial or nanostructure library as the template. By varying the tunable geometric parameters, a rich enough library guarantees a large range of phases and differences between the cross-polarization elements in the Jones matrix. We developed a machine-assisted approach to accelerate the metasurface design process by integrating both the hologram design procedure and the inverse design of nanostructures into the same neural network. Without detailed physical knowledge of the nanostructure constraints, this integrated network method can generate metasurface designs directly from the required complex polarization holograms. Our information-driven (machine-learning) approach enables a more systematic and automated process for designing metasurface holograms with high quality and efficiency. Our approach can be extended in a straightforward manner to more degrees of freedom in multiplexed holograms, e.g., OAM, or to metamaterial structures other than the one used in this work.
Figure 1. Bianisotropic structural units. a) Schematic of the unit cell with the silicon-made "fin" structure embedded in glass in a square lattice of periodicity a = 1000 nm. The structure has two layers of L-shaped bars (width w and thickness h = 250 nm) connected by a square pillar (width w and thickness t = 125 nm) in between. The square pillar has a shift of Δx and Δy from the vertical middle planes of the unit cell (the dashed frame). All bars have width w. The total length of the "fin" along the y-direction is l = 637.5 nm. The upper panel shows the right-handed structure, while the lower panel shows its mirrored structure (with mirror plane x = y), defined as the left-handed structure. b) The phases of the Jones matrix in LP obtained by scanning the parameters w, Δx, and Δy (with dimensions in nm) from full-wave simulations at normal incidence along the z-direction at a fixed wavelength of 1550 nm.

Figure 2. Integrated deep neural network to design metasurface holograms: turning input target holograms {B_ki} (upper green box) into the output geometric parameters of the n² nanostructures on the metasurface ({w, Δx, Δy, θ, R/L}) (the cyan box). Subscript k iterates the polarization channels (xx, xy, yx, yy or LL, LR, RL, RR for LP or CP holograms). Subscript i iterates the pixels on the hologram. The decoder is pre-trained with supervised learning to turn geometric parameters into the Jones matrix elements (transmission coefficients) from the full-wave simulation results based on Figure 1. The integrated network trains the far-field predictor network and the encoder network with unsupervised learning. The encoder-decoder pair (orange dashed box) is an autoencoder variant that performs the inverse design of nanostructures from Jones matrix elements. It is further embedded into the integrated network as an autoencoder turning target holograms into geometry and back into reconstructed holograms, with the loss function being the reconstruction error.

Figure 3.
Testing results of two sets of polarization holograms on a) the dice patterns (an interpolation example) and b) the numbers patterns (an extrapolation example). The first row lists the target holograms. The second (third) row lists the generated holograms in the LP (CP) basis. Each column shows one of the four polarization channels, with the first (second) arrow indicating the analyzing (incident) polarization. A horizontal (vertical) arrow means x (y) polarization. A clockwise (counterclockwise) arrow means right (left)-handed CP. PCCs between the reconstructed and the target holograms are shown in the upper left corner of each hologram.

Figure 4. Comparison between the DeepCGH-ID and "GS+Encoder" methods against different numbers of geometric parameters. The horizontal and vertical axes denote different polarization channels and Pearson correlation coefficients (ρ), respectively. The orange and blue bars show the results from the DeepCGH-ID and "GS+Encoder" approaches, respectively.

Figure 5. Testing results of polarization holograms on the dice patterns with specifications for the xx, Rx, yy, and Ly polarization conversion channels. The second row lists the generated holograms as results. The PCCs are shown in the upper left corner, while the first (second) arrow represents the output (input) polarization.
Such a process is shown in the upper part of the diagram. The target hologram {B_ki} is transformed by the far-field predictor network to guess the far fields with phases {B_ki exp(iφ_ki)} at the hologram, which is inverse Fourier transformed. To formulate such a process, the whole network is trained as an autoencoder of the holograms with the reverse process. The lower part of the diagram starts from the geometric parameters feeding into a decoder network, which transforms them into Jones matrix elements {t′_ki}. Subscript k iterates the polarization conversion channels: xx, xy, yx, yy for an LP hologram or LL, LR, RL, RR for a CP one, and subscript i iterates the pixels on the hologram. R/L is discrete, indicating whether the structure is right-handed or left-handed. {t′_ki} is then

Adv. Optical Mater. 2024, 12, 2202663
Association of Serum Thyroid Hormones with the Risk and Severity of Chronic Kidney Disease Among 3563 Chinese Adults

Background: Chronic kidney disease (CKD) is a global health problem with an increasing prevalence. We explored the association of serum thyroid hormones with the risk and severity of CKD among Chinese adults.

Material/Methods: This retrospective study involved 3563 participants. CKD was diagnosed according to the clinical practice guidelines of the 2012 Kidney Disease Improving Global Outcomes guidelines. Effect-size estimates are expressed as odds ratio (OR) and 95% confidence interval (CI).

Results: Given the strong magnitude of correlation, only 3 thyroid hormones were analyzed: free triiodothyronine (FT3), free thyroxine (FT4), and thyroid-stimulating hormone (TSH). After propensity score matching on age, sex, diabetes, and hypertension, per 0.2 pg/mL increase in FT3 was significantly associated with a 35-38% reduced risk of CKD at stages 1-4; per 0.3 ng/dL increase in FT4 was only significantly associated with a 21% reduced risk of CKD at stage 5 (OR, 95% CI: 0.79, 0.69-0.89); and per 0.5 μIU/mL increment in TSH increased the risk of CKD stage 5 by 8% (1.08, 1.02-1.14). Importantly, the 3 thyroid hormones acted interactively, particularly the interaction between FT3 and FT4 in predicting CKD at stage 5 (OR, 95% CI: 1.81, 1.30-2.55 for high FT3-low FT4; 17.72, 7.18-43.74 for low FT3-high FT4; and 22.28, 9.68-51.30 for low FT3-low FT4).

Conclusions: Our findings indicate that serum FT3 can be used as an early-stage biomarker for CKD, and FT4 and TSH can be used as advanced-stage biomarkers among Chinese adults.

Background

Chronic kidney disease (CKD) is a global health problem that has reached epidemic proportions, with an estimated prevalence rate of 8-16% [1]. Currently, CKD is gaining increased attention as a potential driver of cardiovascular and cerebrovascular diseases [2,3].
In China, CKD affects approximately 10.8% of adults [4], resulting in disability and a heavy socioeconomic burden [5,6]. There are several known risk factors for CKD, such as diabetes and hypertension [7][8][9]. Clinical investigations have revealed substantial variation in the onset and progression of CKD that cannot be fully explained by preexisting risk factors [10,11]. Thus, efforts are needed to identify more potential risk factors in the development of CKD. It is widely recognized that thyroid hormones are of clinical and public health importance for renal physiology and functional development [12,13]. From an epidemiological perspective, a nationally representative cohort of U.S. patients with moderate-to-severe CKD showed an inverse association between estimated glomerular filtration rate (eGFR) and the risk of hypothyroidism [14]. In addition, a retrospective cohort study in a Taiwanese population indicated that higher concentrations of thyroid-stimulating hormone (TSH) were associated with a greater risk of subsequent CKD [12]. Moreover, another prospective cohort study in middle-aged and elderly Shanghainese adults revealed that high free thyroxine (FT4), but not TSH and free triiodothyronine (FT3), was associated with an increased risk of incident CKD and rapid eGFR decline [15]. The reasons for these inconsistencies are multiple, likely due to heterogeneous study populations, different study designs, or unaccounted-for residual confounders. Moreover, a literature search revealed sparse data directly comparing thyroid hormones across the various CKD stages in the current medical literature.
To fill this gap in knowledge and generate more information for future studies, we developed the hypothesis that abnormal serum thyroid hormones are potential risk predictors for CKD among Chinese adults, and tested it in a retrospective study by assessing the association of 5 thyroid hormone biomarkers with the risk and severity of CKD, both individually and interactively, via propensity score matching analysis.

Study Participants

This was a hospital-based retrospective study. All study participants were Han Chinese adults aged 18 to 80 years at the time of enrollment from the Department of Endocrinology and Department of Nephrology at China-Japan Friendship Hospital, and they were consecutively recruited during the period January 2010 to December 2018. The study protocol was approved by the Ethics Committee of China-Japan Friendship Hospital and was in accordance with the principles of the Declaration of Helsinki. All participants in this study gave their written informed consent. Initially, 5294 participants were recruited, and 1731 of them were excluded for the following reasons: (i) being under dialysis or after kidney transplantation; (ii) missing essential information, including serum creatinine and urinary albumin-to-creatinine ratio (ACR); (iii) taking medication affecting thyroid function before hospitalization, such as thyroxine, anti-thyroid drugs, glucocorticoids, or antiepileptic and contraceptive drugs; and (iv) being pregnant or having diabetic ketoacidosis, acute cardiovascular events, or other severe disorders including tumors. Thus, 3563 eligible patients with complete data remained in the final analysis. Among these 3563 eligible patients, comprising 2396 males and 1167 females, 289 were clinically confirmed to be free of CKD and formed the control group, while 3274 were diagnosed with CKD and formed the case group.
In this present study, we did not attempt to define persistent proteinuria, because a majority of patients had proteinuria measured only once.

Clinical and biochemical indexes

Hypertension and diabetes were diagnosed at the time of enrollment. Hypertension was defined as systolic blood pressure ≥140 mmHg, diastolic blood pressure ≥90 mmHg, or previous treatment with antihypertensive drugs [22]. Diabetes was defined as fasting plasma glucose ≥7.0 mmol/L [23] or taking hypoglycemic drugs or receiving parenteral insulin therapy.

Statistical analyses

The χ² tests for categorical data and Wilcoxon rank sum tests for continuous data were used to assess whether baseline characteristics differed between the 4 CKD stage groups and the control group. Spearman correlation analyses were used to assess the relationships among the 5 thyroid hormones, owing to their skewed distributions. Logistic regression analyses were conducted to assess the association of thyroid hormones with the risk and severity of CKD at a significance level of 5%, before and after adjusting for confounders, as well as after applying the propensity score matching method to reduce selection bias by equating groups on confounding factors. Effect-size estimates are expressed as odds ratio (OR) and 95% confidence interval (95% CI). Prediction accuracy gained by adding significant thyroid hormones was assessed from both discrimination and calibration viewpoints. In this study, net reclassification improvement (NRI) and integrated discrimination improvement (IDI) [24,25] were calculated to judge the discrimination capability of significant thyroid hormones. Calibration capability was evaluated using the −2 log-likelihood ratio test, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC) to see how closely the prediction probability after the addition of significant thyroid hormones reflected the actual observed risk, and to assess the global fit of the modified risk model [26].
Moreover, the net benefit of the addition was also inspected by decision curve analysis [27]; in this curve, the X-axis denotes thresholds for CKD risk, and the Y-axis denotes net benefits at different thresholds. Statistical analyses were completed using STATA software Release 14.1 (Stata Corp, TX, USA). Wherever appropriate, statistics were adjusted for multiple comparisons using a Bonferroni correction. A P value of less than 0.05 was considered statistically significant.

Table 1 shows the baseline characteristics of the study participants. CKD patients at stages 1-2, stage 3, and stage 4 were older than controls (P<0.001), but CKD patients at stage 5 were younger (P<0.001). Except for stage 3 (P=0.308), the sex composition differed significantly between CKD patients at different stages and controls (P<0.001). As for thyroid hormones, concentrations of T3, T4, FT3, and FT4 were significantly lower in CKD patients at different stages than in controls (P<0.05); TSH concentrations were significantly lower in CKD patients at stages 1-2 (P<0.05), but significantly higher in CKD patients at stage 4 and stage 5 (P<0.05), than in controls.

Correlation analyses

The correlation plot of the 5 thyroid hormones under study is presented in Figure 1. Due to the strong correlation between T3 and FT3 (Spearman correlation coefficient r: 0.939, P<0.0001) and between T4 and FT4 (Spearman correlation coefficient r: 0.711, P<0.0001), only FT3 and FT4 were retained in the following analyses.

Risk prediction for CKD

Prediction of the 3 thyroid hormones (FT3, FT4, and TSH) for the risk of CKD at different stages, before and after propensity score matching analysis, is provided in Table 2. Based on Bonferroni correction for 12 comparisons, associations were considered significant at a P value of less than 0.004.
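Two bits of arithmetic behind these thresholds and effect sizes can be checked directly. The Bonferroni threshold follows from 0.05/12; the conversion of a per-increment OR to other increments is an illustrative assumption using the FT4 figures reported later (OR 0.79 per 0.3 ng/dL), not a computation from the study data:

```python
import math

# Bonferroni-corrected significance threshold for 12 comparisons.
alpha = 0.05 / 12            # ~0.00417, reported as "less than 0.004"
print(round(alpha, 3))

# An OR reported per d-unit increment implies a logistic coefficient
# beta = ln(OR) / d, and hence an OR of exp(beta * d') for any other
# increment d'.  Illustrative example with the paper's FT4 figures:
beta_ft4 = math.log(0.79) / 0.3              # per 1 ng/dL coefficient
or_per_unit = math.exp(beta_ft4 * 1.0)       # OR per 1 ng/dL (illustrative)
or_per_03 = math.exp(beta_ft4 * 0.3)         # recovers the reported 0.79
```

This rescaling is why per-increment ORs can be compared across hormones with very different measurement units.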
After balancing age, sex, diabetes, and hypertension between cases and controls through propensity score matching analysis, per 0.2 pg/mL increment in serum FT3 was significantly associated with a 35-38% reduced risk of CKD at stages 1-4, and the reduction in association with CKD stage 5, albeit marginally significant, was only 2%. With regard to serum FT4, a per 0.3 ng/dL increase was only significantly associated with a 21% reduced risk of CKD at stage 5 (OR: 0.79, 95% CI: 0.69 to 0.89, P<0.001). By contrast, per 0.5 μIU/mL increment in serum TSH increased the risk of CKD stage 5 by 8% (OR: 1.08, 95% CI: 1.02 to 1.14, P=0.003).

[Table 1 note: P values are calculated by nonparametric Wilcoxon rank sum tests for continuous variables expressed as median (interquartile range) and χ² tests for categorical variables expressed as count and percent. TG, triglycerides; TC, total cholesterol; LDL-C, low-density lipoprotein cholesterol; HDL-C, high-density lipoprotein cholesterol; HbA1c, hemoglobin A1c; UA, uric acid; Scr, serum creatinine; ACR, albumin-to-creatinine ratio; BUN, blood urea nitrogen; T3, total triiodothyronine; T4, total thyroxine; FT3, free triiodothyronine; FT4, free thyroxine; TSH, thyroid-stimulating hormone. * P<0.05; ** P<0.01.]

… and UA) for CKD at different stages. From the calibration aspect, the reduction in both AIC and BIC statistics was greater than 10 after adding each of the 3 thyroid hormones to the basic model across all CKD stages. Additionally, likelihood ratio tests revealed statistical significance for FT3 and FT4 at stages 1-4, and for TSH at stages 4-5, at a level of 4%. From the discrimination aspect, both NRI and IDI indicated that the addition of FT3 to the basic model was significant at stages 1-5, FT4 at stages 3-5, and TSH at stages 1-2 and 4-5 (P<0.004), which was further confirmed by decision curve analysis (Figure 2).
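The net benefit plotted in a decision curve is NB = TP/n − (FP/n) × pt/(1 − pt), where pt is the risk threshold. A toy check with hypothetical labels and predicted risks (not the study data):

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating patients whose predicted risk exceeds
    `threshold` (decision curve analysis)."""
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Hypothetical toy data: a well-calibrated model versus "treat everyone".
y = np.array([1, 1, 0, 0, 0])
p_model = np.array([0.9, 0.8, 0.1, 0.1, 0.1])
p_all = np.ones(5)                          # "treat all" reference line

nb_model = net_benefit(y, p_model, 0.2)     # 2/5 = 0.4
nb_all = net_benefit(y, p_all, 0.2)         # 2/5 - (3/5)*(0.2/0.8) = 0.25
```

Sweeping `threshold` over a range of plausible risk cutoffs and plotting both quantities reproduces the kind of curve referenced in Figure 2.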
Interaction of thyroid hormones

In view of the significant individual associations of the 3 thyroid hormones with CKD risk, further interaction exploration was conducted (Table 4). To facilitate interpretation, each thyroid hormone was binarized according to its median value among all study participants and classified into high and low groups accordingly. To avoid false-positive associations, Bonferroni P values of less than 0.004 were considered significant in the interaction analysis.

Discussion

In this retrospective study, we aimed to test the hypothesis that serum thyroid hormones are potential predictors of the risk and severity of CKD among Chinese adults. Our findings support this hypothesis: serum FT3 may serve as an early-stage biomarker for CKD, and FT4 and TSH as advanced-stage biomarkers among Chinese adults. Moreover, it is also worth noting that these 3 thyroid hormones may act interactively in the predisposition to developing CKD. To the best of our knowledge, this is the first study to evaluate the association between thyroid hormones and CKD severity among Chinese adults. The involvement of thyroid hormones in the development of CKD is biologically plausible. It is well known that thyroid hormones play an important role in differentiation, growth, and metabolism, and they are necessary for the normal function of virtually all tissues, including the kidneys [28,29]. Although the molecular mechanisms behind the association between thyroid hormones and CKD are not yet completely understood, it has been reported that thyroid hormones affect renal function via pre-renal or direct renal effects on cardiac output and systemic vascular resistance, which further regulate renal blood flow [29][30][31].
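The median-split grouping used in the interaction analysis above can be sketched with hypothetical hormone values (the distributions and sample size are illustrative, not the study data):

```python
import numpy as np

rng = np.random.default_rng(1)
ft3 = rng.normal(3.0, 0.5, 200)      # hypothetical FT3 values (pg/mL)
ft4 = rng.normal(1.2, 0.2, 200)      # hypothetical FT4 values (ng/dL)

# Binarize each hormone at its median ...
hi3 = ft3 >= np.median(ft3)
hi4 = ft4 >= np.median(ft4)

# ... and form the four joint exposure groups used in interaction
# analysis (high-high as reference, then high-low, low-high, low-low).
group = np.select(
    [hi3 & hi4, hi3 & ~hi4, ~hi3 & hi4, ~hi3 & ~hi4],
    ["hi3-hi4", "hi3-lo4", "lo3-hi4", "lo3-lo4"],
)
counts = {g: int(np.sum(group == g)) for g in np.unique(group)}
```

Each non-reference group then enters the logistic model as an indicator variable, yielding the joint ORs reported for the FT3-FT4 interaction.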
There is also evidence that thyroid hormones affect renal clearance of a water load through their effects on GFR [32], and that they affect Na reabsorption at the proximal convoluted tubules, primarily by increasing Na/K ATPase activity [33] and tubular potassium permeability [34]. It is thus reasonable to speculate that unfavorable changes in serum thyroid hormones may worsen renal function, which then leads to the development and progression of CKD. Some studies have examined the association between thyroid hormones and CKD risk, and the results are not often reproducible. For example, in a prospective cohort involving older adults from the Netherlands, there was no detectable association between baseline thyroid hormone concentrations and changes in renal function during follow-up [35]. However, a large prospective study conducted in South Korea showed that normal-to-low levels of FT3 and normal-to-high levels of TSH were associated with an increased risk of incident CKD [36]. By contrast, in a cross-sectional study from China, TSH was negatively associated with eGFR, and high FT4 was associated with an increased risk of CKD in euthyroid individuals [37]. Another prospective study showed that high FT4, but not TSH and FT3, was associated with an increased risk of incident CKD and rapid eGFR decline in middle-aged and elderly Chinese [15]. Several factors could be behind these conflicting findings. The first might be heterogeneity across study populations. Mounting evidence suggests that genetic influences in CKD are polygenic [38], and different populations usually have different genetic profiles and diverse lifestyle patterns [39,40]. This is exemplified by differences between northern and southern Chinese, as wide geographic differences cause various discrepancies due to historical and cultural influences.
For instance, the average urinary sodium excretion in northern Chinese is nearly double that in southern Chinese, leading to averages of 7.4 and 6.9 mm Hg higher systolic and diastolic blood pressure [22], hypertension being an established risk factor for CKD [8,9]. The second reason might be related to unaccounted residual confounding, which might yield a possible selection bias. The third reason is that the contribution of any individual biomarker to CKD risk is likely to be small, in view of the complex nature of CKD development [41]. Most studies on the association between thyroid hormones and CKD were conducted in Western countries and among southern Chinese. In the present study, we focused on northern Chinese adults and employed the propensity score matching method to reduce selection bias by equating groups on age and sex when assessing the association between serum thyroid hormones and the risk and severity of CKD. Our findings revealed that serum FT3 and FT4 are protective factors against CKD susceptibility at different stages, and serum TSH is a risk factor for advanced CKD stages. In clinical practice, thyroid hormones can be easily measured, and thus might be useful in predicting the onset and progression of CKD. Another important finding of this study is that the 3 thyroid hormones we studied might act in an interactive manner, especially the interaction between FT3 and FT4 when predicting the risk of CKD stage 5. In most previous studies, thyroid hormones were investigated only individually, and their potential interactions were often overlooked. However, due to the multiple subgroups in the interaction analysis and the limited number of study participants, especially after propensity score matching, our findings should be considered preliminary and require confirmation in other independent populations. Several possible limitations should be acknowledged in this study.
First, the cross-sectional design precludes further comments on the cause-effect relationship between thyroid hormones and CKD, and all study participants were recruited from one center, requiring further external validation. Second, some unmeasured characteristics of the study participants, such as obesity, might confound the association of thyroid hormones with CKD risk and severity. Third, insufficient sample size or random measurement error may have limited our power to detect phenotype-disease associations. Fourth, all study participants were enrolled from a single hospital, which might have led to population stratification. Fifth, this study involved Chinese adults of Han ethnicity, and extrapolation to other ethnic or racial groups is restricted.

Conclusions

Despite these limitations, our findings indicate that serum FT3 may serve as an early-stage biomarker for CKD, and FT4 and TSH could be used as advanced-stage biomarkers among Chinese adults. Importantly, these thyroid hormones can act interactively in the predisposition to the development of CKD. Nevertheless, we hope that this study will serve not merely as another endpoint of research but as a beginning, establishing background data for further investigation of the contributory role of thyroid hormones in the pathogenesis of CKD and of their use as adjuvant therapy to control CKD progression.